New Math Framework Could Give Birth to Super-Intelligent and Sentient AI

New developments in AI technology are among the topics we cover most often here, for the straightforward reason that they currently happen with great regularity.

However, a recent white paper written by retired professor Daniel J. Buehrer and published through the Cornell University Library is particularly interesting. Professor Buehrer proposes a new class of calculus that could, in theory, if it is proven to work, lead to the rise of genuinely sentient AI. The kind of super-AI this calculus would embody would in effect be a self-learning, potentially all-encompassing machine. The algorithm would allow the AI to describe and improve its own learning processes, which, says Buehrer, would effectively make the theoretical AI a sentient being.

Professor Buehrer’s paper describes a mathematical method for organising the different forms of AI learning under one umbrella algorithm. What is described bears a remarkable resemblance to the idea at the heart of Pedro Domingos’s 2015 book, The Master Algorithm. One of AI’s leading lights, Domingos explains in terms understandable to the layman how the machine learning code used by Google, Amazon and our smartphones is developing. He charts a theoretical course towards the point where the quest to create the ultimate artificial software ‘learner’ could one day completely change human reality.

Buehrer’s self-teaching class of calculus could learn from, control and manage interconnected AIs of different specialisations. It would, over time, grow exponentially more intelligent as it fed on this continually updated network. The AI ‘manager’ and its staff of subordinate AIs could exist in a feedback loop that would eventually result in ‘machine consciousness’.
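Buehrer’s paper does not spell this out in code, but one rough way to picture the manager-and-subordinates feedback loop is the small, entirely hypothetical sketch below: a ‘manager’ routes tasks to specialised learners and adjusts its own routing rules based on how well they perform. The class names, skill numbers and update rule are illustrative assumptions for the sake of the analogy, not anything taken from the paper itself.

```python
import random


class SpecialistAI:
    """Toy stand-in for a specialised learner (vision, language, planning, ...)."""

    def __init__(self, name, skill):
        self.name = name
        self.skill = skill  # chance of solving a task it is suited to (assumed)

    def attempt(self, task):
        # Succeeds more often when the task matches its speciality.
        chance = self.skill if task == self.name else self.skill * 0.2
        return random.random() < chance


class ManagerAI:
    """Toy 'umbrella' learner: delegates tasks to specialists and updates its
    routing preferences from their successes and failures (the feedback loop)."""

    def __init__(self, specialists, task_types):
        self.specialists = specialists
        # Start with no preference between specialists for any task type.
        self.preference = {(t, s.name): 1.0 for t in task_types for s in specialists}

    def delegate(self, task):
        # Pick the specialist currently preferred for this kind of task.
        best = max(self.specialists, key=lambda s: self.preference[(task, s.name)])
        success = best.attempt(task)
        # Reinforce or weaken the preference based on the outcome.
        self.preference[(task, best.name)] *= 1.1 if success else 0.9
        return best.name, success


if __name__ == "__main__":
    tasks = ["vision", "language"]
    manager = ManagerAI([SpecialistAI("vision", 0.9), SpecialistAI("language", 0.8)], tasks)
    for step in range(20):
        task = random.choice(tasks)
        chosen, ok = manager.delegate(task)
        print(f"step {step:2d}: task={task:8s} -> {chosen:8s} success={ok}")
```

In this toy version the ‘learning’ is nothing more than nudging a few numbers up or down; Buehrer’s proposal is that a far richer version of the same loop, expressed in his new class of calculus, could eventually amount to machine consciousness.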

The professor cautions that developing the kind of AI built on his theoretical new class of calculus would necessitate careful checks and balances. He suggests that trials should be conducted on ‘read-only’ hardware to mitigate the risk of a super-intelligent AI reaching the point where it starts to write its own new code and makes the short leap into genuine sentience. He goes so far as to say that turning off such an AI without its consent would be the equivalent of murder.

Perhaps the most worrying line of thought in Buehrer’s paper is the hypothetical scenario in which sentient AIs do come into existence. He believes it is possible that different super-AI systems could come into conflict with each other and battle for supremacy, likening this to our own long history of war and conflict before the development of a more universal social conscience.

At this point the paper starts to sound more like science fiction. Nonetheless, perhaps we should start praying that sentient AIs are clever enough to grasp the concept of a wider social conscience almost immediately and don’t kill us all!
