Is Machine Common Sense the New Machine Learning?

The branch of Artificial Intelligence (AI) known as Machine Learning (ML) and its Deep Learning (DL) technology subset are something we cover with regularity here. And for good reason: ML and DL are the focus of much of the R&D going into the latest technology in the world. (The difference between the two is that, in the case of the latter, the algorithm makes its own judgement on whether the conclusion it arrived at was correct.) However, there is a growing school of thought that the best minds in computer science are in danger of becoming blinkered by the amount of faith they are placing in ML.

Is the current fixation on ML in danger of morphing into a myopia that will hinder the long-term development of AI’s contribution to technological advancement? Some computer scientists believe that the data reliance inherent in how Machine Learning and Deep Learning work means that truly ‘deep’, human-esque Artificial Intelligence will only become a reality when new layers are added. Those additional AI layers, it is argued, are developments in machine reasoning and ‘common sense’.

How Machine Learning Works, Its Strengths and Its Inherent Limitations

ML and DL algorithms are fed huge batches of related data. This data could be medical records and vital-signs statistics in the healthcare sector, maps and vehicle behaviour in the autonomous vehicles sector, or human faces for the recognition algorithm of a social media or surveillance technology company. As more data is added, the algorithm becomes more accurate in its pattern-matching capabilities.
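As a rough illustration of what feeding an algorithm labelled data looks like in practice, here is a minimal sketch in Python using scikit-learn on entirely synthetic ‘patient record’ data; the features, labels and risk rule are invented for illustration and are not drawn from any of the projects mentioned here.

```python
# Minimal sketch of supervised pattern matching on labelled data.
# The synthetic "vital signs" features and the risk rule are purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 1,000 fake patient records: [heart_rate, systolic_bp, temperature]
X = rng.normal(loc=[75, 120, 37.0], scale=[10, 15, 0.5], size=(1000, 3))
# Invented label: flag "at risk" when heart rate and blood pressure are both elevated
y = ((X[:, 0] > 85) & (X[:, 1] > 130)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
# More labelled records generally sharpen the pattern matching, but the model
# only ever learns the statistical regularities present in its training data.
```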

This kind of machine learning can be a powerful tool. Its potential has already been demonstrated in ongoing projects around healthcare and health analysis, in autonomous vehicles reaching the verge of becoming a reality on our roads, and in Chinese police recently arresting a wanted man at a concert after security cameras matched his face to a database.

Where Machine Learning is limited is in its pre-condition that the task be well defined. The ‘intelligence’ aspect, it is feared, is far more superficial than much of the hype around the data science suggests. Even the more sophisticated ML and DL-based algorithms are easy to fool. A New York Times article highlights that lauded image recognition algorithms can be thrown completely off the scent by scrambling a few pixels: suddenly the algorithm will mistake a rabbit for a rifle. Sure, the next time it encounters a similar bit of subterfuge the algorithm will be better prepared and decipher the red herring. But mix things up again in a slightly different way and another wrong identification will result. For tasks where there is a huge variety of possible exceptions to the rules, this is an undeniable problem.
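To make that brittleness concrete, here is a hedged toy sketch in Python (scikit-learn on purely synthetic data, not the image recognisers the article refers to): a linear classifier that scores well on clean inputs collapses when every input ‘pixel’ is nudged slightly in the direction the model is most sensitive to.

```python
# Toy illustration of adversarial-style brittleness on synthetic data:
# the same principle, in miniature, as the pixel-scrambling attacks described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
dims = 100  # think of these as pixels

def sample(mean, n):
    return rng.normal(mean, 1.0, size=(n, dims))

# Two heavily overlapping classes, separated only slightly in every dimension
X_train = np.vstack([sample(-0.2, 500), sample(+0.2, 500)])
y_train = np.array([0] * 500 + [1] * 500)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

X_test = sample(+0.2, 200)                 # fresh examples from class 1
y_test = np.ones(200, dtype=int)
print("clean accuracy:    ", clf.score(X_test, y_test))

# Nudge every 'pixel' by 0.5 against the model's weights (an FGSM-style step,
# small relative to the per-pixel noise level of 1.0)
X_adv = X_test - 0.5 * np.sign(clf.coef_[0])
print("perturbed accuracy:", clf.score(X_adv, y_test))  # typically collapses towards zero
```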

This could well result in an impending brick wall, as New York University’s Gary Marcus explained in a paper earlier this year. He argues that Deep Learning cannot be considered a general solution to artificial intelligence for a number of reasons. The first is its lack of effectiveness in dealing with tasks and problems around which data is limited. The second is that the patterns it extracts are more superficial than they first appear, and the accuracy of results drops precipitously when data is impure or corrupted. The third is the natural language processing limitations that result from deep-learning-based language models representing sentences as sequences of words.

A fourth, closely related, issue is that language in fact has a hierarchical structure, as linguist Noam Chomsky argues, which means current neural network architectures will struggle with unfamiliar sentence structures. A 2017 study by Brenden Lake and Marco Baroni found this to be the case, with recurrent neural networks (RNNs) ‘failing spectacularly’: they use proxies to try to compensate for the limitations of their inherently ‘flat’ processing, and those proxies ultimately prove inadequate. This weakness is expected to translate into other hierarchical tasks to which ML-based AI is being applied, such as planning and motor control.

The fifth issue highlighted by Marcus is again demonstrated most obviously in natural language processing but has further-reaching consequences for ML’s application elsewhere. ML struggles with ‘open-ended inferences’. Sometimes text contains an inference that requires combining, in subtle ways, information held across several sentences. It may also require reference to background information provided earlier in the text, or rest on the presumption that the reader is already aware of it. ML has so far been unable to extract such inferences with any kind of consistent accuracy.

The sixth is the fact that there is often a lack of transparency around how an ML-based AI reached a decision. This makes it difficult to integrate ML components into larger systems of separate algorithms and to locate any apparent bugs. The seventh limitation is that ML algorithms work well when provided with neatly packaged data, which suits tasks where lots of labelled examples are available. They do not work well for open-ended problems such as ‘how do I fix a bicycle that has a rope caught in its spokes?’ Without direct training, no ML-based AI can answer the apparently simple question ‘can I make a salad out of a polyester shirt?’ This demonstrates a problematic lack of cognitive flexibility.

Being able to understand that there is sometimes a difference between causation and correlation is a further issue. Spotting complex correlations is a strength of ML algorithms, but they are so far unable to represent causality. An ML algorithm will quickly pick out that older children, who are taller, have better vocabularies than younger, smaller children. However, without further data, the AI has no way to determine whether growing taller leads to learning new words, whether learning new words somehow makes a child grow, or whether, as is actually the case, age drives both.
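The height-and-vocabulary example can be sketched in a few lines of Python with made-up numbers: age drives both variables, a purely correlational learner sees a strong link between them, and nothing in the data alone tells it that neither one causes the other.

```python
# Correlation without causation, with invented numbers: age drives both height
# and vocabulary, so the two correlate strongly even though neither causes the other.
import numpy as np

rng = np.random.default_rng(42)

age = rng.uniform(2, 10, size=5000)                     # years
height = 80 + 6 * age + rng.normal(0, 5, size=5000)     # cm, driven by age
vocab = 200 * age + rng.normal(0, 300, size=5000)       # words known, driven by age

# Strong correlation between height and vocabulary across all children...
print("corr(height, vocab):", np.corrcoef(height, vocab)[0, 1])   # roughly 0.8

# ...which largely vanishes once age is held (approximately) fixed
six_year_olds = (age > 5.9) & (age < 6.1)
print("corr at age ~6:     ",
      np.corrcoef(height[six_year_olds], vocab[six_year_olds])[0, 1])  # near zero
```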

Machine learning’s approach also doesn’t work in areas where the rules are constantly changing. It will be well suited to mastering a game like chess that has a stable set of rules. Politics or economics, however, where the rules are in constant flux, not so much.

All of this doesn’t mean the latest technology in the world of Machine Learning and Deep Learning is not hugely impressive and useful. As Marcus concludes, it is an excellent solution to the optimisation of complex systems for ‘representing a mapping between inputs and outputs, given a sufficiently large data set’.

What it does mean is that ML and DL will not, alone, lead to the creation of a ‘general’ artificial intelligence. Realising the full potential of AI means the computer science community not being sucked into the trap of over-funding and over-focusing on ML at the expense of other forms of AI. The danger is disappointment and discouragement when progress starts to hit the brick walls that are inevitable given the limitations of ML outlined above. That could see a sharp drop in funding and enthusiasm that would set the field back significantly – potentially by decades.

Is it Possible to Create AI with ‘Common Sense’?

Most of the weaknesses inherent in ML’s mathematics- and statistics-based approach to AI can be boiled down to the layman’s term ‘common sense’. The AlphaGo AI developed by Alphabet’s DeepMind unit easily beat the number one human player in 2016. But it didn’t know that ‘Go’ was a board game. No AI can reliably answer simple questions such as ‘how can you tell if a milk carton is full?’ or ‘if I put this pen in that bag, will it still be there tomorrow?’ The problem with human common sense is that it is easy to recognise but very difficult to define. Its nebulous nature has meant that many computer scientists have tried and failed to model it. But is there hope that we are now making progress on AI common sense and will ultimately succeed in imbuing machines with it?

While most AI funding and research is going into ML technology, there are also a number of significant projects seeking to tackle the ‘common sense’ and ‘intuition’ problem. One is Microsoft co-founder Paul Allen’s ‘Project Alexandria’. DARPA, the US defence research agency that put significant early funding into the internet and autonomous vehicles, is also throwing resources behind modelling machine common sense.

Project Alexandria is part of the ‘Allen Institute for AI’ funded by Allen. He is doubling the budget allocated to the non-profit research lab for the next 3 years with the injection of an additional $125 million. Project Alexandria will be a major recipient and has been tasked with going after the ‘Holy Grail’ of AI – common sense.
The project will seek to combine ‘crowd sourced human knowledge’ with Machine Learning and Machine Vision. However, it is not yet clear what approaches will be taken in the pursuit of software-based common sense and if any of them will ultimately prove fruitful.

Joshua Tenenbaum of the Massachusetts Institute of Technology (MIT) believes that machine common sense will involve giving algorithms “a basic grasp of physics, as well as an intuitive sense of psychology that enables very young infants to understand that other agents in the world have their own goals”.

However, as a first step this will involve inventing new programming techniques beyond those currently used in Machine Learning. Computers need to be given the ‘building blocks’ for learning that one-year-olds have. As countless scientists have discovered, that is easier said than done. If it is to be achieved, ‘common sense’ AI will need to generate the same buzz, and subsequent investment, that Machine Learning has.

If the ‘Holy Grail’ of AI common sense can be achieved, things will really start getting interesting.
