Understanding AI: A Simple Explanation of AI, Machine Learning and Deep Learning

Almost every piece of media coverage of the latest technology mentions AI at some point. Is there any piece of tech that includes coding not described as AI, Machine Learning or Deep Learning these days? If there is, it’s an exception to the rule. So when did all software become AI? What are the specific qualities of software that define it as AI as opposed to just plain, old coding? And what is the difference between AI, Machine Learning and Deep Learning, with all three often seemingly used interchangeably?

Here we’ll explain simply what the three distinct terms mean and what differentiates them.

AI, Machine Learning and Deep Learning: Concentric Circles

Firstly, it is important to note that while the three terms have become buzzwords over the past few years, with AI in particular becoming an umbrella term for software, the concepts have been around for much longer.

Artificial Intelligence as a term was first coined as far back as the 1950s. 1956, to be precise, at a summer workshop at Dartmouth College that brought together prominent computer scientists of the day. The event was entitled ‘The Dartmouth Summer Research Project on Artificial Intelligence’, the first time the term was prominently used.

At the time the first commercially available computers were coming to market. The dialogue between these first AI pioneers centred around the potential for those machines to be used to create machines that exhibited characteristics of human intelligence. That is all AI really means: machines created to replicate human intelligence. Initially the concept was very broad, and the experts in attendance at Dartmouth envisioned the kind of AI popularised by Hollywood – robots that had our senses and physical capabilities, or even embellished versions of these senses such as x-ray vision or super-human strength. C-3PO, R2-D2, Johnny 5 and even the less amiable Terminator can all be considered examples of this kind of ‘general’ humanoid concept of AI.

However, this kind of ‘general AI’ has remained within the confines of science fiction, even now, 60+ years later. But narrower applications of AI have developed, and over the last decade the pace of that development has accelerated as more powerful, cheaper processors such as GPU chips have emerged.

‘Narrow AI’ encompasses any kind of software that is able to automate the completion of tasks previously done by humans, to at least the same standard and usually better. That includes what might be considered relatively simple AI, such as calculators or Excel-type programs that automate arithmetic or more advanced mathematical equations, or algorithms used to sort information based on its content, such as the bots behind Google’s search engine rankings, which pick out the presence of particular words or phrases.

More recently, the growing sophistication of narrow AI applications has led to software that can perform more advanced classifications of content. An example of this would be the kind of facial recognition software Facebook uses to suggest that a particular friend is tagged in a photograph.

This deeper level of Artificial Intelligence, the kind of computer vision that can recognise individuals or categories of objects and not just words or numbers, is a move into the deeper AI concentric circle of Machine Learning.

Machine Learning Simply Explained

Artificial Intelligence (AI), in its broadest sense, is computer code that lets a machine automate tasks that would normally require human intelligence, such as accurately calculating 5+7+12-20%. It involves setting a particular set of instructions or rules that the algorithm will unerringly stick to. If A, then B unless C, for example.
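To make that concrete, here is a minimal Python sketch of the ‘if A, then B unless C’ style of rules-based coding. The scenario, names and thresholds are invented purely for illustration:

```python
# A hand-written rule: the program only ever does what the rules say.
# "If A, then B unless C" -- here: if it is below freezing, advise a coat,
# unless the user is staying indoors. (Values are illustrative.)

def coat_advice(temperature_c: float, staying_indoors: bool) -> str:
    if temperature_c < 0 and not staying_indoors:   # If A ... unless C
        return "Wear a coat"                        # ... then B
    return "No coat needed"

print(coat_advice(-3.0, staying_indoors=False))  # Wear a coat
print(coat_advice(-3.0, staying_indoors=True))   # No coat needed
```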

Machine Learning takes things a significant step further. Machine Learning algorithms don’t just follow set rules; they parse data (analyse it according to the rules of a formal grammar or structure), ‘learn’ from that data, and can then make a determination or prediction about something in the world.

A simple example of the distinction would be AI processing temperature data and spitting out a median average for the year. Machine Learning, on the other hand, would be able to analyse temperature and other meteorological data as well as data on when it has snowed. It would ‘learn’ that particular temperature levels, combined with other conditions such as air humidity within a certain range, coincide with snow. The Machine Learning algorithm would then be able to predict snow.

So, if the algorithm was programmed with a rule like ‘temperature within range a to b, plus air humidity within range c to d, plus other factors within set ranges = x% probability of snow’, that would be considered bog-standard ‘narrow AI’. The algorithm itself analysing the data sets, spotting the pattern linking those conditions with the occurrence of snow, and then being able to accurately predict it in future is Machine Learning. The algorithm ‘learned’ from its data parsing and made the final step itself, based on the data available to it, without being given the link.
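As a rough sketch of that distinction, the first function below is the hand-written ‘narrow AI’ rule, while the second version lets a model find the pattern in made-up historical data itself. The toy dataset, the thresholds and the choice of scikit-learn’s LogisticRegression are all assumptions made for illustration, not a real forecasting model:

```python
# Illustrative only: toy data and thresholds, not a real weather model.
from sklearn.linear_model import LogisticRegression

# --- Narrow AI: a human writes the rule ---------------------------------
def snow_rule(temp_c: float, humidity_pct: float) -> float:
    """Hand-coded: 'temperature in range, humidity in range => 80% chance of snow'."""
    return 0.8 if (-5 <= temp_c <= 1 and humidity_pct >= 85) else 0.05

# --- Machine Learning: the algorithm finds the link in the data ---------
# Each row: [temperature in C, relative humidity in %]; label: 1 = it snowed.
X = [[-4, 90], [0, 88], [-2, 92], [5, 60], [10, 40], [2, 70], [-1, 95], [8, 55]]
y = [1, 1, 1, 0, 0, 0, 1, 0]

model = LogisticRegression(max_iter=1000).fit(X, y)   # 'learns' the pattern itself
print(snow_rule(-3, 91))                              # probability a human hard-coded
print(model.predict_proba([[-3, 91]])[0][1])          # probability of snow the model learned
```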

Machine Learning algorithms employ approaches such as decision tree learning, reinforcement learning, clustering, inductive logic and Bayesian networks, to name some of the most popular. Primitive forms of computer vision were one of the outputs of Machine Learning. These approaches were complemented with hand-coded ‘classifiers’, such as edge detection filters, that allowed algorithms to judge where an object started and stopped.
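A minimal sketch of what such a hand-coded ‘classifier’ might look like: a fixed edge-detection filter (a Sobel-style kernel, chosen here as an assumption for illustration) that flags where pixel intensity changes sharply, i.e. where an object is likely to start and stop:

```python
import numpy as np

# A hand-coded 'classifier' in the old computer-vision sense: a fixed edge
# filter, not something learned from data. Sobel-style kernel for illustration.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

def edge_strength(image: np.ndarray) -> np.ndarray:
    """Slide the kernel over the image and record how sharply intensity changes."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = abs(np.sum(image[i:i+3, j:j+3] * sobel_x))
    return out

# Tiny synthetic 'image': dark on the left, bright on the right.
img = np.array([[0, 0, 0, 255, 255, 255]] * 6, dtype=float)
print(edge_strength(img))   # large values mark the boundary between the two regions
```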

But while Machine Learning was a significant advance on the more superficial rules-based AI, it was still prone to error in anything other than perfect conditions. So a STOP road sign might be distinguishable by a machine learning algorithm, but the chances of it being missed in heavy rain, or if partly obstructed by a branch, were high.

Machine Learning can be a powerful tool for the completion of tasks to a higher level than the human brain can achieve in sterile, rules-based environments, such as a board game. The perfect example is how Alphabet’s DeepMind AI unit created the AlphaGo algorithm, which easily defeated the world’s best human ‘Go’ masters.

However, a Machine Learning algorithm cannot deal well with dynamic ‘real life’ situations that involve an evolving web of factors that change the rules at a moment’s notice. An example would be a more complex computer game, such as ‘Call of Duty’, that more closely resembles a reality of competing interests and conditions. Or driving a car. Machine Learning can’t cope with fallen branches, for example, or the unpredictable range of human error.

Deep Learning Simply Explained

That requires the deepest of the concentric circles of AI – Deep Learning. Deep Learning AI is still very much a work in progress, but the pace at which it is developing is picking up. It is Deep Learning algorithms that mean we are now on the cusp of driverless cars and machines that can beat the best human players, and even teams, in complex eSports games such as League of Legends.

Deep Learning algorithms use neural networks, which are based on our growing understanding of how human biological brains work. Our brains’ functions are based on the hugely complex interconnections between neurons. Deep Learning neural networks are not quite as intricate but do have layers and interconnection.

When we see a STOP sign obscured by a branch, our brain is able to decipher what it is looking at despite the image’s non-conformity to the standard. This is achieved by a lightning fast, unconscious process of breaking down the image into elements. Our brain takes the image and breaks it down into shape, colour, size, details such as letters and lack of motion. Though the STOP sign, partly obscured by a branch, doesn’t meet the exact usual standard, our neurons assign a weighting to their analysis of the components. The result of this ‘weighting’ is a ‘probability vector’ or educated guess. 86% of what is being seen fits the standard of a ‘STOP’ sign and 14% resembles a part of a branch – it’s a STOP sign partially obscured by a branch.
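A toy sketch of that final step, using NumPy: combining weighted evidence scores into a ‘probability vector’ via a softmax. The class names and scores below are invented; in a real network they would be produced by many learned layers:

```python
import numpy as np

def softmax(scores: np.ndarray) -> np.ndarray:
    """Turn raw evidence scores into probabilities that sum to 1."""
    e = np.exp(scores - scores.max())   # subtract the max for numerical stability
    return e / e.sum()

# Made-up evidence scores for what the partly obscured image might be.
classes = ["STOP sign", "tree branch", "speed-limit sign"]
scores = np.array([4.0, 2.2, 0.5])      # higher = more of the weighted evidence fits

probs = softmax(scores)
for label, p in zip(classes, probs):
    print(f"{label}: {p:.0%}")
# The 'educated guess' is the class with the highest probability.
```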

While the ‘neural network’ approach to Deep Learning AI also isn’t particularly new in theory, it wasn’t until the advent of the latest technology in the world of processors – GPUs – that Deep Learning became practically possible to realise. GPU processors meant the information, or data, fed into an algorithm could be processed quickly enough to make conclusions, and actions based on those conclusions, practical in a dynamic real-world environment.

The parallel development of cloud computing and big data also meant Deep Learning algorithms could be fed with the huge volumes of data needed to be ‘trained’ effectively. Only by seeing millions of images have Deep Learning neural networks been able to ‘train’, or ‘fine-tune’, out their mistakes. The ‘deep’ in Deep Learning refers to the many layers of neurons these networks stack, each refining the analysis of the one before it.
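In miniature, ‘training out mistakes’ looks something like the sketch below: repeatedly nudging a model’s weights in whichever direction shrinks its error on the examples it has seen. A real deep network does this across millions of images, many layers and GPU hardware; everything here, down to the single weight and the invented data points, is a toy illustration:

```python
# Toy illustration of 'training': adjust a weight to shrink the prediction error.
# A real deep network repeats this over millions of examples and many layers.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # (input, target) pairs, invented

weight = 0.0
learning_rate = 0.05
for epoch in range(200):
    for x, target in data:
        prediction = weight * x
        error = prediction - target
        weight -= learning_rate * error * x    # nudge the weight to reduce the error

print(f"learned weight: {weight:.2f}")          # ends up close to 2, the pattern in the data
```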

There is still a long way to go, but now Deep Learning algorithms using GPU processors and fed with big data can, in some situations, achieve more accurate image recognition than humans. And that doesn’t only work when it comes to Facebook spotting your friend Dave in your photographs. Deep Learning is also being used to identify complex matrices of cancer indicators. It also means driverless cars that are far safer than human-driven equivalents will be on our roads within a few short years.

Deep Learning is still ‘narrow AI’, highly effective when trained for particular tasks, rather than the ‘general AI’ of a humanoid robot, which remains science fiction. But from better recommendations of films, books, clothes or even hair styles based on data around our tastes and preferences, to healthcare and the more efficient use of power, the age of Deep Learning AI is, to an extent, already here. And at the pace developments have moved since the advent of cloud big data storage and GPU processors, the range of possibilities as to what it might achieve is, while not yet unlimited, very, very exciting.
