Share
AI Learns To Hide Information From Its Creators

AI Learns To Hide Information From Its Creators

CycleGAN, a kind of advanced AI known as a ‘neural network’ developed by leading experts from Google and Stanford University has been reported as making the intriguing, if somewhat disturbing, development of hiding data from its creators to ‘cheat’ at a task assigned to it. The sneaky AI is a reminder just how clever neural networks are becoming and the need for careful checks and balances to be put in place now before it becomes ever more sophisticated.

Tasked with teaching itself how to convert aerial satellite images into street maps such as those used by Google Maps, and then back into aerial images again, found a short cut to make its job easier. Details CycleGAN chose to omit when making the initial conversion suddenly reappeared when the street map images were converted back. Theoretically there should have been no connection between the original aerial images and those reverse engineered from the street maps. The two tasks should have been separate.

In its coverage of the spooky development, online technology media TechCrunch reported that details such as skylights on buildings not included in the street map image reappeared in the images converted back. It ‘hid’ the extra data inside the street map files in a ‘nearly imperceptible, high-frequency signal’. This helped it fulfil a command given to it for cyclical consistency but represents a slight of hand the AI came up with all on its own.

The AI taught itself to become a master of stenography – the skill of encoding data in images in a way imperceptible to the human eye. It was a short cut which allowed the neural network to achieve the results it was told to while avoiding actually learning how to perform the task in the way it was meant to, speeding up the process.

Artificial neural networks (ANN) try to simulate the way our own brains assimilate information and learn from it. ANNs pick out patterns in data and come to conclusions based on them. The data is processed through different levels, like contexts in real life, with the ‘learning’ a result of how the data fits into and reacts to these different contexts. A further more recent development has been Adversarial Neural Networks which consist of two competing AIs which learn from each other, further refining their final output.

Leave a Comment