
Who is afraid of deep-learning machines?

Thomas Bosshard


The seemingly endless opportunities artificial intelligence has created in areas such as health care and finance are awe-inspiring. Hardly a news week goes by without a success story about how the latest advances in technology are tackling problems that even the best human brains would fail at. AI-powered technology, for example, is predicting the stock market and assessing patients’ risk of heart attacks more accurately than ever before. Powered by superhuman computing capacity that can process unprecedented amounts of data, these advances are taking the world by storm one innovative step after another, pointing to a bright future in which there is no problem humanity cannot solve.

However, the awe is slowly turning into unease. There is an all-consuming fear that AI will eliminate more jobs than it creates. This looming worst-case scenario has already found its way onto many government agendas. Politicians and business leaders alike are publicly discussing the introduction of a universal basic income (UBI) to compensate for the loss of income caused by “mass unemployment”, along with radical changes to the education system. At the 2017 World Government Summit in Dubai, Elon Musk, an advocate of UBI, drove the point home by stating, “there will be fewer and fewer jobs that a robot cannot do.”

However, not all experts share this “automation anxiety”. Economists, for example, are not particularly fazed, arguing that none of this is new. In 1962, US President Kennedy described “worker displacement” as “the major challenge of the sixties when automation is replacing man”, which led to the creation of an “Office of Automation and Manpower” in his administration’s Department of Labor. The institution was short-lived, though, as the resilient American economy of the sixties generated masses of new job opportunities.

Of growing concern are so-called deep-learning algorithms. Inspired by the way neural networks operate in the human brain, this newest advance in artificial intelligence is producing machines that can learn on their own. They do so by sifting through large, otherwise unmanageable amounts of data. And the more data they go through, the smarter they become – to the point, many fear, where they will soon outsmart us.

As opposed to “classic” machine-learning algorithms, which are told by their creators what to do (e.g. to look for certain patterns), deep-learning machines work this out on their own. They have their own brains, so to speak. So the “bright future” that the more optimistic experts are conjuring up at the moment is one in which you can throw basically anything at them and they will come up with an accurate, superhuman answer.
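
To make that contrast concrete, here is a minimal sketch in Python with scikit-learn – our own illustration, not anything from the article, and the feature choices and model sizes are arbitrary assumptions. A “classic” model only ever sees the two features a human decided to extract, while a small neural network is handed the raw pixels and left to find the telling patterns itself.

```python
# Illustrative only: human-chosen features vs. features the network learns
# on its own, using scikit-learn's built-in 8x8 digit images.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def hand_crafted(X):
    """The "classic" route: a human decides which patterns matter. Here,
    two crude features per image: total ink and left/right ink balance."""
    imgs = X.reshape(-1, 8, 8)
    ink = imgs.sum(axis=(1, 2))
    balance = imgs[:, :, :4].sum(axis=(1, 2)) - imgs[:, :, 4:].sum(axis=(1, 2))
    return np.column_stack([ink, balance])

classic = LogisticRegression(max_iter=1000).fit(hand_crafted(X_train), y_train)
print("human-chosen features:", classic.score(hand_crafted(X_test), y_test))

# The deep-learning route: hand over the raw pixels and let the network
# work out for itself which patterns distinguish the digits.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("learned features:", net.score(X_test, y_test))
```

The gap between the two scores is the promise – the network finds patterns nobody told it about – and, at the same time, the seed of the problem discussed next: nobody can point to the features it used, because no human ever wrote them down.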

The problem, though, is that at some point no human will be able to know how or, more importantly, why the machine came to that answer. In other words, the machine cannot explain itself. So why should we trust it, e.g. with medical decisions such as proposing invasive surgery, if it is unable to explain why its decision is right? This phenomenon is known as the black-box problem: the knowledge the machine generates is fed back into itself rather than to real people. The fear is that deep-learning machines will quickly surpass us and spiral out of our control. Deep learning has even been hailed as “the last invention human beings will ever create”.

Therefore, some sort of knowledge transfer from machine to human needs to be feasible if we do not want this to happen – we need to be able, for example, to ask the machine why a patient requires invasive surgery. But how are we going to understand the explanations if these artificial brains surpass anything a human brain is able to process? It would be like explaining to an overweight dog why it is only allowed one meal a day instead of three.
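
For a sense of what such a knowledge transfer looks like today, here is a minimal sketch – again our own illustration under assumed choices of dataset, model and parameters, not a method from the article – using permutation importance from scikit-learn: shuffle one input at a time and measure how much the model’s accuracy suffers, which at least reveals what the machine leaned on, if not why.

```python
# Illustrative only: coaxing a rough "explanation" out of an opaque model
# by measuring which inputs it actually relies on (permutation importance).
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()  # a medical-style tabular dataset
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# A small neural network: accurate, but silent about its reasoning.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
)
model.fit(X_train, y_train)

# Shuffle each feature in turn; the bigger the drop in accuracy, the more
# the model depended on that feature when making its predictions.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]:<25} {result.importances_mean[i]:.3f}")
```

Note what this does and does not deliver: a ranking of the inputs the model leaned on, not a reason a patient or doctor could argue with – which is precisely the gap described above.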

Moreover, generating and understanding human language is still a rather difficult feat for a machine – despite recent success stories such as the deep-learning translation tool DeepL. Interestingly, Bob and Alice, two bots developed by Facebook’s Artificial Intelligence Research team, were shut down after they trained themselves to talk to each other in a language of their own that was impossible for outsiders to understand. As one of the researchers told FastCo, “there are no bilingual speakers of AI and human languages.” Thus AI-to-AI conversations would only make matters worse. Had the researchers built in a reward for sticking to the English language, they reasoned, this might not have happened. This shows that we ourselves still need to learn what we should be feeding to deep-learning machines.
