It's the hype of the moment: artificial intelligence (AI), an apparent solution to every problem and, at the same time, a threat to our future, depending on how you look at it. One thing is certain: from cell phones to homes, from the way we move around the city to police operations and even war, artificial intelligence is everywhere.
If computers become able to think independently, the future will look quite different from our life today.
According to some scientists, after 2075 machines will be able to reach a superhuman level: this means they will be able to make infinitely more complex and effective decisions than ours, and even to govern themselves (and us?). However things go, are we ready for this revolution? And is it wise to give such power to something smarter than us? Answering these questions is not easy. But to try, we first need to understand how machine intelligence is measured, and how it works.
Future machines will learn like a child does
Feng Liu, Yong Shi and Ying Liu, three researchers from the Chinese Academy of Sciences in Beijing, subjected AI programs to IQ tests.
The best result was obtained by Google, with a score of 47.28. For comparison, a 6-year-old child averages an IQ of 55.5 and an 18-year-old about 97. Too little? Three years earlier, Google had scored 26.5, so it is improving very quickly. These measurements are, of course, somewhat forced, because even defining AI is not simple. "We talk about artificial intelligence as the discipline that studies machines capable of doing things that, if they were done by us, we would call intelligent. It is a limited definition, which takes humans as its model, but as a first approximation it works."
Certainly, intelligence has to do with the ability to solve complex problems for which there is no algorithm that leads to the solution in a set number of steps: recognizing objects or speech, thinking by analogy, learning from examples, or expressing creativity (writing a poem, painting a picture, and so on).
All these typically human activities go beyond the classical concept of artificial intelligence that has been followed since 1950, when the English mathematician Alan Turing wondered whether machines could "think".
Turing approached the question in a practical way by proposing a very simple game: a human judge is put in a position to ask questions of both a man and a machine. If the judge is unable to distinguish between the two, then the machine can be considered intelligent.
For the record, no software has managed it so far. But the point is that today this definition appears limited. In the meantime, in fact, artificial intelligence has come to surpass humans in various activities. On 11 May 1997, Deep Blue, an AI developed by IBM, caused a sensation by defeating the world chess champion Garry Kasparov. In February 2011, Watson, another IBM AI, beat human opponents at "Jeopardy!", a general-knowledge television quiz.
Today the most astonishing AI is considered to be Google's AlphaGo, capable of beating the best players in the world at Go, an ancient Chinese strategy game. The most surprising thing is that, in these matches, the computer developed new strategies that had never occurred to humans.
AI at the University
For these reasons, scientists are proposing variations on the Turing Test. A group from Japan's National Institute of Informatics has suggested using the entrance exams for admission to the University of Tokyo as a benchmark, and expects an AI to pass them by 2021.
As things stand today, however, it is more appropriate to speak of individual specialized "intelligences" than of one artificial intelligence. "An AI system that makes an accurate medical diagnosis cannot drive a car," explains Cesta. "Human intelligence, by contrast, is multitasking and extremely fluid. Compared to this, we are still far away."
The progress is nonetheless quite impressive, and it is tied to the way AIs learn, which happens more and more independently.
Learning by itself (or himself?)
Yes, but how do machines "learn"? We usually distinguish between "symbolic" methods, such as logical reasoning, and "subsymbolic" methods, such as artificial neural networks, which try to reproduce the behavior of the brain by imitating its basic elements, i.e. neurons. In practice, to get the best results, it is often necessary to combine the two.
Among the symbolic methods, the most important is inductive learning, in which examples are supplied to the machine, which generalizes from them and then uses the result to solve cases not yet observed. This is what medical diagnosis systems do, as well as those that intercept spam in email or perform facial recognition. The same applies to speech: software that "understands" what we say does so by comparing the acoustic waves that correspond to the different ways of pronouncing the same word. Precisely in image and speech recognition, the best software today has a margin of error lower than a human's. The advantage of self-learning is that we no longer have to explain to the software what it is looking for: it works it out for itself. In this way, for example, in making a medical diagnosis an AI can even find links that, at the current state of our knowledge, we would not be able to see.
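To make the idea of inductive learning concrete, here is a minimal sketch of a toy spam filter: from a handful of labelled examples, the program generalizes a rule (which words are more typical of spam) and then applies it to messages it has never seen. The training messages and word counts are invented for illustration; real systems are far more sophisticated.

```python
from collections import Counter

def learn(examples):
    """Count how often each word appears in spam vs. legitimate mail."""
    spam_words, ham_words = Counter(), Counter()
    for text, is_spam in examples:
        (spam_words if is_spam else ham_words).update(text.lower().split())
    return spam_words, ham_words

def classify(text, spam_words, ham_words):
    """Label a new, unseen message by whichever class its words fit better."""
    words = text.lower().split()
    spam_score = sum(spam_words[w] for w in words)
    ham_score = sum(ham_words[w] for w in words)
    return spam_score > ham_score

# Four labelled examples: (message, is_spam)
examples = [
    ("win a free prize now", True),
    ("free money click now", True),
    ("meeting notes for tomorrow", False),
    ("lunch tomorrow with the team", False),
]
spam_words, ham_words = learn(examples)
print(classify("claim your free prize", spam_words, ham_words))       # True
print(classify("notes from the team meeting", spam_words, ham_words)) # False
```

Note that nobody told the program which words signal spam: it inferred that from the examples, exactly as the paragraph above describes.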
How neural networks work
This self-learning pattern applies in particular to neural networks, which, thanks to their structure, are able to perform more complex tasks. A neural network is composed of individual units that simulate the behavior of a human neuron: each receives a series of incoming signals and processes them into a single output response. Each of these responses can in turn become the input of another neuron, and all the incoming and outgoing connections of each artificial neuron form a huge cascading network. The most important characteristic of every artificial neuron is its ability to weigh each incoming signal when computing its answer, and therefore, just like a natural cell, to pass on only the information needed to respond to that specific problem and close the door to the rest.
The question we might raise is: how do they learn which data to accept and which to reject? The answer: by building on past experience. The AI assesses how reliable its responses are, closing the door to information that did not help the previous times. Thanks to this approach, known as "deep learning", various applications have been developed, for example to make financial forecasts and to detect fraud. The disadvantage for humans is that, even when the solution is reliable, it is not always possible to understand how the AI arrived at it, because the process is somehow "buried" in the different layers of the software's structure. An answer to this problem could be the "Explainable AI" project, launched by the American military agency Darpa. The idea is that all new AIs will have to produce explainable decision-making models, allowing human beings to understand and effectively manage the next generations of AIs.
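"Building on past experience" has a classic minimal form: the perceptron rule, a sketch of which follows (deep learning uses many layers and a more elaborate version of this idea). After each wrong answer, the neuron nudges its weights, strengthening the inputs that would have helped and weakening those that misled it. The task (learning logical AND) and the learning rate are chosen just for illustration.

```python
def train(samples, labels, epochs=20, lr=0.1):
    """Adjust weights after every mistake (the perceptron learning rule)."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            total = sum(xi * wi for xi, wi in zip(x, weights)) + bias
            prediction = 1 if total > 0 else 0
            error = target - prediction  # zero when the answer was right
            weights = [wi + lr * error * xi for wi, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn the logical AND function from four labelled examples:
samples = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels = [0, 0, 0, 1]
weights, bias = train(samples, labels)
for x, target in zip(samples, labels):
    out = 1 if sum(xi * wi for xi, wi in zip(x, weights)) + bias > 0 else 0
    assert out == target  # the learned weights reproduce every example
```

The final weights were never written by a programmer; they emerged from the corrections. That is also why, in a deep network with millions of such weights, the reasoning ends up "buried" in the layers, as the paragraph above explains.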
Because trusting is good, but not trusting is better.