From Perceptron to Google: Every time we thought an artificial intelligence was sentient

“This machine will be the first device to think like the human brain”: the “embryo of an electronic computer” that “will be able to walk, talk, see, write, reproduce itself and be conscious of its existence”. Moreover, the most advanced models will be able to “recognize people, call them by name and instantly translate from one language to another”.

These lines are taken from a New York Times article titled The electronic brain that learns by itself, which reports many of the expectations that still surround artificial intelligence today. The article in question, however, is over 60 years old: it was published on July 13, 1958 and describes the Mark I Perceptron, a pioneering artificial intelligence system unveiled to the public that very day. Despite the extraordinary expectations, at its first appearance the Perceptron was only able to distinguish right from left: “But it is believed that it will be finished within a year, at a cost of $100,000,” the American newspaper nevertheless reported with confidence.

The Perceptron was funded by the U.S. Navy and created by Frank Rosenblatt, a psychologist at Cornell University. It was a giant machine, thick with cables and made up of motors and knobs connected to 400 light detectors: an apparatus that made it possible to simulate the behavior of 8 neurons, capable of learning through a method called supervised learning.

Although it was a rudimentary system, with only two layers of neurons, one for input (the data) and the other for output (the results), the basic operation of the Perceptron was the same as in today’s deep learning (whose neural networks have billions of nodes): a training process based on trial and error, in which the connections that produced the correct result are strengthened and those that produced an incorrect one are weakened.
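To make the idea concrete, here is a minimal sketch of the classic perceptron learning rule in Python; the function name, the toy data and the parameters are invented for illustration, and Rosenblatt’s hardware obviously worked very differently:

```python
# Minimal sketch of the perceptron learning rule (illustrative only).
# A single "neuron" has one weight per input plus a bias: after every error,
# the connections are strengthened or weakened in proportion to that error.

def train_perceptron(samples, labels, epochs=10, lr=0.1):
    n_inputs = len(samples[0])
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            # Weighted sum of the inputs, thresholded to produce 0 or 1
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            output = 1 if activation > 0 else 0
            error = target - output  # 0 if correct, +1 or -1 if wrong
            # Trial and error: adjust each connection toward the correct answer
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Toy example: learn the logical AND of two inputs
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
print(w, b)
```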

The path followed at Cornell (inspired by the theories of McCulloch and Pitts, the first to create a computational model of the neuron) was correct, but what we have today in large quantities was missing: computing power and huge amounts of data. It was above all this shortage that caused the failure of the very first neural network, which, instead of reaching self-awareness, was quickly shelved.

In 1966, it was Marvin Minsky, a pioneer of so-called symbolic artificial intelligence (which, instead of learning on its own from data, follows the instructions and rules of its programmers), who brought to life another experiment surrounded by high expectations. In his lab at MIT in Boston, he tried to connect a video camera to a computer to let it see. Reading a foundational essay by Alan Turing (Computing Machinery and Intelligence), Minsky had become convinced that in order to equip a machine with a real brain, it was first necessary to give it sight, hearing, smell and the other senses needed to experience the world around it.

The experiment failed, and Minsky soon had to give up: it was not possible to copy the human eye by connecting a camera to a computer. According to the scientist, however, it would still be possible to bring a truly intelligent AI to life in a short time. In an interview with Life Magazine in 1970, he stated that “in three to eight years we will have a machine with the general intelligence of an average human being. A machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself at incredible speed. In a few months it will reach genius level, and a few months after that its powers will be incalculable.”

More than fifty years have passed since then, and none of this has happened. Around 2012, however, the increase in computing power and in the amount of available data revived neural networks (already at the heart of the Perceptron) and set in motion the deep learning revolution, thanks to which extraordinary successes have been achieved in the field of artificial intelligence. Deep learning (now used for image recognition, for machine translation, to predict what we would like to watch on Netflix and more) quickly revived the hope that the creation of a conscious AI might be just around the corner.

One of the most striking examples is certainly that of GPT-3, the neural network specializing in natural language processing presented in 2020 by OpenAI, the artificial intelligence research company backed by Microsoft. If the 1950s Perceptron had 400 light detectors and could simulate eight neurons, GPT-3 is instead equipped with 175 billion parameters and was trained on 450 GB of data, which also included the entire English-language Wikipedia.

This amount of data and computing power has enabled GPT-3 to achieve exceptional results, including writing a long op-ed published in The Guardian, in which it flawlessly lays out the reasons why people should not be afraid of the most powerful deep learning algorithm ever seen at work: “Why, you might ask, would humans deliberately put themselves in danger? Are humans not the most advanced beings on the planet? Why would they believe that something inferior, objectively, could destroy them?” GPT-3 then promptly answered these questions, referencing The Matrix, analyzing the consequences of the industrial revolution, discussing the etymology of the word robot (“forced to work”) and much more.

Is it still possible to call a piece of software capable of coming up with such subtle arguments unconscious? Actually, yes. Not only because GPT-3 benefited from the collaboration of a human editor, who carefully cut and stitched together the dozens of versions produced by the machine to create a better one, but above all because (as Gary Marcus, a neuroscientist at New York University, has explained) systems like GPT-3 “do not learn what is happening in the world, but learn the way people use words in relation to other words”. In the end, their work is a kind of colossal statistical copy-and-paste, in which the machine predicts which sentences, among those in its huge database, are most likely to be coherent with the previous ones, without having any understanding of what is actually being said.
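As a crude illustration of what “predicting words in relation to other words” means, here is a toy bigram model in Python; the sample text and names are made up, and GPT-3’s neural network is vastly more sophisticated, but the underlying task is the same: estimate which word is statistically most likely to follow the previous one.

```python
# Toy illustration of purely statistical next-word prediction (a bigram model).
from collections import Counter, defaultdict

corpus = "the machine reads the text and the machine predicts the next word".split()

# Count how often each word follows each other word
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    counts = follows.get(word)
    if not counts:
        return None
    # Pick the statistically most frequent continuation, with no notion of meaning
    return counts.most_common(1)[0][0]

print(predict_next("the"))      # -> "machine" (the most frequent word after "the")
print(predict_next("machine"))  # -> "reads" (first among equally frequent options)
```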

Finally, the same happened with LaMDA, the Google system that convinced its programmer, Blake Lemoine, that it was sentient by often giving him very coherent answers to his questions. On closer inspection, however, LaMDA was merely mimicking human behavior, identifying the answers most likely to sound convincing.

As the scientist Aenn Matya reported on Twitter, to the question “What makes you happy?”, asked by Lemoine, LaMDA answered for example: “Spending time with friends and family”. Since it is a piece of software that obviously has neither friends nor family, this answer shows how it only mimics human behavior, without self-awareness. LaMDA has simply learned to statistically stitch together the billions of pieces of data it has received, thereby reproducing a human voice.

There are many other well-known cases in which we have mistaken a more or less credible imitation for consciousness and intelligence: Eliza is the bot created in 1966 by Joseph Weizenbaum to mimic the behavior of a psychotherapist, which several patients found credible (you can try it here). Eugene Goostman, on the other hand, is the bot that in 2012, pretending to be a Ukrainian boy speaking English, managed to convince 30% of the judges in a competition that it was a real person, thus passing (albeit with a few tricks) the Turing test.

From the 1950s to the present day, cases in which we thought intelligent machines were among us, or at least just around the corner, have therefore recurred cyclically. But in each of these cases it was we ourselves who projected consciousness and intelligence onto machines designed simply to emulate us as best they can. A strange short circuit that perhaps says more about human aspirations than about the intelligence of machines.