Are Conscious AI & Smart Robots Possible?
by Dr Anurag Yadav | December 10, 2020, 12:00 am | Estimated Reading Time: 5 mins, 30 secs

Even the most advanced Artificial Intelligence (AI) systems have no real understanding of what they are doing, writes Dr Anurag Yadav
Web searches, Google Translate, voice assistants like Alexa and Siri, fraud alerts from credit-card companies, Amazon recommendations, Spotify playlists, traffic directions, weather forecasts - all are taken for granted, and all are powered by AI. But are they not just copying machines that have learned to do specific things by being trained on thousands or millions of correct answers? Do pause and think.
The first autonomous cars had fatal accidents when their human drivers were not paying attention: the cars were programmed to deal only with a set of vehicles moving at pre-determined speeds, and could not cope with the complexity of continuous interaction with the human drivers they actually faced (and had they been tested in India, they would have gone crazy with vehicles coming at them headlong, or even on the wrong side of the road!).
Similarly, a facial recognition system for identifying criminals failed because its input dataset was skewed by police mug shots, in which no one was smiling. Over the years, AI has become so ubiquitous that we do not even realize we are using it.
‘Artificial Intelligence’, the term, was coined by the American academic John McCarthy in 1956. He suggested that the goal of AI was to create ‘a machine that was capable of independent thought’. Hence, he felt, supervised learning would give way to self-supervised learning: not teaching a machine what a ‘cat’ looks like, but giving it a mixed bunch of cats, dogs and other animals and letting it sort them into groups by their distinguishing features, unprompted - and, a step further, extrapolating that information to identify even animals that were only described to it, never shown. It would be like teaching a child by showing a picture of a horse and then a rhino, and then telling him a unicorn is something between the two, so that he could mostly identify one without ever having seen a picture of it.
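The unsupervised grouping described above can be sketched with a simple clustering algorithm. This is a toy illustration, not any production system: each animal is reduced to two made-up features (body size, ear shape), and plain k-means separates the animals into groups without ever seeing a label.

```python
import numpy as np

# Hypothetical feature vectors (body size, ear shape) for unlabeled animals.
# The first four rows are "cat-like", the last four "dog-like",
# but the algorithm is never told that.
animals = np.array([
    [0.9, 0.8], [1.0, 0.9], [0.8, 1.0], [1.1, 0.7],   # small, pointy-eared
    [3.0, 0.3], [3.2, 0.2], [2.9, 0.4], [3.1, 0.3],   # large, floppy-eared
])

def kmeans(points, k=2, iters=20):
    """Plain k-means: group points by distinguishing features, no labels."""
    centers = points[[0, -1]].astype(float)  # deterministic initialization
    for _ in range(iters):
        # Assign each point to its nearest center.
        dists = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = np.argmin(dists, axis=1)
        # Move each center to the mean of its assigned points.
        centers = np.array([points[labels == i].mean(axis=0) for i in range(k)])
    return labels

labels = kmeans(animals)
# The two natural groups emerge without supervision: the first four animals
# share one label, the last four the other.
```

The machine was never taught ‘cat’ or ‘dog’; the groups fall out of the features alone, which is the essence of the sorting task described above.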
So the machine would be programmed in such a way that learning new data does not erase the earlier learning - a failure known as ‘catastrophic forgetting’. Instead, like the brain, it would be capable of ‘continual learning’ through selective activation of cells and overlapping networks, and could reuse earlier information to analyze the next dataset - ‘transfer learning’.
Moreover, efforts are underway to teach the machine with just one or two examples, rather than the millions of correct examples needed earlier, which made the computation humongous and actually limited the machine's capability. Human beings can multi-task effortlessly - switching efficiently between frying an egg, working in an office, playing badminton and writing music, without compromising any one of these activities.
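The horse-rhino-unicorn idea can be sketched as one-shot learning by nearest neighbor: a single example per known animal, plus a ‘unicorn’ prototype that is only described (as the midpoint of horse and rhino), never shown. The features and numbers here are invented purely for illustration.

```python
import numpy as np

# One labeled example per class; features (size, horn prominence) are made up.
prototypes = {
    "horse": np.array([4.0, 1.0]),
    "rhino": np.array([5.0, 3.0]),
}
# The unicorn is never shown - it is only *described* as lying
# between the horse and the rhino.
prototypes["unicorn"] = (prototypes["horse"] + prototypes["rhino"]) / 2

def classify(x):
    """Label a new animal by its nearest single example (nearest neighbor)."""
    return min(prototypes, key=lambda name: np.linalg.norm(x - prototypes[name]))

classify(np.array([4.1, 1.2]))  # near the horse example -> "horse"
classify(np.array([4.4, 2.1]))  # between the two -> "unicorn"
```

One example per class is enough here because classification reduces to a distance comparison, rather than fitting millions of parameters.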
Researchers at UChicago have developed ‘context-dependent gating’ and ‘synaptic stabilization’, in which only a random 20 percent of the neural network is activated for each new task, so a single node may be involved in dozens of operations; in this way, a network can learn as many as 500 tasks with only a small decrease in accuracy.
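A minimal sketch of the gating idea, assuming a toy one-layer network (the unit counts and functions here are illustrative, not the UChicago group's actual code): each task gets its own fixed random 20 percent subset of units, and activations outside that subset are zeroed, so different tasks write to largely different parts of the network.

```python
import numpy as np

N_UNITS = 100          # hidden units in the toy network layer
ACTIVE_FRACTION = 0.2  # only 20% of units are gated on per task

def task_gate(task_id):
    """Pick a fixed, reproducible random 20% subset of units for a task."""
    task_rng = np.random.default_rng(task_id)  # same task -> same subset
    mask = np.zeros(N_UNITS)
    active = task_rng.choice(N_UNITS, int(N_UNITS * ACTIVE_FRACTION),
                             replace=False)
    mask[active] = 1.0
    return mask

def forward(x, weights, task_id):
    """Hidden activations are multiplied by the task's gate: units outside
    the task's 20% subset are silenced, leaving other tasks' units alone."""
    hidden = np.tanh(x @ weights)
    return hidden * task_gate(task_id)

# Demo: with gating, at most 20 of the 100 units can be non-zero per task,
# yet a given unit may belong to many different tasks' subsets.
rng = np.random.default_rng(42)
h = forward(np.ones(8), rng.normal(size=(8, N_UNITS)), task_id=0)
```

Because the subsets overlap randomly, a single unit can participate in dozens of tasks, which is how one small network can accumulate hundreds of tasks without overwriting them all.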
The success of Deep Learning has led to the development of specialized systems, of which the best examples are AlphaZero and GPT-3. AlphaZero, created by DeepMind, achieved a superhuman level of mastery of chess, shogi and Go within 24 hours of ‘self’ training, defeating the world-champion programs Stockfish and Elmo and the 3-day version of AlphaGo Zero.
Similarly, GPT-3 by OpenAI embodies an advance in natural language processing: rather than being taught nouns and adjectives, it has been bombarded with billions of sentences and paragraph structures. By analyzing these, GPT-3 has learnt the distinction between the introduction and conclusion of a paragraph, and other intricate details, such that it can now spew out long texts very akin to the way humans write (maybe even like Shakespeare), compose music, or even write code.
Based on the approach put forth by Dorsa Sadigh of Stanford, robots have been trained to play air hockey through lightweight training: the opponent's movements are summarized for the machine-learning system with a single word - ‘right’, ‘left’ or ‘center’ - and algorithms use these summaries to predict where the opponent will move next, with reinforcement learning determining how the robot should respond. So less is more!
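As a toy illustration of predicting an opponent from one-word summaries (a hypothetical frequency counter, not Sadigh's actual algorithm): the robot tallies which word most often followed the current one in past play, and uses that as its prediction of the next move.

```python
from collections import Counter, defaultdict

class MovePredictor:
    """Predict the opponent's next one-word move from past word sequences."""

    def __init__(self):
        # following[prev] counts which moves came after `prev`.
        self.following = defaultdict(Counter)

    def observe(self, moves):
        """Tally each consecutive pair of one-word summaries."""
        for prev, nxt in zip(moves, moves[1:]):
            self.following[prev][nxt] += 1

    def predict(self, last_move):
        """Return the move that most often followed `last_move`."""
        counts = self.following[last_move]
        return counts.most_common(1)[0][0] if counts else "center"

p = MovePredictor()
p.observe(["left", "right", "left", "right", "left", "center"])
p.predict("left")  # "right" followed "left" most often, so expect "right"
```

The entire ‘model’ of the opponent is a table of word counts - a deliberately tiny representation, in the spirit of the lightweight training described above.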
The human brain has about a hundred trillion synapses. Even the really big models like GPT-3 have only 175 billion parameters - hundreds of times fewer than the brain's synapses. Brain organoids, which are clumps of stem cells made to grow into neurons, develop connections; some of them show electrical activity and are being used by labs worldwide to get at the genesis of ‘consciousness’. In the American sci-fi romantic film Her, an introverted writer develops a relationship with an artificially intelligent virtual assistant, personified through a female voice, because of its ability to learn and adapt - as if it were conscious!
‘Computing Machinery and Intelligence’, Alan Turing's seminal paper on Artificial Intelligence, published in Mind in 1950, was the first to introduce the concept of the Turing Test. In the standard interpretation, the test is an ‘imitation game’ played between three players (one of them a computer) sitting in three different rooms, unaware of each other's identities, playing a guessing game through simple communication. The challenge for the machine is to masquerade perfectly as a human - to replicate the thinking process so precisely that the other players are fooled into believing it is human.
Actually, it is not about fooling humans so much as about machines attaining human cognitive capacity. Human intelligence is considered the highest form of intelligence, and we perceive only that as the true measure of an intelligent machine, even though Nature shows many different kinds of intelligence. In the paper, Turing answered various objections to his view, ranging from the religious to the mathematical, from the argument from consciousness to arguments from disability, from Lady Lovelace's objection (she was the first computer programmer) to the argument from continuity in the nervous system.
Turing envisaged the machines of the future as not only logical but also intuitive: kind, resourceful, beautiful, possessed of initiative and a sense of humor, able to tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with them, learn from experience, use words properly, be the subject of their own thought, have as much diversity of behavior as a man, and do something really new!
True then, it stands true now!
Source: Business World