On Isaac Asimov and the current state of Artificial Intelligence

Science fiction stories have a long history of being the progenitors of ideas that eventually become science fact. Jules Verne’s 1865 novel “From the Earth to the Moon” describes an attempt to put a man on the moon by firing a projectile from a giant cannon. “The Machine Stops” by Edward Morgan Forster (1909) presents artificial intelligence in the form of a machine on which a post-apocalyptic human society depends, and it also offers early concepts of instant messaging and the internet. In 1911, author Hugo Gernsback described the Telephot, an early video-phone device, and the Telautograph, an early fictional reference to a fax-like device used to send a signature. Ray Bradbury’s interactive video walls in “Fahrenheit 451” (1953) showed us the future of home entertainment, and there is little doubt that modern cell phone designs were inspired by that classic tool of Gene Roddenberry’s 1966 Star Trek series, the Sub-Space Communicator.

The flow of ideas from fiction to practical implementation goes on and on. But one author, Isaac Asimov, notably explored the ideas of Artificial Intelligence and Machine Learning as early as the 1940s. His ideas on AI are expressed through the robots that appear throughout his works, many of which take on human or humanoid form (androids). Through his androids, Asimov demonstrates the fundamental idea of AI: a device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. His AIs are also often self-aware, and they continue learning and expanding their intelligence and awareness through machine learning. The idea of intelligent humanoid robots was not new at the time; however, Asimov famously devised the Three Laws of Robotics (later joined by a Zeroth Law, making four) to constrain an otherwise uncontrolled AI and prevent it from causing harm to humans and humanity as a whole. He identified the need for some form of rules and ethics to ensure his machines would not revolt and take over the world, and the Laws also provide for some interesting plot twists.
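To make that definition of an AI agent concrete, here is a minimal sketch in Python of an agent loop constrained by Asimov-style rules. Everything in it (the Action fields, the rule list, the scoring) is a hypothetical illustration, not taken from Asimov or from any real library, and it deliberately ignores the Laws’ precedence structure: the point is only that the agent chooses, among the actions its rules permit, the one with the best estimated chance of achieving its goal.

```python
# Toy sketch: an "agent" in the classic sense described above. It looks at the
# actions it perceives as available, discards any that violate its rules, and
# picks the one most likely to achieve its current goal. All names and values
# here are hypothetical and purely illustrative.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Action:
    name: str
    harms_human: bool      # would this action injure a human being?
    disobeys_order: bool   # would it ignore an order given by a human?
    endangers_self: bool   # would it put the robot itself at risk?
    goal_value: float      # estimated chance of achieving the current goal

# Simplified stand-ins for the Three Laws, checked as plain constraints
# (the real Laws are ordered by precedence; that nuance is omitted here).
RULES: List[Callable[[Action], bool]] = [
    lambda a: not a.harms_human,     # First Law
    lambda a: not a.disobeys_order,  # Second Law
    lambda a: not a.endangers_self,  # Third Law
]

def choose_action(candidates: List[Action]) -> Optional[Action]:
    """Return the rule-abiding action with the highest estimated goal value."""
    permitted = [a for a in candidates if all(rule(a) for rule in RULES)]
    if not permitted:
        return None  # no lawful action exists, so the agent does nothing
    return max(permitted, key=lambda a: a.goal_value)

# Example: the highest-scoring option violates the First Law and is discarded.
options = [
    Action("fetch coffee", False, False, False, goal_value=0.90),
    Action("wait",         False, False, False, goal_value=0.10),
    Action("shove human",  True,  False, False, goal_value=0.95),
]
print(choose_action(options).name)  # -> fetch coffee
```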

The concept of androids paints the picture of what we expect to see in an artificial intelligence: machines that look and behave in a human manner. It is novel and intriguing, and it is great for plot development, but it is not necessarily very practical. Creating and managing a vast number of extremely strong androids endowed with super-human intelligence poses a great number of issues, from design and development to ethical concerns about machine behavior, something not everyone can agree upon. Aside from its novelty, and from the curiosity of those who would build one for its own sake, is there any benefit to creating a humanoid machine that is self-aware? If such an intelligence is ever created, how will it be treated? Will it be forced to be subservient to us? Will it have moral significance? There are many ethical and moral concerns that AI scientists will need to address, and even our own ethics are not fully understood and are frequently changing. There may always be that one scientist willing to disregard ethics and morality to build a powerful super-AI and unleash it on the world, but for now I think it is safe to leave that to fiction. More likely, we will see a future with many more specialized AI machines optimized for specific needs, most of them hidden in the details around us: sensing, processing, predicting, and assisting us in most everything we do.

We are still at an early point in the development of AI, and its definition keeps changing as new software and technologies are introduced while others fall away or simply become routine, general-purpose functionality. Current technologies cover a broad range of more or less isolated concepts that, when developed together, begin to form the basic sub-systems needed for AI to work. These include reasoning, knowledge representation, planning, machine learning, natural language processing (speech recognition and speech synthesis), perception (sensors), and the ability to move and manipulate objects (robotics). The methods drawn upon include statistics, computational intelligence, symbolic AI, and even economics. More specific tools include big data, search algorithms, mathematical optimization, and neural networks. Each of these tools and methods exposes very deep and complex operations and algorithms.
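As a concrete taste of one of those tools, the sketch below trains a tiny neural network, using nothing but NumPy, to learn the XOR function. The network shape, learning rate, and step count are arbitrary choices for illustration, not drawn from any particular system; the mechanism, adjusting weights by gradient descent so the outputs move toward the training examples, is the same basic one that powers machine learning at vastly larger scale.

```python
# A minimal neural network learning XOR with one hidden layer and plain
# gradient descent. Sizes, learning rate, and step count are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# XOR training data: four input pairs and their targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights for a 2-8-1 network.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10_000):
    # Forward pass: compute hidden activations and the network's output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error for each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent: nudge every weight against its gradient.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```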

Across all of these technologies there exist a number of significant problems. One is multitasking across a broad range of isolated implementations: current AI applications are generally specialized one-trick ponies. They excel at a specific task, such as playing chess, recognizing images, or even composing music, but no neural network today can perform all three in an intelligent and coordinated way. Another major issue is providing enough usable data for deep machine learning. AI needs data to learn about the world, and it requires massive amounts of it, hundreds of thousands of times more examples than a human needs, before it can begin to interpret and process the world in a useful way. Data at this scale is generally inaccessible to anyone but the biggest tech firms, such as Microsoft, Facebook, and Google, which have only recently built the applications and systems needed to acquire, store, and access it, and even their stores fall far short of what a fully thinking and reasoning AI would need.

Which brings us back to Isaac Asimov, who in 1956, the same year the field of AI research was formally born, published the short story “The Last Question”. The story, although brief, ponders this same issue of the limits of data and knowledge for AI. Its central “character” is the Multivac, a learning and processing machine conceptually based on the ENIAC and UNIVAC computers of the late 1940s. It is not a complex, human-like computer but a massive “cold, clicking, flashing face — miles and miles of face”, a giant machine that is fed data and questions through a simple interface, then calculates and prints its output to a teletype. Over the course of the story, which spans ten trillion years, the Multivac evolves into smaller, more powerful versions of itself while collecting ever greater amounts of data and gaining ever greater reach across the universe. Throughout this time it is repeatedly asked the same question: “How can the net amount of entropy of the universe be massively decreased?”, and its response is always the same: “INSUFFICIENT DATA FOR MEANINGFUL ANSWER.”

Which happens to be one of the biggest challenges we face in producing a significant AI, today and in the future. How much data is needed to support a complete AI capable of answering a broad range of highly complex questions? How do we store vast amounts of data and make it easily and quickly accessible? And how do we provide the energy required to run such a massive system? For now, we are just getting started, mastering and building the basic processes and components that will, over time, be assembled into an ever more complex and functional artificial intelligence that, we hope, will be of benefit to us all.