The Philosophy of Artificial Intelligence: Can We Ever Make a Thinking Machine?

22 Nov 2018 · blog

Life 3.0: Being Human in the Age of Artificial Intelligence
by Max Tegmark
Penguin Random House, 335pp., $28.00

Life on Earth is one of nature's grandest experiments. From a primitive single-celled organism to a species that proclaims itself sapient, roughly three billion years of trial and error have produced many astounding biological traits. Among these novelties, the most fascinating is intelligence: the ability to reason, comprehend, and make careful judgments. This special trait, however, is restricted to humans, and it has served the betterment of one species to the detriment and endangerment of many others. Curiously, in recent years, humans have been actively attempting to emulate this unique trait by artificial means. Artificial Intelligence (AI), as an idea, was seeded not too long ago, but it mostly remained fodder for works of science fiction, and at best a pipe dream for mainstream computer science, for most of the twentieth century. In recent years, however, cutting-edge research fueled by computing giants such as IBM, Google, and many others has become a technological tour de force to make AI a reality. Whether we realize it or not, AI is quietly making strides in our everyday lives, from simple consumer-analytics tasks, such as recommending what to watch next on Netflix, to more complex tasks in health care information systems, such as identifying symptoms of heart disease and cancer. As we increasingly surround ourselves with "smart" machines, can AI ever achieve the goal of making an intelligent machine that can think?

This was precisely the topic of discussion at the 2018 Isaac Asimov Memorial Debate at the American Museum of Natural History, hosted by Neil deGrasse Tyson, who currently serves as director of the museum's Hayden Planetarium. The event brought together some of the best AI researchers in the US, representing both academia and industry. Mike Wellman of the University of Michigan and Max Tegmark of MIT are leading academic figures in AI research. Tegmark is also the author of "Our Mathematical Universe", his first outing in popularizing science, which argued that the physical reality of the observable Universe has a strong underlying mathematical structure; his more recent second book, "Life 3.0", which I briefly review here, traverses the emerging trends in robotics and AI. Industry was represented by Ruchir Puri, who helped IBM develop its most famed computing systems, Deep Blue and Watson; John Giannandrea, formerly head of search and AI at Google, who has since moved to lead AI research at Apple; and Helen Greiner of the iRobot Corporation, which develops a wide range of robots for space exploration and military defense. Unconventionally, the event was more of a discussion than a debate, as every panelist is a proponent and stalwart of AI research. The panelists contemplated AI's brief but interesting history, provided numerous examples of the state-of-the-art AI developed in the last decade, and, arguably, prophesied the future of AI research. Much of the panel discussion features in Tegmark's book "Life 3.0", which elegantly conveys many technical aspects of AI research to non-technical readers in a very entertaining narrative.

Before AI became a focus of mainstream scientific research, the idea of a humanoid machine had already caught the imagination of fiction writers. The term "robot" was introduced to the English language by the Czech writer Karel Čapek in his 1920 science fiction play Rossumovi Univerzální Roboti, where, in Czech, the word robota means forced labor. While the idea of an android slave was both thought-provoking and an instant hit, the imagination did not stop there, and it slowly swept into academic circles. In any brief history of how AI became mainstream research, it is hard not to invoke the contributions of the English mathematician Alan Turing. The question "Can machines think?" was posed as a thought experiment by Turing in a 1950 essay. In this seminal paper, Turing rightfully acknowledged the difficulty of defining terms such as "machine" and "thinking", since each has more than one normal use. He therefore chose to recast the problem as a game he called the "imitation game". Known in popular culture as the Turing test, the imitation game is played by three people. Player A is a man, player B is a woman, and player C is an interrogator of either sex. Player C cannot see A or B but, through a series of written questions and answers, must determine which of the two is the man and which is the woman. The role of A in this game is to trick the interrogator into making the wrong decision, while B attempts to assist the interrogator. Turing then asked, "What will happen when a machine takes the part of A in this game? Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?" Or simply: can a machine trick the interrogator as well as a human can?
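
For readers who think in code, the structure of the game is easy to caricature. The following is only a toy sketch in Python, not a real Turing test: the two hidden respondents are stand-in functions with canned replies, the interrogator's heuristic is deliberately naive, and every question and answer is made up for illustration.

```python
import random

# Toy stand-ins for the imitation game: both respondents answer the same
# questions over a text-only channel; the interrogator sees only the text.
def human_reply(question):
    canned = {
        "What is 7 x 8?": "56, I think",
        "Write me a short line about the sea.": "The sea keeps every secret it is told.",
    }
    return canned.get(question, "I'm not sure how to answer that.")

def machine_reply(question):
    canned = {
        "What is 7 x 8?": "56",
        "Write me a short line about the sea.": "The sea is a large body of salt water.",
    }
    return canned.get(question, "Could you rephrase the question?")

def imitation_game(questions):
    # Randomly assign the hidden labels A and B so the interrogator cannot
    # rely on position; the machine "wins" if the guess is wrong.
    players = {"A": machine_reply, "B": human_reply}
    if random.random() < 0.5:
        players = {"A": human_reply, "B": machine_reply}

    transcript = [(q, players["A"](q), players["B"](q)) for q in questions]

    # A trivial interrogator heuristic: guess that the terser respondent is
    # the machine. A real interrogator would probe far more cleverly.
    a_len = sum(len(a) for _, a, _ in transcript)
    b_len = sum(len(b) for _, _, b in transcript)
    guess = "A" if a_len < b_len else "B"
    machine_label = "A" if players["A"] is machine_reply else "B"
    return guess == machine_label  # True: the interrogator caught the machine

questions = ["What is 7 x 8?", "Write me a short line about the sea."]
print("Interrogator identified the machine:", imitation_game(questions))
```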

More than half a century has passed since the imitation game was proposed; have we made a machine that can pass the Turing test? The field of AI witnessed a tremendous leap in the latter half of the twentieth century, with numerous breakthroughs in computer science. A key effort was the attempt to bridge the human-machine language barrier: for a machine to be as good as a human in the imitation game, it must first understand human language. While this may seem trivial amid the recent explosion of smart digital assistants like Siri, Alexa, and many others, it was a daunting task when the foundations were laid in the sixties. After roughly forty years, in 1991, the first practical test of Turing's imitation game was held, albeit without success; the annual Loebner Prize competition has been run ever since to find a machine that can pass the Turing test, and to date no winner has emerged. Nonetheless, over the last fifty years machines have become better at outperforming humans in both labor-intensive and decision-intensive tasks. Tegmark provides ample examples, including IBM's Deep Blue dethroning the then world chess champion Garry Kasparov in 1997, IBM's Watson crushing its human opponents on the television quiz show Jeopardy!, and, more recently, DeepMind's AlphaGo (DeepMind is now part of Google's parent company) outperforming the world champion Lee Sedol in the ancient board game Go, with AlphaGo's creators exulting over one particular move as "a highly creative move, in defiance of millennia of human intuition". The series of examples does not end there, and Tegmark is on a mission to proselytize the coming of age of AI.

A more profound example concerns the work of Google's Brain team. Language processing is one of the most rapidly evolving fields of AI, and, according to Tegmark, Google Translate has improved tremendously in the last few years. Google now employs artificial neural networks (inspired by the biological neural networks that constitute animal brains) to process language, translating whole sentences at once rather than piecing a translation together fragment by fragment as earlier versions did. Google Translate now supports over 100 languages, and developments like these are a key achievement on the journey toward building a machine that can pass the Turing test. In addition to peppering the book with recent achievements of AI, Tegmark also opines on what the future holds for AI research. The most entertaining part of both the Isaac Asimov debate and Tegmark's book is the weighing of the advantages and disadvantages of AI research. There is a general fear, largely derived from works of science fiction, that machines might learn to think and someday overpower the human species. Tegmark never treats this as a disadvantage, since such an outcome is unlikely in the near future; he suggests instead that the goal of AI research should be to "empower" the human race, not to "overpower" us. In recent years, companies like Google, Tesla, and others have made people aware of attempts to use AI in transportation (self-driving cars), but there are several other fields in which AI is quietly making huge progress on the road to empowering humanity. At least two examples discussed in Tegmark's book piqued my interest.
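
Before turning to those two examples, the translation point deserves a small technical aside. The sketch below is a minimal, untrained sequence-to-sequence model written with PyTorch; the toy vocabularies and tensors are invented for illustration and bear no relation to Google's production system. The point it makes is the one Tegmark highlights: the encoder reads the entire source sentence into a context representation before the decoder emits a single target word, instead of translating piece by piece.

```python
import torch
import torch.nn as nn

# Toy vocabularies; a real system learns large subword vocabularies from data.
SRC_VOCAB = {"<pad>": 0, "ich": 1, "liebe": 2, "bücher": 3}
TGT_VOCAB = {"<pad>": 0, "<sos>": 1, "i": 2, "love": 3, "books": 4}

class TinySeq2Seq(nn.Module):
    """Minimal encoder-decoder: the whole source sentence is compressed into
    one hidden state, which conditions every decoding step."""
    def __init__(self, src_size, tgt_size, hidden=32):
        super().__init__()
        self.src_emb = nn.Embedding(src_size, hidden)
        self.tgt_emb = nn.Embedding(tgt_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_size)

    def forward(self, src_ids, tgt_ids):
        # Encode the full sentence at once; h summarizes all source words.
        _, h = self.encoder(self.src_emb(src_ids))
        # Decode conditioned on that sentence-level context.
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), h)
        return self.out(dec_out)  # scores over the target vocabulary

model = TinySeq2Seq(len(SRC_VOCAB), len(TGT_VOCAB))
src = torch.tensor([[1, 2, 3]])     # "ich liebe bücher"
tgt_in = torch.tensor([[1, 2, 3]])  # "<sos> i love" (teacher forcing)
print(model(src, tgt_in).shape)     # torch.Size([1, 3, 5])
```

A production system would use far richer machinery (attention over every source word, enormous vocabularies, billions of training pairs), but the whole-sentence-in, whole-sentence-out shape of the computation is the idea being contrasted with earlier phrase-by-phrase translation.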

The first case where Tegmark suggests AI can make a significant contribution is the maintenance of law and order. Tegmark cites a verdict in which the jury failed to convict the actual culprit because of racially biased opinions. Legal histories and judgments can be influenced or biased by race, sex, or religion, and Tegmark proposes a stronger use of "robojudges" that could be programmed to evaluate cases without any inherent bias and treat everyone fairly, equally, and transparently. One example from his book is worth quoting: "A controversial 2012 study of Israeli judges claimed that they delivered significantly harsher verdicts when they were hungry...they [only] denied about 35% of parole cases right after breakfast, but denied over 85% right before lunch." Tegmark argues fiercely for robojudges because of how inefficiently the judicial system works in many countries. In a populous developing country like India, for example, there is a judicial backlog of over 33 million cases; the number is far smaller in the United States, but there were still an estimated 500,000 pending cases in 2018. One advantage of tasking computers with the preliminary stages of jurisprudence is that many of these tasks can be processed in parallel, which is commonplace in modern computing: in the time a human judge can hear and deliver one verdict, a computer could easily handle five or more cases, depending on the infrastructure. A second advantage is that the judicial system in many countries is corruptible, with judgments open to influence, an outcome that could be prevented if robojudges were tasked with delivering them.
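
The parallelism argument can be made concrete with a toy sketch. The Python snippet below uses a process pool to "screen" several hypothetical case files at once; the case records and the screening rule are entirely invented, and real preliminary review would of course involve documents, statutes, and precedent rather than three fields.

```python
from concurrent.futures import ProcessPoolExecutor
import time

# Hypothetical, drastically simplified case records.
CASES = [
    {"id": 101, "offense": "parking", "evidence_pages": 3},
    {"id": 102, "offense": "contract dispute", "evidence_pages": 120},
    {"id": 103, "offense": "parking", "evidence_pages": 1},
    {"id": 104, "offense": "fraud", "evidence_pages": 450},
]

def screen(case):
    # Pretend each page takes time to review; the rule itself is a placeholder.
    time.sleep(0.01 * case["evidence_pages"])
    needs_hearing = case["offense"] != "parking" or case["evidence_pages"] > 10
    return case["id"], "full hearing" if needs_hearing else "summary disposition"

if __name__ == "__main__":
    start = time.time()
    # Several cases are screened concurrently, one per worker process.
    with ProcessPoolExecutor() as pool:
        for case_id, outcome in pool.map(screen, CASES):
            print(f"case {case_id}: {outcome}")
    print(f"screened {len(CASES)} cases in {time.time() - start:.2f}s")
```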

Another area where AI is making steady progress is health care, where computers now efficiently perform tasks at multiple levels, from prognosis to surgery to preventive care. Tegmark cites two recent studies showing that machine-learning algorithms can diagnose cancers as well as (and sometimes better than) radiologists and pathologists. Again, the artificial neural networks described earlier for Google Translate are employed here, but instead of being trained on sentence structure and grammar, the machines are trained on a library of known cancer cases to learn to classify microscope images or magnetic resonance imaging (MRI) data. Beyond diagnosis, Tegmark proposes that machines can be as good as humans at precision surgery. One key requirement, or rather psychological aspect, of surgical training is suppressing empathy toward patients, since emotions can be a deterrent during a procedure; with adequate human supervision, allowing machines that lack human-like emotions to carry out surgeries can help achieve precision. However, machines cannot replace humans in every aspect of health care, because people with illness often require emotional care in addition to treatment for their malady. Achieving that, for a machine, requires human-like subjective experience, and Tegmark accordingly dwells on the more philosophical aspects of thinking, subjective experience, and consciousness in the last chapter of his book.
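
To give a flavor of how such diagnostic systems are built, here is a minimal sketch of the training pattern: a small convolutional network (again in PyTorch) fit to a library of labeled images. The "scans" below are random tensors and the labels are invented, so this shows only the shape of the approach described in the studies Tegmark cites, not a reproduction of them.

```python
import torch
import torch.nn as nn

# Stand-in data: 64 fake grayscale "scans" of 32x32 pixels with random labels.
# A real study would use thousands of expert-labeled pathology or MRI images.
images = torch.randn(64, 1, 32, 32)
labels = torch.randint(0, 2, (64,))  # 0 = benign, 1 = malignant (toy labels)

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 8 * 8, 2),  # two output scores: benign vs. malignant
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A few passes over the toy data; a real effort also needs a held-out test set
# and comparison against radiologists' readings before claiming parity.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```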

For a machine to have subjective experience, humans should first understand how thinking, or consciousness, works. Consciousness is, of course, a very controversial topic, and Tegmark rightfully calls it the "C-word", one that may provoke anger among biologists, psychologists, and neuroscientists. An earlier edition of the Macmillan Dictionary of Psychology quipped of consciousness that "nothing worth reading has been written on it". While it is easy to point to the brain as the epicenter of thinking and consciousness, biologically it is a very difficult problem to solve. Although computer science borrowed and adapted the term "neural network" for machine learning, biologists who work on the worm C. elegans, which has just 302 neurons, still lack a complete understanding of how even its nervous system works. So, with so-called "smart" machines flooding the consumer market, I am always delighted to see developments in AI that bridge the human-machine barrier. At the same time, I also gauge what these "smart" machines cannot do, so I can feel a deep sense of appreciation for what it means to be human and for our unique ability to think. We may well make a machine in the near future that can pass the Turing test, but emulating intelligence the way humans do requires far more work. So, can we ever make a thinking machine? To answer this question, we first need to understand how human thinking works, and that is a fascinating area of research in neuroscience. Although I am a very optimistic person, I am unsure whether this feat can be achieved during my lifetime. I eagerly await the day when machines are up to it, and until that day arrives, I can poignantly say that artificial intelligence is no match for natural human stupidity.