Artificial Intelligence: Myths and Methods


The desire to project human intellect into inanimate objects is an ancient one. From antiquity there have been examples of automatons designed to mimic human behavior and actions. It took the invention and refinement of the computer to turn the effort into a seemingly attainable goal. During the 20th century, many attempts were made to create systems that imitate human intellect, with varying degrees of success. By the 21st century, Artificial Intelligence had moved from a theoretical possibility into a crucial engine that enables many of our daily activities. Despite this, the mechanisms and potential of AI tend to be poorly understood.



Charles Babbage's Analytical Engine (1871) - an early attempt at AI


Many supervised techniques build probabilistic models that simulate intelligent action by developing extremely complicated relationships between stimulus and response. These include Bayesian networks, neural networks that loosely mimic the function of brain cells, and Markov models, which capture the probability of moving from one state to the next. These models need to be trained, and anyone who has given a thumbs up or down on Pandora has some experience in training such a system.
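
To make the "training by thumbs" idea concrete, here is a minimal naive-Bayes-style sketch (a much-simplified relative of the Bayesian networks described above, not Pandora's actual algorithm). The song attributes and class labels are made up for illustration; each vote nudges the probabilities the model uses to predict whether you will like a new song.

```python
# Toy sketch: thumbs up/down votes update per-attribute like/dislike counts,
# which are then combined with a naive independence assumption.
from collections import defaultdict

class ThumbsTrainer:
    def __init__(self):
        # counts[attribute] -> votes seen, starting at 1/1 for smoothing
        self.counts = defaultdict(lambda: {"like": 1, "dislike": 1})

    def train(self, attributes, liked):
        label = "like" if liked else "dislike"
        for attr in attributes:
            self.counts[attr][label] += 1

    def probability_liked(self, attributes):
        # Multiply the evidence from each attribute, then normalize.
        p_like, p_dislike = 1.0, 1.0
        for attr in attributes:
            c = self.counts[attr]
            total = c["like"] + c["dislike"]
            p_like *= c["like"] / total
            p_dislike *= c["dislike"] / total
        return p_like / (p_like + p_dislike)

trainer = ThumbsTrainer()
trainer.train(["acoustic", "female_vocals"], liked=True)   # thumbs up
trainer.train(["electronic", "fast_tempo"], liked=False)   # thumbs down
print(trainer.probability_liked(["acoustic", "fast_tempo"]))
```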



A (very) simple example of a Markov decision tree.


Unsupervised techniques do not need labeled examples. They include cluster analysis, which identifies similar items and groups them together; such techniques are used for object recognition, pattern recognition, and facial recognition. A related technique is reinforcement learning, which builds on Markov models but can develop complex response systems on its own through trial and error, without being shown a 'correct' answer. This technique was a component of the AI that defeated a top human player at the game of Go in 2016. But the consequences of AI go beyond the mere functioning of its systems and include larger ideas about its impact and meaning.
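
As a small illustration of cluster analysis, the sketch below is a bare-bones k-means that groups 2-D points into clusters with no labels or outside help. Real systems work on far richer features (pixels, faces, behavior) and use mature libraries, but the grouping idea is the same.

```python
# Toy k-means: repeatedly assign points to the nearest center,
# then move each center to the mean of its assigned points.
import random

def kmeans(points, k, iterations=20):
    centers = random.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for x, y in points:
            nearest = min(range(k),
                          key=lambda i: (x - centers[i][0]) ** 2 + (y - centers[i][1]) ** 2)
            clusters[nearest].append((x, y))
        for i, cluster in enumerate(clusters):
            if cluster:  # keep the old center if no points were assigned
                centers[i] = (sum(p[0] for p in cluster) / len(cluster),
                              sum(p[1] for p in cluster) / len(cluster))
    return centers, clusters

points = [(1, 1), (1.5, 2), (1, 1.5), (8, 8), (8.5, 9), (9, 8)]
centers, clusters = kmeans(points, k=2)
print(centers)  # two centers, one near each natural group
```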


In 1980, American philosopher John Searle outlined an important distinction within AI that still resonates today. In his paper, he breaks Artificial Intelligence into two general areas: strong and weak. Strong AI is actually self-aware; weak AI only simulates sentience. He goes on to refute the possibility of strong AI through a hypothetical experiment called the 'Chinese Room.' In this scenario, a person who does not speak Chinese sits in a room filled with boxes of Chinese characters that she can't understand and a book of instructions, which she can understand. If a Chinese speaker were to slip cards with Chinese characters under the door, the person in the room would be able to use the book to figure out which character should be used to respond to each card. From the outside, it would appear as if the person in the room spoke fluent Chinese, when in actuality, she did not. Using this analogy, Searle projected the problem onto strong AI, suggesting that no matter how well you program a computer to exhibit intelligence, the computer itself is never actually intelligent, but only simulates awareness. This may seem like a lofty idea, but it has distinct effects on what we expect from Artificial Intelligence. In other words, if our expectations of AI include abilities it does not have, and we then assign it responsibilities based on those assumptions, the result can be a recipe for disaster.
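
To see the point in code, here is a deliberately trivial rendering of the room (my own toy example, not from Searle's paper): the program produces plausible replies purely by matching symbols against a rule book, with no grasp of what any of the characters mean.

```python
# A "Chinese Room" as a lookup table: symbols in, symbols out, zero understanding.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # hypothetical rules; the operator never reads them as language
    "你叫什么名字？": "我没有名字。",
}

def room_operator(card):
    # Follow the instruction book mechanically.
    return RULE_BOOK.get(card, "请再说一遍。")

print(room_operator("你好吗？"))  # looks fluent from outside the room
```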


Debates about the future and meaning of AI vary widely in the public sphere. Noted physicist Stephen Hawking feared a sort of 'terminator' future in which AI has the potential to overtake the better intentions of its creators and damage or even destroy humanity. Others, like Google executive and famed inventor Ray Kurzweil, believe that artificial intelligence will overtake human abilities by 2029. Kurzweil expects this to lead to the 'technological singularity,' which he believes will usher in a new age of humanity in which AI enables the transcendence of the problems that have long plagued us. Less extreme opinions regarding the future of AI accept its potential benefits, but are realistic about the possible pitfalls. Tech scholar Benjamin Bratton warns not of an AI run amok, but of the dangers of misunderstanding its limitations. In his view, the 'intelligence' of AI is not a human intelligence, and pretending that AI systems can act as surrogate humans makes us vulnerable to giving them responsibilities they are not designed to handle.



Competing extreme visions of the future of AI: technological singularity vs. dangerous terminator


The potential for over-dependency on Artificial Intelligence has already generated controversy. From the current testing of driverless cars in some cities, to the recent replacement of human workers with AI, to injustices that may result from algorithmic bias, consideration of the ethical and legal effects of AI is now an imperative. Thankfully, there are those in the computer science field who are putting forward ideas that may help mitigate these concerns. These include laws requiring that users be alerted when an AI system is in use, so it cannot be mistaken for a human, and 'ethics bots' that can analyze a self-learning system faster than humans can, to make sure that AI systems obey laws and treat people fairly. These considerations are an important first step in safeguarding and standardizing the safe and fair use of AI (at least until the singularity or the terminators arrive).



Posted by Joe Cantrell | Request as a Speaker

Creative Research Specialist. Asker of technological questions, explainer of things, voider of warranties