Mind and Machine: Making Sense of Artificial Intelligence
Published Oct 1, 2012 8:00 AM
Smartphones that talk. Self-driving cars. Robots that think. All fruits of artificial intelligence research that are here now or just around the corner. UCLA scientists and engineers are in the vanguard of the study of how to build “thinking” machines—and their present and possible future impact on human society.
Admittedly, it would have been the easy way out, but still I had to ask. So I held down the home button on my iPhone, waited for the words “What can I help you with?” to appear, then posed the question.
“Will you write an article on artificial intelligence for me?”
If anyone should have known enough about AI to pound out a 2,000-word piece, it was Siri. Yet her response was completely unsatisfying.
“I’ve never really thought about it.”
Maybe artificial intelligence really is an oxymoron.
Or maybe not. In the same week, a Google search pointed me toward several reports by publications as prestigious as Forbes of the growing use of articles written not by people, but by computers, unbeknownst to the average reader. (For the record, this one was not.)
It turns out that artificial intelligence plays a bigger part in our lives than many of us realize. The Google search that found those articles was powered by AI. The navigation systems now nearly ubiquitous in cars and on smartphones resulted from an early AI application. Credit-card authorization systems employ AI to search for unusual patterns of activity that might indicate fraud. In Afghanistan, robots deployed by U.S. forces number in the thousands. Then there are the more direct encounters, like the customer service “representatives” who do all they can to answer your questions without putting you through to a live person.
AI is based on the notion that knowledge and thought can be represented and manipulated through computer algorithms so as to build a thinking machine. “When you see a phenomenon exhibited by humans, it must be that computers are able to simulate it,” says Judea Pearl, UCLA professor of computer science. “We may not be able to get machines to do everything humans can do, but along the way we will make a lot of progress and learn so much about ourselves that it will be beneficial even if we don’t.”

“Five years ago, AI was present but not visible to the average person; now it’s become more obvious in everyday use, and the iPhone is a great example,” adds Dario Nardi, a fellow at UCLA’s Center for Governance who until recently taught an undergraduate honors course in Westwood on artificial intelligence featuring Truman, a “socially adept” robot he created that lives online and has a human face, expressions and gestures. “The funny thing is, in a few years we’ll take that for granted, too, and consider it invisible.”
Machines That Understand Maybe
Most artificial intelligence experts now take it as a given that AI will play a much greater role in our lives in the future, with predictions of increasingly sophisticated, autonomous robots that will reduce life’s drudgery and interact with us in a more user-friendly way. And it is UCLA’s Pearl, as much as anyone, who has transformed the field through his work in the last three decades.
In March, Pearl received the prestigious 2011 A.M. Turing Award, the computer science field’s equivalent of the Nobel Prize, given by the Association for Computing Machinery (ACM). The award is named for British mathematician Alan M. Turing, who produced the first substantive writing on artificial intelligence after World War II.
“Before Pearl, most AI systems understood true or false, but had a hard time with ‘maybe,’ ” Alfred Spector, vice president of research and special initiatives at Google, noted in an ACM press release. “That meant that early AI systems tended to have more success in domains where things are black and white, like chess.”
But Pearl developed a method for delivering “maybe,” or in scientific terms, probabilistic and causal reasoning, using what he dubbed a “Bayesian network.” The idea was to mimic the neural activities of the human brain – breaking up impossibly large numbers of variables into smaller chunks of interrelated ones. Without the need for supervision, these networks constantly exchange messages in response to new information, using statistical and probabilistic inferences. Pearl’s work laid the foundation for computers that reason about actions and observations while assessing cause-effect relationships. The concept has found its way into a remarkable range of applications – medical diagnosis and gene mapping, credit-card fraud detection, homeland security, speech recognition systems and Google searches, to name a few. Pearl also used Bayesian networks to advance a new way of understanding and measuring causality in wide-ranging scientific disciplines such as psychology, economics, epidemiology and social sciences.
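The flavor of reasoning a Bayesian network supports can be sketched in a few lines of Python. This is not Pearl’s algorithm itself, just a hand-rolled illustration in the spirit of the credit-card fraud example: a tiny two-edge network (Fraud → UnusualLocation, Fraud → LargePurchase) with invented probabilities, where observing evidence updates the “maybe” of fraud via Bayes’ rule.

```python
# Minimal Bayesian-network inference by enumeration.
# Network: Fraud -> UnusualLocation, Fraud -> LargePurchase.
# All probability values below are invented for illustration.

P_FRAUD = 0.01
# Conditional probability tables, keyed by the parent (Fraud) value.
P_LOCATION = {True: 0.30, False: 0.02}   # P(UnusualLocation=True | Fraud)
P_PURCHASE = {True: 0.50, False: 0.10}   # P(LargePurchase=True | Fraud)

def posterior_fraud(unusual_location: bool, large_purchase: bool) -> float:
    """Return P(Fraud=True | evidence) by summing the joint distribution."""
    def joint(fraud: bool) -> float:
        p = P_FRAUD if fraud else 1 - P_FRAUD
        p *= P_LOCATION[fraud] if unusual_location else 1 - P_LOCATION[fraud]
        p *= P_PURCHASE[fraud] if large_purchase else 1 - P_PURCHASE[fraud]
        return p
    return joint(True) / (joint(True) + joint(False))

# A card suddenly used in an unusual place for a large purchase:
print(round(posterior_fraud(True, True), 3))  # prints 0.431
```

A 1-percent prior belief in fraud jumps to roughly 43 percent once both pieces of evidence arrive – exactly the graded “maybe” that early true-or-false systems could not express.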
"Can Machines Think?"
The integral role of UCLA faculty in the development of AI can be traced to the field’s roots. Turing himself had been a student of Alonzo Church, a mathematician and philosopher who served on the UCLA faculty from 1967 to 1990. The Church-Turing thesis – any function that can be described by an algorithm can be computed by a machine – is the intellectual heart of AI.
In his seminal 1950 paper “Computing Machinery and Intelligence,” which opened with the question, “Can machines think?,” Turing proposed what became known as the “Turing Test” – a computer could be considered intelligent if a human couldn’t distinguish whether he was interacting with another human or a machine.
In 1972, two years before he joined the UCLA faculty, psychiatrist Kenneth Colby developed PARRY, a computer program that mimicked a paranoid schizophrenic in typed conversation, for use as a psychiatry training tool. PARRY was the first machine to pass the Turing Test. Some two decades later, Charles Taylor, professor of ecology and evolutionary biology and a member of the UCLA faculty since 1980, was part of a group that was instrumental in developing machines with life-like properties, including the ability to learn and evolve.
The dawn of AI as a scholarly pursuit is dated to a conference on the campus of Dartmouth College in the summer of 1956, where many predicted that a machine as intelligent as a human would exist within a generation. That turned out to be wildly optimistic.
“By the time I was starting, the incredible difficulty of these problems was recognized,” Taylor says. But guided by Pearl and other luminaries – and riding the wave of exponential advances in computing power – AI is making rapid strides in several of the sub-fields that have developed in the ensuing decades.
Bats, Blood and Self-Driving Cars
One of the classic AI challenges, addressed by Pearl’s work as well as that of Richard Korf, UCLA professor of computer science, is what’s known as heuristics, or combinatorial optimization – finding efficient algorithms for problems so large that an exhaustive search isn’t possible. In programming a computer to play chess, for example, the number of possible moves, potential responses to those moves, moves the computer can make in return, etc., grows exponentially. How to develop the most efficient methods for such tasks is a central concern in AI.
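The heuristic idea can be made concrete with a standard example (not drawn from Pearl’s or Korf’s own code): A* search on a toy grid. A cheap, admissible estimate of remaining distance – here, Manhattan distance – steers the search toward the goal so that only a fraction of the exponential state space is ever examined.

```python
import heapq

def a_star(start, goal, walls, size):
    """Shortest path length on a size-by-size grid using A* search.
    The Manhattan-distance heuristic never overestimates the true
    remaining cost, so A* finds an optimal path while expanding far
    fewer states than an exhaustive search would."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start)]     # (estimated total, cost so far, cell)
    best = {start: 0}
    while frontier:
        _, cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size and nxt not in walls:
                new_cost = cost + 1
                if new_cost < best.get(nxt, float("inf")):
                    best[nxt] = new_cost
                    heapq.heappush(frontier, (new_cost + h(nxt), new_cost, nxt))
    return None  # goal unreachable

# 5x5 grid with a short wall; the optimal route around it has length 8.
print(a_star((0, 0), (4, 4), {(2, 1), (2, 2), (2, 3)}, 5))  # prints 8
```

Chess programs face the same trade-off at a vastly larger scale: a good evaluation heuristic decides which branches of the game tree are worth exploring at all.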
Natural language processing presents another major AI problem, as anyone who has conversed with Siri can attest. “How to get a machine to really understand the meaning of a word, a sentence, a joke or an editorial the way people do is a huge challenge,” says Michael Dyer, UCLA professor of computer science and a leader in the language-processing field. A person hearing a statement like “John picked up a bat and hit Bill; there was blood everywhere” understands context and relationships enough to automatically conclude that John hit Bill with a baseball bat and that the blood belonged to Bill, although none of that is explicitly stated. “The more intelligent we are, the less we have to say to each other,” Dyer explains.
A third key AI problem involves vision – getting machines to recognize and understand images. “Although we understand quite a bit of the anatomy and physiology of the brain, we really do not know how information is represented, stored and manipulated by the brain,” says Stefano Soatto, UCLA professor of computer science and a computer vision expert. “And yet, one can infer enough to solve a number of tasks.”
Among the most anticipated applications of computer vision are self-driving cars. The first vision-guided vehicles were successfully demonstrated in Munich, Germany, more than 20 years ago. Soatto’s group built a self-driving car for the U.S. Department of Defense’s DARPA Grand Challenge in 2005, and one of the group’s company partners has since built systems for vision-based driver assistance and autonomous driving. Self-driving vehicles have been available for sale in Japan since 2006, and aren’t far from hitting the market in the United States, where Google’s driverless fleet has tallied more than 200,000 miles, including navigation on public roads.
Two Robots Walk Into a Bar…
Some in the AI field believe it’s not a matter of if but when we will put all of the problems together and create artificially intelligent machines that are smarter than humans – a concept that’s been called “The Singularity.” It won’t be easy, and for that reason some suggest that even projections putting it as few as 20 years away are overly optimistic. “Just because the chip speed is going to be fast enough to duplicate a human brain in terms of performance doesn’t mean it will have anything smart in there,” notes Nardi.
Indeed, much depends on how one defines intelligence. Computers are already better than humans at many things, whether it’s storing huge databases (you try memorizing thousands of phone numbers) or playing chess. On the other hand, notes Pearl, it’s hard to imagine machines outstripping comedians when it comes to something like writing jokes. “Humor takes a deep knowledge of oneself, the listener, the society and the context in which we are living,” he says.
For all that ultrafast computers can do, humans have an advantage of their own. “Our computational power derives from these 100 billion building blocks in the brain: neurons,” says Dean Buonomano, a UCLA neuroscientist with an interest in using AI models to gain insights into how the brain operates. Each neuron is “talking” to thousands of other neurons, and it is in these connections that our knowledge is stored. By contrast, Buonomano describes the computer’s transistors as “introverted computational devices that talk to only a few of their neighbors.”
Dyer says it was the ability of IBM’s pop-culture celebrity AI Watson to pick up on the relationship of concepts in “Jeopardy!” clues that enabled the AI system to defeat the popular game show’s top players in a head-to-head-to-machine matchup. “If you can find the intersection of two features of a ‘Jeopardy!’ question, you have a very good chance of getting the answer,” Dyer explains, adding that while Watson can’t answer questions about something unstated, “it can do intersection search on a lot of connected text.”
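The intersection search Dyer describes can be sketched very simply. The snippet below is a toy stand-in, not Watson’s actual pipeline: invented text associations link candidate answers to clue features, and the candidates that appear under every feature are the likely answers.

```python
# Toy "intersection search" over invented (entity, feature) associations,
# standing in for the huge body of connected text a system like Watson mines.
associations = [
    ("Amelia Earhart", "aviator"),
    ("Amelia Earhart", "Atlantic crossing"),
    ("Amelia Earhart", "first woman"),
    ("Charles Lindbergh", "aviator"),
    ("Charles Lindbergh", "Atlantic crossing"),
]

def candidates_for(feature):
    """All entities the corpus links to a single clue feature."""
    return {entity for entity, feat in associations if feat == feature}

def intersection_search(*features):
    """Entities associated with every feature of the clue at once."""
    result = candidates_for(features[0])
    for feature in features[1:]:
        result &= candidates_for(feature)
    return result

# Clue features: "first woman" + "Atlantic crossing"
print(intersection_search("first woman", "Atlantic crossing"))
# prints {'Amelia Earhart'}
```

Each feature alone yields multiple candidates; their intersection narrows the field to one – which is why, as Dyer notes, finding the overlap of two clue features gives “a very good chance of getting the answer.”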
How Smart Is Too Smart?
AI is moving toward a “humanoid” robot – one with vision and the ability to plan, reason and understand emotion; one with common sense and the ability to learn; one able to balance itself and move about. Already, whether it’s autonomous vehicles or the robots being built to provide home care for the elderly, artificially intelligent machines are increasingly acting on their own. “For robots to be really useful they have to learn and evolve, and these are likely to escape human control,” observes Taylor.
“The programmer can’t predict how these systems will end up behaving after certain sequences of inputs; it’s just too complex,” adds Colin Allen Ph.D.’89, co-author of “Moral Machines: Teaching Robots Right from Wrong” and a professor in the Department of History and Philosophy of Science at Indiana University. “And furthermore, the machine is reconfiguring itself as it goes, so you can get not just unexpected and unpredictable consequences, but configurations that weren’t explicitly intended by the programmer.”
For Allen, the emergence of AI systems that can adapt and learn raises the question of whether they can be designed to do the right thing in circumstances that call for ethical judgments. When a machine’s job is to ensure that an elderly patient’s pills are taken and the patient is saying no, can the machine insist up to a certain point but back off when the patient’s autonomy is the more important value? Suppose an autonomous driving car system has the capacity to detect that a passenger in the car is gravely ill, or a woman is in labor. Can it be programmed to correctly weigh the risks of speeding to the hospital against not getting the passenger there soon enough?
A society in which robots run things sounds like the stuff of science fiction. But is it?
“It’s plausible that we’ll never have machines that are as intelligent as we are, but it’s also plausible that we’ll one day have machines that are much more intelligent than we are,” says Korf. “If we lived in such a world, what sort of use would we be? Would the machines keep us around as pets? This is pretty far afield, but not completely inconceivable.”
Despite these concerns, Allen is optimistic about a future in which artificial intelligence plays a pivotal role. “All technologies have unintended consequences,” he says. “Nobody thought to predict the amount of pollution that would result from automobiles or the percentage of our urban environment that would be built around them, but they also drive all kinds of positive economic and political outcomes. There have always been those who said various technologies would represent the end of everything good, but they’ve been proven false over and over again.”