Artificial Intelligence Pioneer: We Can Build Robots With Morals

Artificial intelligence pioneer won the world's top computing prize

Jason Koebler

U.S. News & World Report

March 19, 2012

Judea Pearl, a pioneer in the field of artificial intelligence, won the Association for Computing Machinery's A.M. Turing award Thursday, considered the highest honor in the computing world.
Pearl developed two branches of calculus that opened the door for modern artificial intelligence, such as the kind found in voice recognition software and self-driving cars.
Vint Cerf, considered one of the "fathers of the Internet," said in a statement that Pearl's development of probabilistic and causal reasoning changed the world.
"His accomplishments over the last 30 years have provided the theoretical basis for progress in artificial intelligence and led to extraordinary achievements in machine learning," he said. "They have redefined the term 'thinking machine.' "
The calculus Pearl invented propels probabilistic reasoning, which allows computers to establish the best courses of action given uncertainty, such as a bank's perceived risk in loaning money when given an applicant's credit score.
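The bank-loan example can be sketched with Bayes' rule, the core operation of the kind of probabilistic reasoning Pearl formalized. This is a minimal illustration, not Pearl's actual calculus, and all the probability values are invented for the example.

```python
# Minimal Bayes-rule sketch of probabilistic reasoning: a bank updating its
# belief that an applicant will default after seeing a low credit score.
# All probabilities below are made-up illustrative values.

p_default = 0.05                 # prior: fraction of applicants who default
p_low_given_default = 0.80       # P(low score | applicant defaults)
p_low_given_ok = 0.20            # P(low score | applicant repays)

# Total probability of observing a low score
p_low = p_low_given_default * p_default + p_low_given_ok * (1 - p_default)

# Bayes' rule: P(default | low score)
p_default_given_low = p_low_given_default * p_default / p_low

print(round(p_default_given_low, 3))  # a "maybe" answer, not true/false
```

The point of the sketch is the contrast with Boolean logic: the output is a degree of belief that shifts with evidence, rather than a true/false verdict.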
"Before Pearl, most AI systems reasoned with Boolean logic--they understood true or false, but had a hard time with 'maybe,' " Alfred Spector, vice president of research and special initiatives at Google, said of his work.
The other calculus he invented allows computers to determine cause-and-effect relationships.
At 75, Pearl, who is the father of slain Wall Street Journal reporter Daniel Pearl, is currently working on a branch of calculus that he says will allow computers to consider the moral implications of their decisions. U.S. News & World Report talked with him about the past and future of artificial intelligence.
Artificial intelligence has improved by leaps and bounds over the past few years--what's the greatest hurdle for scientists working on making machines more humanlike?
There are many hurdles. There's the complexity of being able to generalize, an array of technical problems. But we have an embodiment of intelligence inside the tissue inside our skull. It's proof that intelligence is possible; computer scientists just have to emulate the brain in silicon. The principles should be the same, because we have proof that intelligent behavior is possible.
I'm not futuristic, and I won't guess how many years it'll take, but this goal is a driving force that's inspiring for young people. Other disciplines can be pessimistic, but we don't have that in the field of artificial intelligence. Step by step we overcome one problem after the other. We have this vision that miraculous things are feasible and can be emulated in a system that is more understandable than our brain.

What do you think is the most impressive use of artificial intelligence that most people are familiar with?
I think the voice recognition systems that we constantly use, as much as we hate them, are miraculous. They're not flawless, but what we have shows it's feasible and could one day be flawless. There's the chess-playing machine we take for granted. A computer can beat any human chess player. Every success of AI becomes mundane and is removed from AI research. It becomes routine in your job, like a calculator that performs arithmetic, winning in chess--it's no longer intelligence.
So what's next? What are people working on that'll be world changing?
I think there will be computers that acquire free will, that can understand and create jokes. There will be a day when we're able to do it. There will be computers that can send jokes to the New York Times that will be publishable.
I try to avoid watching futuristic movies about super robots--the ones that show machines trying to take over, supposedly revealing the limitations of computers. They don't interest me.
Do you think those movies scare people off? Are they detrimental to the field?
I think they tickle the creativity and interest of young people in AI research. It's good for public interest, they serve a purpose. For me, I don't have time. I have so many equations to work on.
What are you working on now?
I'm working on a calculus for counterfactuals--sentences that are conditioned on something that didn't happen. If Oswald didn't kill Kennedy, then who did? Sentences like that are the building blocks of scientific and moral behavior. We have a calculus that if you present knowledge about the world, the computer can answer questions of the sort. Had John McCain won the presidency, what would have happened?
Sort of like an alternative reality?
It's kind of like an alternative reality--you have to give the computer the knowledge. The ability to process that knowledge moves the computer closer to autonomy. It allows them to communicate by themselves, to take responsibility for their actions--a kind of moral sense of behavior. These are interesting issues--we could build a society of robots able to communicate with a notion of morals.
But we don't have to wait until we build robots. The theory of econometric prediction is changing because we have counterfactual calculus. Should we raise taxes? Should we lower interest rates? If the government raises taxes, will that pacify the unions? It's been a stumbling block for the past 150 years. We can assume something about reality before we take an action.

Computer Competes in Crossword Tournament

Written by Alex Armstrong
Monday, 19 March 2012 10:56
Can a computer program beat the best human crossword puzzle solvers? Not yet, according to the results of last weekend's American Crossword Puzzle Tournament, in which the computer was foiled by the ingenuity of the human puzzle setters.
Artificial intelligence can outwit humans at chess and at knowledge quizzes, as demonstrated by IBM's Watson on the Jeopardy TV show last year, but it hasn't yet honed the subtle skills required to beat us at solving crossword puzzles.

Dr.Fill, a computer program devised by Matt Ginsberg, an artificial intelligence scientist who also constructs crossword puzzles for the New York Times, was an unofficial entrant in the 35th annual crossword challenge in which players solve seven puzzles created especially for the event.
Dr.Fill's participation in the competition was unofficial, in that only humans can win the challenge, but it was also well publicized. Matt Ginsberg presented a talk on the inner workings of his program, whose name is a play on crosswording and talk-show host Dr. Phil McGraw. This is a topic on which he has already published an academic paper in the Journal of Artificial Intelligence Research.
Prior to the event, Ginsberg had expected the program to do well, predicting a place in the top 30 on the basis of its having excelled in simulations of most of the 15 past tournaments.
However, as Steve Lohr pointed out, writing in the New York Times in advance of the contest:
Humans and machines play the games very differently. Humans recognize patterns based on accumulated knowledge and experience, while computers make endless calculations to determine the most statistically probable answer.
Lohr also quoted artificial intelligence expert and Google research director, Peter Norvig as saying:
“We’re at the point where the two approaches are about equal. But people have real experience. A computer has a shadow of that experience.”
In the event it was the ability of humans to notice and adapt to changes in established patterns that gave them the advantage.
Although Dr.Fill completed the seventh and final puzzle, supposedly the most difficult one, perfectly, it was stumped by the second and fifth puzzles, which were described as "particularly innovative". One of them included words that had to be spelled backwards, and the other required answers not just across and down, but diagonally as well.
Before the contest, last year's winner Dan Feyer (who went on to win again last weekend) said he expected that the contest would include "a puzzle or two that involved innovative twists or patterns to trip up Dr. Fill."
So were puzzles chosen deliberately to put the computer program at a disadvantage? Tournament organizer Will Shortz shook his head and smiled when that question was put to him.
So given these novel twists, which Dr.Fill could not have anticipated, its ranking of 141st out of 600, which places it in the top quarter of human contestants, isn't too shabby a score. Hopefully it won't deter Matt Ginsberg from pursuing his AI crossword-solving hobby. The hobby is one which fits well with his full-time occupation: he is chief executive of On Time Systems, whose software is used by the United States Air Force for calculating the most efficient flight paths for aircraft. Both areas share some of the same techniques - weighted CSPs (Constraint Satisfaction Problems).
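To make the shared technique concrete: in a weighted CSP, constraints return costs rather than a hard pass/fail, and a solver looks for the assignment with the lowest total cost. The tiny brute-force sketch below uses invented variables and costs purely for illustration; real solvers like Dr.Fill's use far more sophisticated search.

```python
from itertools import product

# Toy weighted CSP, solved by brute force. Each constraint contributes a
# cost (0 = fully satisfied), and the "solution" is the assignment that
# minimizes the total cost. Variables, domains and costs are made up.

domains = {"x": [0, 1, 2], "y": [0, 1, 2]}

def total_cost(assign):
    cost = 0
    cost += 0 if assign["x"] != assign["y"] else 5   # soft "x differs from y"
    cost += abs(assign["x"] - 2)                     # prefer x near 2
    cost += assign["y"]                              # prefer small y
    return cost

names = list(domains)
best = min(
    (dict(zip(names, values)) for values in product(*domains.values())),
    key=total_cost,
)
print(best, total_cost(best))  # → {'x': 2, 'y': 0} 0
```

Crossword filling fits this mold naturally: candidate words for each slot are the domains, and crossing letters and answer plausibility supply the (soft) constraints.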
Watson wins on Jeopardy
Thursday, 17 February 2011 09:03
The Jeopardy competition, in which IBM's Watson computer took on two former champions of the game designed to demonstrate human intelligence, had a conclusive outcome - machine overcame man.
Watson, the computer named after the founding father of IBM, finished with $77,147, compared to $24,000 for Ken Jennings, who won 74 games in a row during the show's 2004-05 season, and $21,600 for Brad Rutter, another top Jeopardy contestant who had previously earned a cumulative $3.3 million on the show.
The result wasn't unexpected given that Watson had had a pretty decisive victory during the practice round in January - see AI does well on Jeopardy.

"I for one welcome our new computer overlords," Jennings wrote next to his last answer, displaying one human quality conspicuously absent in Watson - a sense of humor.
Analysis of the errors that the machine made reveals a lot about the differences between the human and machine approaches to the problem. It also did a reasonably convincing job of rational betting - but that is something we would expect a machine to master.
IBM is clearly pleased with the result and has lots of plans to use the Watson technology in a range of commercial AI systems - everything from medical diagnosis to legal advice. Overall, however, this isn't an AI breakthrough. It is mostly the application of fairly simple algorithms on a scale that we haven't seen before, and if it proves anything, it proves that with enough computing power even the simple looks smart.
If you want to know more about Watson and the AI techniques involved, read our in-depth analysis: Watson wins Jeopardy! - trick or triumph.
IBM plans to donate all of Watson's winnings to charity.
Watson wins Jeopardy! - trick or triumph
Written by Mike James
Friday, 18 February 2011 07:28
IBM's Watson finished the recent contest between man and machine with $77,147, compared to $24,000 for Ken Jennings and $21,600 for Brad Rutter, another top Jeopardy! champion. This is amazing and there is plenty of talk of the "day of the machine". But wait! Watson doesn't think or understand anything. It's not even a question-answering machine - but an answer-questioning machine which is perhaps a whole lot simpler. So is it a triumph of machine over man or of publicity over fact?

IBM has pulled off a triumph of publicity, if not AI (artificial intelligence), in creating a machine that can beat seasoned players at the game of Jeopardy!. As well as the publicity from the show, IBM made campus appearances and generally generated enthusiasm among students. The result is that IBM now looks like a cool company to work for and has moved from grey and outdated to being up there with Google, Twitter and Facebook. It has also raised the public perception of AI and what computers can do beyond word processing and browsing the web. If you are a semantic engineer or an expert in machine learning, then you can expect to be in more demand in the future.
For this at least IBM deserves thanks but is Watson AI or is it a trick?
How AI works
AI is a strange subject because, by its nature, it is doomed to be perceived as having failed. Consider how it works. A human does something like play chess or answer questions on Jeopardy!, and we immediately credit the behavior as an example of intelligence. Intelligence is what humans, and occasionally some animals, are assumed to have without having to prove anything much apart from engaging in the activity.
Now compare this to a chess-playing program. At first it looks impressive, especially when it wins, but when you examine how it does the job you discover that it's an easy-to-follow algorithm. You can find out exactly how a chess program works in terms of searching and evaluating the next move in terms of what might happen n moves on. Even the most sophisticated variations on the algorithm seem simple and crude. Even though a program can beat a grand master, as IBM's previous AI stunt, Deep Blue, did, it just doesn't seem to be made of the same stuff as human intelligence.
If you ask what would be required of an AI program to impress you enough to be worth calling intelligent, what you end up demanding is a large slice of "mystery". Every time AI succeeds in reducing a human behavior to an algorithm, it immediately changes from intelligence to a machine procedure. With this in mind, it is time to look at Watson.

Watson - a statistical approach to AI
The reason why Watson is impressive is that to complete the task it has to bring together a range of separate AI techniques. It has to understand natural language well enough to process the clue and formulate a response. But first notice that Jeopardy! is the reverse of a standard quiz show: it provides the answers and the contestants have to formulate the questions. There are also elements of question selection and betting to be mastered, but the main AI task is to formulate a question given the answer.
This task would seem to need complete understanding at a very human level, but that impression would probably be wrong. The key idea in most of the really successful applications of AI in the last few years has been statistics. The statistical approach to AI may have produced many successes - Google Translate for example - but many regard it as unsatisfying in the sense described earlier.
Suppose we have the very simple AI task of writing a program to guess the number you have just thought of - yes it's silly but illustrates the point. A valid AI approach would be to try to use subtle hints from your psychology and recent experiences to formulate a model of your cognitive functions and so work out your most likely numeric selection. The statistical approach would simply get you to play the game millions of times and work out statistically what number was most likely. The statistical approach to language understanding has only recently become possible because of the huge amount of language data that the web provides.
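The statistical side of the toy game can be sketched in a few lines: ignore psychology entirely, record past games, and guess the most frequent pick. The game history below is an invented sample.

```python
from collections import Counter

# The "statistical" approach from the text: no cognitive model at all,
# just a tally of what numbers were picked in past games.
history = [7, 3, 7, 7, 1, 9, 7, 3, 7, 5, 7, 3]  # made-up past picks

def statistical_guess(past_picks):
    """Guess the single most frequently chosen number."""
    return Counter(past_picks).most_common(1)[0][0]

print(statistical_guess(history))  # → 7, the modal pick
```

The same shift - from modelling the mechanism to counting outcomes over huge datasets - is what the web's text corpora made possible for language.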
What Watson does is to take the input question and use some syntax analysis on it, but not with the intention of understanding the question - just to split it into functional fragments. These are then used to discover what the question is by a statistical process of searching the knowledge base for something that has entries corresponding to the data in the question. When an entity is found that matches the features of the question, it is considered as a possible answer. A set of heuristic confidence levels is computed, and the final answer is constructed from the entity that has the best confidence level. The confidence level is also used to determine whether Watson should buzz in or not.
For example suppose the question was:
Category: "Rap" Sheet
Clue: This archaic term for a mischievous or annoying child can also mean a rogue or scamp.
The clue is processed to create fragments such as "mean" and "rogue or scamp". These are then looked up in the knowledge base, and if an entry contains "rogue or scamp", or "mean(s)" together with "rogue or scamp", then that entry is the possible subject of the answer.
Lexical Answer Type
Notice that this sort of matching would be done on multiple fragments, and syntax would be used to guide the sort of results that rank highest. It is all a question of working out what entity the information is about - the LAT, or Lexical Answer Type. In this case the LAT is "This archaic term", i.e. we are looking for a word. Knowing the LAT allows Watson to pick out the item in the matched record that is the answer. In the case of this example the entry has to be a word - Rapscallion in this case. Watson can then perform a simple transformation to get the question form of the answer: "What is Rapscallion?".
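The whole loop - fragment matching, a confidence level, a buzz-in threshold, and the question-form transformation - can be caricatured in a few lines. This is a toy illustration of the idea, not IBM's actual pipeline, and the two-entry knowledge base is invented.

```python
# Toy sketch of the approach described above: score knowledge-base entries
# by the fraction of clue fragments they contain, treat that fraction as a
# crude confidence level, and only "buzz in" above a threshold.

KNOWLEDGE_BASE = {  # invented miniature knowledge base
    "rapscallion": "archaic term for a mischievous child; a rogue or scamp",
    "urchin": "a mischievous young child, especially one who is poor",
}

def answer(fragments, threshold=0.5):
    """Return a Jeopardy!-style question, or None (stay silent)."""
    best_entity, best_score = None, 0.0
    for entity, entry in KNOWLEDGE_BASE.items():
        matched = sum(1 for frag in fragments if frag in entry)
        score = matched / len(fragments)        # crude confidence level
        if score > best_score:
            best_entity, best_score = entity, score
    if best_score >= threshold:                 # buzz-in decision
        return f"What is {best_entity}?"
    return None

print(answer(["archaic term", "rogue or scamp"]))  # → What is rapscallion?
```

Note that nothing here "understands" the clue; the answer falls out of matching and ranking, which is exactly the statistical flavour the article is pointing at.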
Of course this misses out lots of the detail in what Watson is doing, but it gives you a flavour of the overall approach. The categories that the questions fall into can be used to narrow down the LAT. The question also needs to be treated in different ways depending on its overall type. For example,