Science just took us a small step closer to HAL 9000.
A new artificial intelligence (AI) program designed by Chinese researchers has beaten humans on a verbal IQ test.
Scoring well on the verbal section of the intelligence test has traditionally been a tall order for computers, since words have multiple meanings and complex relationships to one another.
But in a new study, the program did better than its human counterparts who took the test.
The findings suggest machines could be one small step closer to approaching the level of human intelligence, the researchers wrote in the study, which was posted earlier this month to the online preprint database arXiv but has not yet been published in a scientific journal.
IQ isn't the end-all, be-all measure of intelligence
Don't get too excited just yet: IQ isn't the end-all, be-all measure of intelligence, human or otherwise.
For one thing, the test measures only one kind of intelligence (typically, critics point out, at the expense of others, such as creativity or emotional intelligence). Plus, because some test questions can be gamed using basic tricks, some AI researchers argue that IQ isn't the best way to measure machine intelligence.
Intelligence quotient (IQ) tests, an idea first proposed by German psychologist William Stern in the early 1900s, usually consist of a standard set of questions designed to measure human intelligence in logic, math and verbal comprehension. The verbal questions usually test a person's understanding of words with multiple meanings, synonyms and antonyms, and analogies (for example, a question might ask for the multiple-choice answer that best matches the analogy "sedative : drowsiness.")
Only a handful of computer programs for solving IQ tests exist, which could make this new achievement a pretty big deal.
Bin Gao, a computer scientist at Microsoft Research in Beijing, and his colleagues developed the new AI program specifically to tackle the test's verbal questions.
First, they wrote a program to figure out which type of question was being asked. Next, they found a new way to represent the different meanings of words and how the words were related.
They used an approach known as deep learning, which involves building up more and more abstract representations of concepts from raw data. (For example, Google uses deep learning in its search and translation features.) The researchers used this method to learn the different representations of words, a technique known as word embedding.
Finally, the researchers developed a way to solve the test problems.
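To make the word-embedding idea concrete, here is a minimal sketch of how analogy questions can be answered with word vectors. The vectors below are tiny, hand-made toys for illustration only; a real system, like the one in the study, would learn high-dimensional embeddings from large text corpora via deep learning, and the candidate pairs are hypothetical.

```python
import math

# Toy word vectors -- hand-made for illustration only.
# A real system would learn these embeddings from large text corpora.
VECTORS = {
    "sedative":   [0.9, 0.1, 0.0],
    "drowsiness": [0.8, 0.2, 0.1],
    "stimulant":  [-0.9, 0.1, 0.0],
    "alertness":  [-0.8, 0.2, 0.1],
    "hunger":     [0.1, 0.9, 0.0],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def solve_analogy(pair, candidates):
    """Pick the candidate pair whose vector offset best matches
    the offset of the given pair (a : b :: c : ?)."""
    a, b = pair
    offset = [y - x for x, y in zip(VECTORS[a], VECTORS[b])]
    best, best_score = None, -2.0
    for c, d in candidates:
        cand_offset = [y - x for x, y in zip(VECTORS[c], VECTORS[d])]
        score = cosine(offset, cand_offset)
        if score > best_score:
            best, best_score = (c, d), score
    return best

answer = solve_analogy(
    ("sedative", "drowsiness"),
    [("stimulant", "alertness"), ("stimulant", "hunger")],
)
print(answer)  # ('stimulant', 'alertness')
```

The idea is that related word pairs point in similar "directions" in the embedding space, so the pair whose offset is most parallel to the question's offset wins.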
The researchers gave a set of IQ test questions to their computer program and to a group of 200 people with different levels of education, recruited using Amazon Mechanical Turk, a crowdsourcing platform.
Still, the results are striking
The AI's results were striking. Although it scored better than the average participant, it didn't fare as well against certain groups, such as people age 30 and over and people with a master's or doctoral degree.
Other scientists not involved in the study praised the findings, but cautioned that they were just baby steps for now.
Robert Sloan, a computer scientist at the University of Illinois at Chicago, said the Chinese AI's performance was a small step forward, but noted that these kinds of multiple choice questions are just one type of IQ test, and may not be comparable to the kinds of open-ended reasoning tests administered to students by trained psychologists.
Within AI, "the places where so far we've seen very little progress have to do with open dialogue and social understanding," Sloan told Business Insider. For example, if you ask a child what to do if they see an adult lying in the street, you expect them to call for help. "Right now, there's no way you could write a computer program to do that," he said.
In 2013, Sloan and his colleagues developed an AI that scored the same on an IQ test as a human four-year-old, but the program's performance was extremely varied. If a child varied that much, people would think something was wrong with it, Sloan said at the time.
Hannaneh Hajishirzi, an electrical engineer and computer scientist at the University of Washington in Seattle who designs computer programs that can solve math word problems, also found the results interesting. The Chinese researchers "got interesting results in terms of comparison with humans on this test of verbal questions," she said, adding that "we're still far away from making a system that can reason like humans."
So maybe AI isn't about to take over the world, as Stephen Hawking and others might have us believe. But at the very least, we'll end up with computers that are really good at making analogies.