Teaching a Machine to Learn
“Researchers teach a machine how to learn like a human” – sounds like something out of a Frank Herbert sci-fi novel, right? Not quite. A group of scientists has developed an algorithm that captures human learning abilities and reproduces them in computers. These computers can recognize and draw novel visual concepts that are largely indistinguishable from those created by humans, demonstrating an improved capacity for acquiring and applying information. The work, published in the latest issue of Science, represents a major advance in the field: a significant reduction in the time it takes computers to consolidate new concepts and apply them in new areas.
Brendan Lake, lead author and Moore-Sloan Data Science Fellow at New York University, states: “Our results show that by reverse engineering how people think about a problem, we can develop better algorithms. Moreover, this work points to promising methods to narrow the gap for other machine learning tasks.”
Humans generally require relatively little information when learning something new: a new dance, a kitchen tool, or an unfamiliar word, for example, doesn’t take much repetition. Machines, on the other hand, need thousands of examples to properly “learn” new material and replicate human-level pattern recognition. Seeking to change this, the authors developed a “Bayesian Program Learning” framework in which concepts are represented by computer code. Unlike prior work, the algorithm writes these programs itself and does not require many examples to function. Furthermore, the algorithm is probabilistic – each run yields a slightly different result – which is useful when applying learned concepts to new situations.
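The idea of a concept as a small generative program can be sketched in a few lines of Python. This is a toy illustration under our own assumptions, not the authors’ actual BPL implementation: a character is a list of stroke primitives, and each re-run perturbs the strokes slightly, so every draw is a fresh variant of the same concept.

```python
import random

def make_concept(strokes):
    """Return a tiny generative 'program' for a character.

    Each stroke is a (start_point, end_point) pair; the returned
    function re-draws the character with small random jitter on
    every call, so the output is probabilistic.
    """
    def draw(jitter=0.05, rng=random):
        return [
            (
                (x0 + rng.uniform(-jitter, jitter), y0 + rng.uniform(-jitter, jitter)),
                (x1 + rng.uniform(-jitter, jitter), y1 + rng.uniform(-jitter, jitter)),
            )
            for (x0, y0), (x1, y1) in strokes
        ]
    return draw

# A "plus sign" concept built from a single example: two strokes.
plus = make_concept([((0.5, 0.0), (0.5, 1.0)),   # vertical stroke
                     ((0.0, 0.5), (1.0, 0.5))])  # horizontal stroke

# Because drawing is probabilistic, two runs give two distinct variants
# of the same underlying concept.
a, b = plus(), plus()
print(a != b)  # almost surely True (independent random jitter)
```

The point of the sketch is only the structure: the concept is the program, not a pile of pixels, which is why re-running it generalizes to new examples.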
To put their model to the test, the authors asked both humans and the computer to generate a series of handwritten characters after seeing a single example character. Judges were then asked to tell the human constructions from the machine ones, and fewer than 25% of judges performed significantly better than chance at identifying which had produced a given character.
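For readers curious what “significantly better than chance” means here, one standard way to check it is an exact one-sided binomial test of a judge’s accuracy against 50% guessing. The sketch below is illustrative only; the trial counts and the 0.05 threshold are our assumptions, not the paper’s actual protocol.

```python
from math import comb

def binomial_p_value(correct, trials, p=0.5):
    """One-sided p-value: P(X >= correct) for X ~ Binomial(trials, p).

    A small p-value means the judge's accuracy is unlikely to be
    explained by coin-flip guessing alone.
    """
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(correct, trials + 1))

# A judge who gets 35 of 50 pairs right is significantly better than
# chance at the 0.05 level; 28 of 50 is not.
print(binomial_p_value(35, 50) < 0.05)  # True
print(binomial_p_value(28, 50) < 0.05)  # False
```

Under this kind of criterion, “fewer than 25% of judges beat chance” says most judges were effectively guessing.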
To Joshua Tenenbaum, a professor at MIT and senior author of the study, the work marks significant progress in the field: “Before they get to kindergarten, children learn to recognize new concepts from just a single example, and can even imagine new examples they haven’t seen. I’ve wanted to build models of these remarkable abilities since my own doctoral work in the late nineties. We are still far from building machines as smart as a human child, but this is the first time we have had a machine able to learn and use a large class of real-world concepts – even simple visual concepts such as handwritten characters – in ways that are hard to tell apart from humans.”
Though modern technology is not yet ready for fully autonomous machines, the present study is a meaningful advance in the field, demonstrating that artificial intelligence can learn from situations that offer very little data.