Machine Learning: Five Categories, Their Strengths and Weaknesses
Machine learning started in the 1950s with the work of pioneering scientists such as Frank Rosenblatt, who built an electronic neuron that learned to recognise digits, and Arthur Samuel, whose checkers program learned by playing against itself until it could beat some humans. But it’s only recently that the field has truly taken off, giving us self-driving cars, virtual assistants that understand our commands and countless other applications.
Every year we invent thousands of new algorithms, which are sequences of instructions telling a computer what to do. The hallmark of learning machines, however, is that instead of programming them in detail, we give them general goals such as “learn to play checkers.” Then, like humans, the machines improve with experience.
These learning algorithms tend to fall into five main categories, each inspired by a different scientific field. Here they are:
Unsurprisingly, one way that machines learn is by mimicking natural selection, through evolutionary algorithms. Primitive robots try to crawl or fly, and the specifications of those that perform best are periodically mixed and mutated to 3-D print the next generation. Starting with randomly assembled bots that can barely move, this process eventually produces creatures such as robot spiders and dragonflies after hundreds or thousands of generations. But evolution is slow, which brings us to the second method.
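The select-mix-mutate loop described above can be sketched in a few lines. This is a minimal, hypothetical example: instead of robot bodies, each "specification" is a string of bits, and the fitness function simply counts 1-bits as a stand-in for "how well the robot moves."

```python
import random

random.seed(0)

def fitness(genome):
    # Stand-in for "how well the robot performs": the count of 1-bits.
    return sum(genome)

def crossover(a, b):
    # Mix two parent specifications at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    # Occasionally flip a bit, mimicking random variation.
    return [g ^ 1 if random.random() < rate else g for g in genome]

def evolve(pop_size=30, length=20, generations=60):
    # Start from randomly assembled "bots."
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # keep the best performers
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # approaches the maximum of 20 after enough generations
```

Real evolutionary robotics replaces the toy fitness function with physical or simulated trials, which is exactly why the method is so slow.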
Deep learning, the most popular machine-learning paradigm, takes its inspiration from the brain. We start with a highly simplified mathematical model of how a neuron works and then build a network of thousands or millions of these units. The network learns by gradually strengthening the connections between neurons that fire together when looking at data. Systems built this way can recognize faces, understand speech and translate languages with uncanny accuracy.
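At the smallest scale, "strengthening connections" means nudging a neuron's weights after each example. A minimal sketch, assuming a single simplified neuron (a logistic unit) learning the logical AND of two inputs; the data and learning rate are made up for illustration:

```python
import math
import random

random.seed(1)

def sigmoid(x):
    # Simplified model of a neuron's firing rate.
    return 1.0 / (1.0 + math.exp(-x))

# The neuron should "fire" only when both inputs do (logical AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # connection strengths
b = 0.0                                             # firing threshold
lr = 0.5                                            # learning rate

for _ in range(2000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = out - target
        # Strengthen or weaken each connection in proportion to its input.
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

predictions = [round(sigmoid(w[0] * x1 + w[1] * x2 + b))
               for (x1, x2), _ in data]
print(predictions)  # [0, 0, 0, 1]
```

Deep networks stack millions of such units in layers and adjust all the weights at once, but the core idea of nudging connection strengths toward fewer errors is the same.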
Machine learning also draws on psychology. Like humans, analogy-based algorithms solve new problems by finding similar ones in memory. This ability allows for the automation of customer support, as well as e-commerce sites that recommend products based on your tastes.
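The simplest analogy-based learner is nearest neighbor: answer a new query by retrieving the most similar remembered case. A hypothetical customer-support sketch, using word overlap (Jaccard similarity) as the notion of "similar"; the cases and categories are invented for illustration:

```python
# Past support tickets with their known resolution categories (made up).
cases = [
    ("cannot log in password reset", "account"),
    ("payment card declined checkout", "billing"),
    ("app crashes on startup", "technical"),
]

def similarity(a, b):
    # Jaccard similarity: fraction of words the two texts share.
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

def route(query):
    # Solve the new problem by recalling the most similar past one.
    return max(cases, key=lambda c: similarity(query, c[0]))[1]

print(route("forgot my password cannot log in"))  # account
```

Recommender systems work analogously, except the "similar case" is another customer whose purchase history overlaps with yours.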
Machines may also learn by automating the scientific method. To induce a new hypothesis, symbolic learners invert the process of deduction: If I know that Socrates is human, what else do I need to infer that he is mortal? Knowing that humans are mortal would suffice, and this hypothesis can then be tested by checking whether other humans in the data are also mortal. Eve, a biologist robot at the University of Manchester in England, has used this approach to discover a potential new malaria drug. Starting with data about the disease and basic knowledge of molecular biology, Eve formulated hypotheses about which drug compounds might work, designed experiments to test them, carried out the experiments in a robotic lab, revised or discarded the hypotheses, and repeated the cycle until it was satisfied.
Finally, learning can rely purely on mathematical principles, the most important of which is Bayes’s theorem. The theorem says that we should assign initial probabilities to hypotheses based on our knowledge, then let the hypotheses that are consistent with the data become more probable and those that are not become less so. The learner then makes predictions by letting all the hypotheses vote, with the more probable ones carrying more weight. Bayesian learning machines can do some medical diagnoses more accurately than human doctors. They are also at the heart of many spam filters and of the system that Google uses to choose which ads to show you.
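A minimal spam-filter sketch shows the update rule in action: posterior probability is proportional to prior times likelihood. The priors and word probabilities below are invented numbers, not estimates from real mail:

```python
# Two hypotheses about a message, with initial (prior) probabilities.
priors = {"spam": 0.5, "ham": 0.5}

# P(word appears | hypothesis), hypothetical values for illustration.
likelihood = {
    "spam": {"prize": 0.2, "meeting": 0.01},
    "ham": {"prize": 0.005, "meeting": 0.1},
}

def posterior(word):
    # Bayes's theorem: posterior is proportional to prior x likelihood,
    # then normalized so the probabilities sum to 1.
    unnorm = {h: priors[h] * likelihood[h][word] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

post = posterior("prize")
print(round(post["spam"], 3))  # 0.976: "prize" shifts belief toward spam
```

Real filters multiply in the evidence from every word in the message, but each word updates the belief by exactly this rule.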
Strengths and Weaknesses of the Five Kinds of Machine Learning
Each of these five kinds of machine learning has its strengths and weaknesses. Deep learning, for example, is good for perceptual problems such as vision and speech recognition but not for cognitive ones such as acquiring common sense. With symbolic learning, the reverse is true. Evolutionary algorithms are capable of solving harder problems than neural networks, but they can take a very long time to do so. Analogical methods can learn from just a small number of instances but are liable to get confused when given too much information about each. Bayesian learning is most useful for dealing with small amounts of data but can be prohibitively expensive with big data.
Master Algorithm in Machine Learning
All these vexing trade-offs are why machine-learning researchers are working toward combining the best elements of all the paradigms. In the same way that a master key opens all locks, our goal is to create a so-called master algorithm: one that can learn everything that can be extracted from data, deriving all possible knowledge from it.
The challenge we face is similar to the one confronting physicists: quantum mechanics is effective at describing the universe at the smallest scales and general relativity at the largest, but the two are incompatible and need to be reconciled. In the same way that James Clerk Maxwell first unified light, electricity and magnetism before the Standard Model of particle physics could be developed, different research groups have proposed ways to unify two or more of the machine-learning paradigms. Because scientific progress is not linear and instead happens in fits and starts, it is difficult to predict when the full unification might be complete. Regardless, achieving this goal will not usher in a new, dominant race of machines. Rather it will accelerate human progress, as technology is simply an extension of human capabilities.
Machines do not have free will, only goals that we give to them. It is the misuse of the technology by people that we should be worried about, not a robot takeover.
In fact, the pursuit of artificial intelligence can be seen as part of human evolution. The next stage of automation will require the creation of a so-called master algorithm, one that integrates the five main ways that machines currently learn into a single, unified paradigm.
- Pedro Domingos, The Master Algorithm (Basic Books, 2015)
- Scientific American, September 2017
- Sebastian Raschka, Python Machine Learning (2015)
- Kevin Murphy, Machine Learning: A Probabilistic Perspective (2012)
Pedro Domingos: “The Master Algorithm” | Talks at Google [Video]
Video uploaded by Talks at Google on November 27, 2015