by Brian Geddes, Principal Software Engineer
There is a great deal of excitement around the potential of machine learning and artificial intelligence. Self-driving cars! Computerized assistants to take your phone calls! With that buzz comes the temptation to apply the technology to every problem under the sun. After all, who doesn’t want to be on the cutting edge? Before jumping to a solution, however, it’s essential to understand the pros and cons.
Machine learning is a powerful tool, but as with all tools, the key to getting groundbreaking results is to dispassionately separate the implementation from the problem. Most of the time, corporations, governments, and even startups have the luxury of choice – deciding that everything needs to be a machine learning solution creates inefficiencies and waste. If your objective is an easy caffeine boost on Monday morning, adding machine learning to your coffee pot is going to be overkill.
Machines Learn What They’re Taught
To begin – what is machine learning? The idea is deceptively simple: instead of explicitly defining how a computer should do a job, enable the computer to learn how to do that job.
Traditional software development relies on humans to describe exactly what a computer should do under all conditions. As computers grow ever more powerful (with the availability of cloud computing and GPUs picking up the slack for the slowing of Moore’s Law) and their applications correspondingly more complex, the number of rules and directives that software developers must write quickly escalates, driving up both software complexity and development cost.
Rather than attempting to cover every possibility explicitly, the premise of machine learning is to instead develop software that will allow the computer to learn in a way similar to humans – through trial, error, and feedback. We give the computer a description of what we want it to decide, a way to tell whether it did a good job, and a bunch of examples from which to learn. Enable the machine to teach itself!
There are myriad specific machine learning algorithms, but fundamentally, they all utilize a combination of math and incredible amounts of computing horsepower to repeatedly chew through training data, adjusting their approach to the problem through many iterations in order to incrementally improve accuracy. Over time the machine learning algorithm uses these training data sets to develop a set of internal rules, often more complex (or subtle!) than human programmers would conceive of. Once trained, the computer can use these rules to make decisions on new, real-life data that it’s never seen before.
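To make that train-then-predict loop concrete, here is a minimal sketch using scikit-learn on synthetic data; the dataset, the logistic regression model, and every parameter are illustrative assumptions rather than a recommendation for any real problem.

# Minimal sketch of the train-then-predict loop described above.
# The synthetic dataset and logistic regression are illustrative placeholders.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# "A bunch of examples from which to learn": features X and labels y.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Hold some examples back so we can check how the model does on data it has never seen.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Training: the algorithm iteratively adjusts its internal parameters
# to reduce its error on the training examples.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Once trained, the model makes decisions on new data.
predictions = model.predict(X_test)
print(f"Accuracy on unseen data: {accuracy_score(y_test, predictions):.2f}")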
Sounds great, right? Machine learning is a promising way to take advantage of the ever-growing mountains of data that our computer-infused world is creating. Netflix and Amazon have become eerily good at recommending movies and products that you might like. Zillow can give you an idea of what the house down the street is worth without jumping through the hoops of talking to a realtor. And in a weightier application, machine learning has been used to predict and diagnose diseases based on MRIs and other medical records.
So, What’s the Catch?
As promising as many applications are, it is critical to be aware of the limitations. Machine learning is extremely sensitive to the quality of the data with which you train. Data is easy to find, but high-quality data is much harder. To get good results, developers need to give the computer the right examples to learn from. That means ensuring that the data are well-structured and representative of the problem space, which can be very labor-intensive. In some applications simply getting enough data is tricky; if you’re tackling a problem that is novel and thus doesn’t have a long data trail, it may be nearly impossible to get a statistically significant set. With poor or insufficient data, we run the risk of the computer identifying patterns that aren’t real – the classic “garbage in, garbage out” problem.
Premature confidence in the results of machine learning is another potential pitfall. In one application for diagnosing cancer, the system appeared to be producing extremely reliable predictions on the initial set of data. However, the results were far less accurate when applied to additional cases. Developers discovered that the machine learning algorithms had been basing decisions partially on the name of the medical facility, deciding that patients at a hospital with “Cancer” in the name were more likely to have cancer…presumably because they had already been diagnosed and were seeking a specialist. Not an incorrect conclusion based on the initial data, but not productive when applied to a broader set of patients.
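One way to catch this kind of spurious shortcut is to evaluate on data drawn from a different population than the training set. The sketch below fabricates a toy version of the problem (all data and feature names are invented for illustration): a “seen at a specialist clinic” flag tracks the diagnosis in the training population but not in the broader one, so the model looks excellent on a same-source test set and falls apart on the broader one.

# Toy illustration (entirely invented data) of a spurious shortcut feature.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_patients(n, clinic_tracks_label):
    label = rng.integers(0, 2, size=n)              # true diagnosis (0 or 1)
    symptom = label + rng.normal(0, 1.5, size=n)    # weak genuine signal
    if clinic_tracks_label:
        clinic = label                              # shortcut: leaks the diagnosis
    else:
        clinic = rng.integers(0, 2, size=n)         # unrelated in the broader population
    return np.column_stack([symptom, clinic]), label

X_train, y_train = make_patients(5000, clinic_tracks_label=True)
X_same, y_same = make_patients(1000, clinic_tracks_label=True)
X_broad, y_broad = make_patients(1000, clinic_tracks_label=False)

model = LogisticRegression().fit(X_train, y_train)
print("Same-source test accuracy:   ", accuracy_score(y_same, model.predict(X_same)))
print("Broader-population accuracy: ", accuracy_score(y_broad, model.predict(X_broad)))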
This overconfidence problem is compounded when the computer reports that it is certain of its results, as when a Google algorithm recently reported with 100% confidence that a picture of a cat was, in fact, a picture of guacamole. That is a humorous example, but the same failure mode quickly becomes dangerous when the computer is recommending chemotherapy.

[Image: the cat photo classified as guacamole. Source: Inverse]
Another limitation — especially for high-reliability systems! — is that the internal rules developed by a machine learning system are difficult for a human to examine. In traditional software, it’s normal for one developer to review the work of another, and for the two (or more) of them to talk through how things ought to work. This dialog and reasoning from first principles can expose vulnerabilities in a given software system even without running any tests. But because a machine learning model doesn’t provide a reason why it behaves the way it does, it cannot be interrogated or audited in this way.
Furthermore, because the computer’s internal logic is obfuscated, it’s often not feasible to root-cause and fix bugs as can be done with traditional software. Instead, the input data must be adjusted and the computer retrained on the new (and hopefully improved) data.
An Educated Guess – Not the Absolute Truth
While powerful, machine learning provides what amounts to a very educated guess, not an absolute truth. Understanding this limitation is crucial when deciding whether a problem is well-suited to machine learning. For some applications this uncertainty is fine; if you end up hating Netflix’s next recommendation it’s easy enough to watch something else…or even turn off the TV and go outside! On the other hand, you probably want some additional doctor visits and tests before scheduling surgery following an automated cancer diagnosis.
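In practice, treating the output as an educated guess can be as simple as acting on the model’s predicted probability rather than its hard yes/no answer, and routing anything uncertain or high-stakes to a person. The sketch below is one illustrative policy, not a prescription: the 0.95 threshold and the pre-trained model are assumptions, and, as the guacamole example shows, the reported confidence itself should not be taken at face value.

# Sketch of a triage policy that treats model output as a guess, not a verdict.
# The threshold and the pre-trained scikit-learn-style `model` are assumptions.
def triage(model, features, review_threshold=0.95):
    # Probability that this case belongs to the positive class.
    p_positive = model.predict_proba([features])[0][1]
    if p_positive >= review_threshold:
        return f"prioritize for specialist review (model confidence {p_positive:.2f})"
    if p_positive <= 1 - review_threshold:
        return f"routine follow-up (model confidence {1 - p_positive:.2f})"
    return "model uncertain: escalate to a human reviewer"

# Note that no surgery gets scheduled here; even high-confidence predictions
# only reorder the queue for a human expert to confirm.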
It is crucial to take a disciplined approach to ensure that your problem is well-defined and that a broad array of possible solutions is considered before settling on one. If machine learning is to be incorporated, the system must be engineered to mitigate the limitations and risks posed by machine learning and should be validated to ensure that results are acceptable under all important conditions. Machine learning is promising but must be wielded with care in order to produce reliable and trustworthy outcomes.
Most importantly, the significant investment and cost associated with machine learning must be justified through clear requirements. Is machine learning truly required? Or is it being used in an attempt to seem “innovative”? Blind testing of different approaches doesn’t just waste time and money; it produces suboptimal results. As with any technology or tool, rigorous problem definition and requirements management must be in place before conversations move to the fun part: implementation.
Choose the Right Technology, Not the Flashiest
In conclusion, let’s circle back to the example of cancer diagnosis. Why was machine learning chosen as the solution? The detection and diagnosis of cancer has well-defined requirements built up over decades of treatment, physician expertise, and laboratory testing. Hospitals are constantly on the lookout for new technologies that satisfy their ultimate objective: detect cancer as soon as possible and treat it with the correct method. A good systems engineer would have churned through hundreds of other technologies, evaluating each for its ability to satisfy requirements under the unique cost, schedule, usability, and reliability constraints, before finally landing on machine learning as the single best technology (or not).
Whether cancer, mine operations, or safe consumer products, we’d all rather have the right technology at play, not the flashiest.
Want to work with us? We’re hiring! First Mode draws on the exceptional talent and creativity of its multidisciplinary team to solve the toughest problems on and off the planet. Check out our open positions in Seattle and Perth.