The Primary Advantage of Machine Learning
Despite all the hype and broken promises, Machine Learning has an incredibly strong advantage: it is a declarative paradigm. I will explain what this means and why it is beneficial, but first we need to understand what the alternative is.
The Imperative Paradigm
The goal of any computer system is to solve a problem. This requires three steps: specify what the problem is, implement a solution, then verify that the solution actually solves the problem as intended. Traditionally (before Test Driven Development) this was done by writing a specification (in English), implementing a solution (in a Computer Language), then using it and figuring out whether it fulfils the specification or not. The solution itself is a series of steps, each mapped to a specific command, and the computer executes them in the order the programmer intended.
The program can only be tested against the specification once development is finished, and the only way to check whether it fits the specification is to manually use the program the way it is intended and see if it works. If the requirements change, some of the commands have to be rewritten and the whole manual test has to be redone. This is the imperative paradigm: the programmer explicitly orders the computer to do one thing and one thing only. This is clearly not a great way to work iteratively.
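As a minimal, purely illustrative sketch (the task and numbers are mine, not from any real system), an imperative program spells out every step, and verification amounts to running it and inspecting the output by hand:

```python
# Hypothetical imperative sketch: every step is an explicit command,
# and checking it against the specification means running the script
# and eyeballing the printed result.
prices = [12.0, 7.5, 3.25]

total = 0.0
for price in prices:      # step 1: accumulate the sum
    total += price

total *= 1.2              # step 2: add a (made-up) 20% tax

print("Total with tax:", total)  # step 3: a human compares this to the spec
```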
The Declarative Paradigm
A different methodology was devised: the specification is written first and expressed in a Computer Language, not just in English. Then the solution is created, and while it is being written it is continuously tested against the part of the specification it is intended to fulfil. This is the declarative paradigm: the programmer declares what the program should do, then writes a program that fits this declaration. The paradigm operates not just at the test level but also inside the program: parts of the solution are declared first and implemented later. This leads to modular code that encapsulates different parts of the system, and it can be developed and tested incrementally.
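A minimal sketch of this idea, assuming a hand-written Python test (the function names are illustrative, not from the article): the specification is executable, so it can be re-run every time the implementation changes.

```python
# The specification is written first, as an executable test.
def test_total_with_tax():
    assert abs(total_with_tax([10.0, 5.0], tax_rate=0.2) - 18.0) < 1e-9

# Only afterwards is an implementation written to satisfy that declaration.
def total_with_tax(prices, tax_rate):
    return sum(prices) * (1 + tax_rate)

test_total_with_tax()  # re-run whenever the implementation changes
```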
Advantages
The main advantage of this is separating the solution from the problem. Under the hood, the solution can be changed in any way, because it can always be verified whether it still solves the same problem. And with this we arrive at Machine Learning.
Creating Machine Learning models involves three main steps (a code sketch follows the list):
Data cleaning and labelling -> Specification
Model implementation and training -> Implementation
Performance evaluation -> Verification
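Here is one way the three steps could look in code, as a hedged sketch using scikit-learn with an arbitrary toy dataset and model (none of these choices come from the article):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 1. Specification: labelled data defines what "correct" means.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 2. Implementation: any model that fits the specification is acceptable.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 3. Verification: evaluation against held-out labelled data.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```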
The declarative paradigm makes it possible to separate the problem from the solution. The domain team, based on their (human) knowledge, can specify the problem, while the Machine Learning Engineers implement solution(s) to be verified. Each team can make changes separately and verify that they are still on the same page. Feature requests can arrive in the form of newly labelled data. New architectures and different models can be tested against the labelled data. Labelling (specification) and evaluation (verification) are fundamental parts of the model development process, just like testing in Test Driven Development.
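In that spirit, swapping in a different architecture is like changing the implementation while the test suite stays fixed; a self-contained sketch, again with placeholder data and models of my own choosing:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# The labelled data (the specification) stays fixed...
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# ...while candidate implementations are swapped in and out and
# verified against the same evaluation set, like running a test suite.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
}

for name, model in candidates.items():
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```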
What’s next?
Machine Learning is usually employed in situations where it is very hard to specify the desired behaviour in explicit terms; in fact, this is the guideline for using ML: use it only when you have run out of alternative algorithmic solutions. It is completely natural that the specification will need to change in the future. To fully exploit the benefits, the following are needed:
Ongoing management and incremental improvement of datasets
ML frameworks that enable these incremental improvements
This is where ML projects often fail: data labelling is treated as a chore and an afterthought, and the selected statistical model family doesn't allow granular changes, so ML engineers have to resort to feature engineering or hyperparameter search, both of which have limited scope. Deep learning frameworks, which are more like programming languages, on the other hand allow a much wider range of options.
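As a rough sketch of what "incremental improvement of datasets" could look like in practice (synthetic data and a simple model, purely for illustration): newly labelled batches are appended to the training set, and the model is re-trained and re-evaluated against a fixed evaluation set, much like re-running a test suite after the specification changes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_batch(n):
    """Stand-in for a batch of newly labelled examples."""
    X = rng.normal(size=(n, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    return X, y

# A fixed labelled evaluation set plays the role of the test suite.
X_eval, y_eval = make_batch(200)

# The training set (the specification) grows over time.
X_train, y_train = make_batch(20)
for _ in range(3):  # each iteration: a new labelling round arrives
    X_new, y_new = make_batch(20)
    X_train = np.vstack([X_train, X_new])
    y_train = np.concatenate([y_train, y_new])

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(len(y_train), "examples ->",
          accuracy_score(y_eval, model.predict(X_eval)))
```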
Summary
It is clear that the declarative principle is the main strength of Machine Learning, as it separates specification from implementation. To exploit it, one needs to treat dataset management as a fundamental part of modelling and to use deep learning frameworks that allow incremental change.
Thanks for reading, and please share and subscribe.