New tools continue to radically improve workflows. Hugging Face is horrible, but a whole lot better than what existed before. The structure of a PyTorch Lightning repo is a night and day improvement over one written in pure torch, and it provides a lot of tooling around reproducibility that heavily overlaps with MLOps. Dagster made the testing of pipelines far easier in the Data Engineering world.
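To make the "night and day" claim concrete, here's a minimal sketch of the structure Lightning imposes, including the reproducibility hooks that overlap with MLOps. The model and data are hypothetical placeholders, not anything from the original comment:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class LitRegressor(pl.LightningModule):
    """The LightningModule forces training logic into named hooks."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(4, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.net(x), y)
        self.log("train_loss", loss)  # metric logging comes for free
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

if __name__ == "__main__":
    pl.seed_everything(42)  # one call seeds python, numpy, and torch
    data = TensorDataset(torch.randn(64, 4), torch.randn(64, 1))
    trainer = pl.Trainer(max_epochs=1, deterministic=True,
                         logger=False, enable_checkpointing=False)
    trainer.fit(LitRegressor(), DataLoader(data, batch_size=16))
```

Compare that to a hand-rolled torch loop: the device handling, seeding, logging, and checkpointing you'd otherwise write (and get subtly wrong) are owned by the Trainer.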
Good ML packages (and their documentation and community) are the most efficient way to nudge users towards a better engineering and ML culture / set of processes. I find that the quality of the code I write and the systems I build correlates most closely with the quality of the core APIs I build on top of, not with deadlines or culture. Good systems evolve naturally, bottom-up, when they're easy and fast to implement, and better tooling buys you the time (and budget - hire more business-savvy people, fewer raw tech people) to think about the business problem <-> technology solution fit. Tooling can help all the way up to the business value: systems that link the quality of predictions to financial fundamentals help to evaluate whether the project makes any sense in the first place.
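As a toy illustration of that last point (every number below is a made-up placeholder, not data from anywhere):

```python
def expected_value(precision: float, acts_per_day: int,
                   value_per_true_positive: float,
                   cost_per_false_positive: float) -> float:
    """Expected daily value of acting on a model's positive predictions."""
    tp = acts_per_day * precision
    fp = acts_per_day * (1 - precision)
    return tp * value_per_true_positive - fp * cost_per_false_positive

# If this can't beat the cost of building and running the system,
# the project doesn't make sense in the first place.
daily = expected_value(precision=0.7, acts_per_day=100,
                       value_per_true_positive=12.0,
                       cost_per_false_positive=5.0)
print(f"expected value/day: ${daily:.2f}")  # -> $690.00
```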
The sheer volume of overlapping tooling feels silly, but that's just free-market capitalism, not the wrong focus. These arguments imply that there is some plateau in the fundamental underlying technologies and their designs, which couldn't be further from the truth.
I agree with the main thesis, but disagree with the dismissal of tooling.
You mention a couple of isolated examples, but MLOps as a paradigm is primarily a systems integration problem: connecting multiple parts of an organisation and multiple different systems.
Operational problems don't come from writing better models faster. They come from 20 people interacting to solve a business problem and then maintaining business continuity. Or the lack thereof...
Iterating fast massively reduces the cost of experimentation, which reduces sunk-cost bias and increases the chance that you end up with better solutions to real problems.
Improvements in tooling are also key to making something that feels like a "POC" from a development perspective also function as a piece of engineering that can scale into a real solution.
You say "isolated examples", but it's happening with tooling for every problem relating to ML:
* Data-asset-centric design, which enables non-technical data consumers to see when a pipeline has failed (see the sketch after this list).
* Data catalogues, which let users see at a glance which data can be 'trusted' and how it has been used before, and which are a vast improvement in data documentation.
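Here's a minimal sketch of asset-centric design in Dagster (mentioned above). The asset names and data are hypothetical; the `@asset` dependency-by-parameter-name and `materialize` APIs are real:

```python
import pandas as pd
from dagster import asset, materialize

@asset(description="Raw order records pulled from the source system.")
def raw_orders() -> pd.DataFrame:
    # In a real pipeline this would read from a warehouse or an API.
    return pd.DataFrame({"order_id": [1, 2], "amount": [10.0, 25.0]})

@asset(description="Total revenue per day, derived from raw_orders.")
def daily_revenue(raw_orders: pd.DataFrame) -> float:
    # Consumers see *this named asset* as stale or failed in the UI,
    # not an anonymous task buried in a DAG.
    return float(raw_orders["amount"].sum())

if __name__ == "__main__":
    # materialize() also powers pipeline tests: run the graph in-process.
    result = materialize([raw_orders, daily_revenue])
    assert result.success
```

The `description` fields double as the kind of at-a-glance documentation a data catalogue surfaces.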
I'll admit that my bias comes from most of my professional experience being in small start-ups where an ML model IS the product, which is very different from having to get buy-in internally.
Your view feels a little like a tech company complaining that consumers aren't buying its product and trying to drive a cultural shift in order to get buy-in instead of just building something better?
(I'm being excessively contrarian)