Why are otherwise credible professionals perfectly fine with burning their credibility by going hyperbolic on Generative AI? Nowhere is this as prevalent as in the "Death of the Software Engineer" trope.
Not a day goes by without someone coming up with a bombastic presentation - this time at CS50 (which is Harvard's introductory CS course).
There is a general thread running through the presentation: it takes widely different examples from various parts of CS (not just SWE) and demonstrates their "ridiculousness". Link in the comments. Here are my thoughts on this; let's stick to the theme and "go step-by-step".
1.) Software engineering hasn't improved since the 50s
It is still incomprehensible and generally hopeless - demonstrated by a "Game of Life" program in 1957 Fortran and by APL, a cryptic and (supposedly) long-forgotten language.
Of course, it is pretty typical to conflate programming (writing software) with computer science (implementing algorithms with computers), but we rarely see such extreme hyperbole.
Then, he demonstrates the same program in Rust (another low-level language).
Why not demonstrate with JS or Python and a high-level web framework? Or a low-code platform?
“How do you reason about code like this?” - he asks. This is the point: most code is not like this. It is written to be “reasonable”. For code that _does_ look like this, there is a full suite of techniques to validate that it does what you think it should do.
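To make that concrete, here is a minimal, hypothetical sketch (the function and names are mine, not from the presentation): even for the index-juggling code typical of a Game of Life grid, a handful of assertions pins down exactly what the code is supposed to do.

```python
def wrap_index(i, length):
    """Map any integer index onto a grid axis of the given length
    (toroidal wrap-around, as used in Game of Life grids)."""
    return i % length

# Validation: exhaustively check the properties we rely on.
for i in range(-10, 10):
    assert 0 <= wrap_index(i, 5) < 5                 # always a valid index
    assert wrap_index(i, 5) == wrap_index(i + 5, 5)  # periodic in the length
```

This is the everyday version of “reasoning about code”: the properties are written down once and checked by a machine forever after.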
Then he immediately jumps to a typical GenAI summarisation prompt: “This is how I write programs today”. Talk about apples and oranges. Bro, I don’t know what you do for a living, but SWEs aren’t paid for writing ten-a-penny prompts…
There is this trope throughout the presentation of comparing the worst kinds of low-level details to high-level issues (prompts).
“This English statement describes an algorithm” - No, it doesn’t… It’s an _input_ to an algorithm. Of course, we can have a long argument about Universal Turing Machines, but then we just get back to the point that CS is not dead. We literally need it to have this conversation… But again, CS is not SWE…
2.) Natural language is easier to understand
No, it’s not. In the context of specification (namely, precisely writing down what a program is supposed to do), natural language is an incredibly imprecise tool. We rarely speak in detail; we are economical and focus on the big picture, or we would never get to the end of any conversation. But machines are unforgiving, and complex systems are fragile in edge cases, so specs are usually mind-bogglingly detailed so that SWEs can make sure the machines do in every case what we think they do.
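A tiny, hypothetical illustration of that imprecision: the sentence “sort the names alphabetically” has at least two perfectly reasonable readings, and they disagree.

```python
names = ["alice", "Bob", "carol"]

# Reading 1: sort by raw character codes (uppercase sorts before lowercase).
reading_1 = sorted(names)
# Reading 2: sort case-insensitively, as a human would likely expect.
reading_2 = sorted(names, key=str.lower)

print(reading_1)  # ['Bob', 'alice', 'carol']
print(reading_2)  # ['alice', 'Bob', 'carol']
```

A spec has to pick one (and settle ties, locale, stability, and so on); the machine will not pick for you.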
3.) Trial and error with prompts is the same as trial and error in programming
Maybe I am wrong about modern software engineering, but much of this was removed with unit testing, high-level abstractions and CI/CD.
“You can’t write a manual on prompting; you just need to try things out.” I have news for you. The reason SWEs organised themselves over the 30 years he apparently missed is precisely to avoid this and to crank out software at an unimaginable scale.
People don’t appreciate how much better and more productive programming has become in the last 10-20 years. At the same time, “software ate the world”, and millions of SWEs were trained to accomplish this.
A huge amount of work went into hiding low-level details from most programmers so they can focus on turning specs into code using high-level frameworks. This was supported by Moore’s Law, which made the computing needed to run these frameworks cheap.
4.) Programming as looking at StackOverflow
While there are a lot of unnecessary details to remember, most of the time you don’t search SO while programming (especially not for how to reverse a list, as the presenter says…). But, you know, actually type in the code to solve problems? Read the code? Which is written in a readable way? Not like a 1957 Fortran algorithm?
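For the record, the presenter’s own example is a one-liner in any high-level language. In Python alone there are three idiomatic ways to reverse a list, none of which calls for a search engine:

```python
lst = [1, 2, 3]

assert lst[::-1] == [3, 2, 1]            # slicing: returns a reversed copy
assert list(reversed(lst)) == [3, 2, 1]  # reversed(): a lazy iterator
lst.reverse()                            # in-place reversal
assert lst == [3, 2, 1]
```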
5.) Copilot is incredibly good
Yes, sure, it looks great. Yet every time I see someone demonstrating with it, they need to catch subtle errors that it makes. So now you are not only looking for bugs in your own code but also in someone else’s. Tests should help you with this, but what if your tests are written by an AI as well?
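A hypothetical example of the kind of subtle, plausible-looking bug I mean (not taken from the demo): a chunking helper that silently drops the final, shorter chunk.

```python
def chunk_buggy(items, size):
    # Looks fine, but the range stops too early when len(items)
    # is not a multiple of size.
    return [items[i:i + size] for i in range(0, len(items) - size + 1, size)]

def chunk_fixed(items, size):
    return [items[i:i + size] for i in range(0, len(items), size)]

data = list(range(7))
assert chunk_fixed(data, 3) == [[0, 1, 2], [3, 4, 5], [6]]
# The buggy version passes for "nice" input lengths and only fails
# on the edge case - a test has to cover it to catch the bug:
assert chunk_buggy(data, 3) == [[0, 1, 2], [3, 4, 5]]  # the 6 is silently gone
```

The bug survives any demo that happens to use inputs of a convenient length, which is exactly why spotting it requires a reviewer who knows what to look for.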
“It could be truly amazing with more data and compute. And we have an abundance of those.” I think it is an open question whether this is true. The code is good enough that you would only catch the errors by actually running it. While this is probably a good idea, it is unclear whether repeated execution and fixing would eventually converge to correct code and, of course, at what cost. The machine can easily run into infeasible computational complexity (you see, CS again), attempting to consume an unbounded amount of compute.
6.) ChatGPT is 10000x cheaper than SWEs
For what you are using it for now. Namely, sending run-of-the-mill prompts and then satisficing on the results.
The other line of thought in the presentation is that SWEs don’t have any agency. They just write unreadable Fortran code - or not even that, they just enjoy office perks all day.
The reason SWEs are employed and can’t really be automated is _exactly_ because they have agency. You can reason with them; they can overthink a problem, come up with a solution, and provide feedback. Now, if we anthropomorphize machines, all these can be true as well, but at the moment, there is no sign of a trajectory that will lead to this breakthrough.
As I wrote above, the more mechanistic GenAI usage that will be absolutely necessary for human-level code generation will require techniques that go deep into classical CS. It will need search algorithms and iterative methods while using an extremely expensive resource (the model itself). At the moment, I don’t see a path to this, but of course, the field moves at extreme speed. We will get back to this question after a breakthrough.
7.) Anyone can be an SWE with ChatGPT
No need for training. In the 90s, there was a quip at the university where I studied:
“Computer Science has nothing to do with Computers”.
And that makes sense. Architecture is not just about bricks…
Building complex systems requires many skills: working at high and low levels simultaneously, organising people, gathering requirements, and then delegating tasks based on a plan. The actual typing of the code is just a small (but definitely crucial) part of the work.
The problem is that, from the outside, it does just look like coding. That’s because code is a _medium_ that transports the meaning of the system; it is not the system itself.
8.) Long rant on Abstractions and Boundaries and others
“Why would I care if the code is messy? It’s only humans that need readable code.” Well, someone still asked for that code. That person describes what they need in the specs, and the code should implement that and just that.
But I have already addressed this: checking code is not just looking for bugs but comparing it to (possibly broken) specs and then negotiating with all stakeholders to bring the code, clients, specs and everything else into alignment.
9.) Only professors cared about ChatGPT until now
There has been a literal AI movement for ages, but definitely since 2013 (Mikolov’s Word2Vec paper). The problem is, when you come for an entire industry, you are arguing with exactly those people who have been in this field since long before January 2023.
Somehow, people forget that “traditional” influencer techniques don’t work on professional topics, because credibility is the currency in a niche.
Also, WarGames (a 1983 movie with Matthew Broderick) is probably not a great reference for an audience of GenZ-s… (And yes, I did get the reference.)
10.) The Evolution of Programming
I kid you not; this is the entire thing:
Humans writing machine instructions (+ picture of an ancient mainframe)
C++ by Bjarne Stroustrup
…
AI-generated code
That’s it. Forget about web frameworks, database systems, interpreted languages, DSLs, HTML, CSS, the Agile movement, Software Craftsmanship, low- and no-code products, and so on and so on.
I mean, if you ignore everything, then of course you can say that there was no progress…
11.) You just tell the LLM what you want, and it directly computes the result
“They interpreted my article as: AI will write programs, so we don’t need programmers at all. I say something worse. Just tell the LLM what you want, and it directly computes it.”
This is, again, the hyperbole that pops up everywhere. Did this guy ever speak to a customer and try to figure out what they _actually_ wanted? In about three sentences, because otherwise the customer gets bored and disengages? That is part of the SWE’s job (or shared with the PMs, but still).
Natural language is an imprecise medium, and people are messy. Translating that into specs and code is SWE.
Then it gets into a long and contradictory part about prompt engineering (which is not engineering), LLMs being black boxes, and trial and error.
A huge reason why “software is eating the world” is that SWEs figured out how to build software deterministically, at a high pace, in the environment we are in. That is probably the biggest change of the last 50 years, and it was ignored by the presenter.
AI competes with this, not with Fortran from 1957… And for that, AI needs to be much more predictable.
Then he picks a simple example and introduces the “step-by-step” trick as if it were the Agile Manifesto.
Yeah, I get it. It is amazing that it works at all. But try that with LAPACK’s divide-and-conquer algorithm or YouTube’s recommender algorithm.
12.) Then into a startup pitch
Which is some RAG-based “Talk to your data” GenAI company…
13.) Not everyone needs to know programming
Electrical Engineering and Computer Science separated, so CS will separate from AI as well. “Not everyone needs to know programming to still be able to write programs.”
This does not make any sense, given that even today not everyone learns how to write code, despite it probably being the most accessible and cheapest way to add value to your life.
They don’t learn it exactly because of the mental energy needed to maintain a programmer’s mindset: turning specs into code and keeping at it for an extended period of time, maintaining attention to detail beyond what most people care about.
Also, it’s not a nice move to pick on a third-world country (Ethiopia) as a place where a bank clerk doesn’t have access to SWEs. My experience is that people around the world, regardless of their background, embrace programming to make their lives better - Africa is no exception.
And pretty much that’s it. A 30-minute presentation, a 15-minute pitch, and I can’t watch 15 more minutes of questions about this.
Takeaways
If you want to get into programming, you can ignore all the influencers about the “Death of the Software Engineer”. Just go and do it. Generative models will make your life easier, not replace you.