Computers have long been able to do amazing things, but it sure feels like things have changed a lot recently.
ChatGPT can answer questions (sometimes truthfully!), GitHub Copilot can write bits of code, and DALL-E can generate realistic (if sometimes disconcerting) art.
One thing I’ve been thinking about: will computer programming still be relevant in the future?
I’m not sure, one way or the other.
On one hand, I write code a lot. If a machine can ever write better code, faster than I can… well, my programming skills probably won’t be useful anymore.
On the other hand, it seems like much of the value I provide comes from doing human things: thinking, planning, communicating, etc. So I want to believe that I’ll still be programming productively for a long time. But maybe that’s just rationalization on my part 😬.
To hopefully stay useful, here’s what I’m going to do:
Double down on the human parts of my work. Keep improving communication, leadership, coaching, etc. (And hope machines are further off from doing these…)
Start learning how AI works. Don’t let it be a magic black box that I don’t understand.
Stay on top of software development practices, including those using AI.
What do you think? Will programmers still be relevant in the future (please say yes…)? And what are you going to do to stay useful? Comment below, and subscribe!
I talked about this issue with the programmer who created my webstore https://zdrowersi.pl
He asked a simple question: how many times did we talk while designing and building the store? How many times did I say "it should be like this", "please change it", "I'd like it a bit different", "a feature like this would be useful"? As long as there is no problem-free communication with the AI, nothing can replace the human <-> human relationship.
Sam Harris did a fantastic podcast about AI recently. One thing I found particularly interesting was the explanation of how these models work from statistics rather than learning real concepts. ChatGPT has seen single-digit addition frequently, so it knows 3 + 4, etc. Similarly, if you add a three-digit number to another three-digit number with no carrying, it can do that. But it has trouble with the carry. The reason is that it doesn't understand addition specifically; it does pattern matching and produces the most likely answer given its training set. For the same reason, it has no idea about multiplication and will hallucinate results once you step outside its training data.
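To make the carry point concrete, here's a minimal Python sketch of my own (not from the podcast): a toy "adder" that handles each column independently and simply drops the carries. It gets every no-carry sum right and every carry sum wrong, which mirrors the failure pattern described above. It's only an illustration of the behaviour, not a claim about how ChatGPT actually computes.

```python
import random

def has_carry(a: int, b: int) -> bool:
    """True if adding a and b produces at least one carry between columns."""
    while a or b:
        if a % 10 + b % 10 >= 10:
            return True
        a, b = a // 10, b // 10
    return False

def columnwise_toy_adder(a: int, b: int) -> int:
    """A deliberately naive 'adder' that sums each column independently and
    drops the carries -- a toy stand-in for shallow pattern matching."""
    result, place = 0, 1
    while a or b:
        result += ((a % 10 + b % 10) % 10) * place
        a, b, place = a // 10, b // 10, place * 10
    return result

def sample_problem(with_carry: bool) -> tuple[int, int]:
    """Pick a pair of three-digit numbers that does / doesn't involve a carry."""
    while True:
        a, b = random.randint(100, 999), random.randint(100, 999)
        if has_carry(a, b) == with_carry:
            return a, b

for label, with_carry in [("no carry", False), ("with carry", True)]:
    n, correct = 200, 0
    for _ in range(n):
        a, b = sample_problem(with_carry)
        correct += columnwise_toy_adder(a, b) == a + b
    print(f"{label}: {correct}/{n} column-by-column answers are right")
```

Running it prints 200/200 for the no-carry problems and 0/200 for the ones with a carry: getting the "easy" cases right tells you nothing about whether the underlying procedure is actually addition.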
This point was further driven home by the recently discovered vulnerability in AlphaGo, the Go-playing AI. Knowing that it didn't deeply understand the concepts of the game and had instead studied moves, researchers concocted a scenario it was unlikely to have seen before, built around a key concept of the game. They were able to beat AlphaGo this way.
That lack of deep conceptual understanding is, I believe, our main advantage as engineers over AI. AI can clearly generate boilerplate and simple code faster than a software engineer. What AI can't do is reason about the problem space, understand the domain, make trade-off decisions, and ultimately ensure that the code being written delivers value to users as intended.
In my view, this kind of conceptual and domain understanding is a uniquely human task, one that AI can assist with rather than surpass.