
Sam Harris did a fantastic podcast about AI recently. One thing I found particularly interesting was the explanation of how these models traverse statistics rather than learn real concepts. ChatGPT has seen single-digit addition frequently, so it knows 3 + 4 and the like. Similarly, if you add two three-digit numbers with no carrying involved, it can manage that, but it struggles with the carry. The reason is that it doesn't understand addition specifically; it does pattern matching, producing the most likely answer given its training set. For the same reason it has no real grasp of multiplication and will hallucinate results once you step outside its training data.
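
To make the carry point concrete, here's a toy sketch (my own illustration, not from the podcast): an adder that has only "memorized" single-digit sums and applies them column by column. It reproduces the failure mode above: sums with no carrying come out right, but any column that overflows produces a wrong answer, because nothing in the lookup table encodes the carry rule.

```python
# Toy illustration: a "pattern-matching" adder that has memorized
# single-digit sums but has no concept of carrying.

# "Training data": every single-digit sum, stored as literal patterns.
memorized = {(a, b): a + b for a in range(10) for b in range(10)}

def pattern_match_add(x: int, y: int) -> int:
    """Add two 3-digit numbers column by column using only memorized pairs."""
    xs, ys = str(x).zfill(3), str(y).zfill(3)
    digits = []
    for dx, dy in zip(xs, ys):
        s = memorized[(int(dx), int(dy))]
        digits.append(str(s % 10))  # drops the carry: no rule exists for it
    return int("".join(digits))

print(pattern_match_add(123, 456))  # 579 -- correct, no carries needed
print(pattern_match_add(123, 789))  # 802 -- wrong, the real answer is 912
```

The missing piece is exactly the concept: a real adder threads the carry from each column into the next, a rule that no amount of memorized digit pairs will ever contain.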

This point was further driven home by a recently discovered vulnerability in KataGo, a superhuman Go-playing AI in the AlphaGo lineage. Suspecting that it didn't deeply understand the concepts of the game, and had instead learned patterns of play, researchers concocted positions it was unlikely to have seen before, built around a basic concept of the game: the life-and-death status of large circular groups. They were able to beat it reliably this way.

That lack of deep conceptual understanding, I believe, is our main advantage as engineers over AI. AI can clearly generate boilerplate and simple code faster than a software engineer can. What it can't do is reason about the problem space, understand the domain, make trade-off decisions, and ultimately ensure that the code being written delivers the value the user actually intended.

In my view, this kind of deep conceptual and domain understanding is a uniquely human task, one that AI can assist with but not surpass.
