May 4, 2023

I talked about this issue with the programmer who created my webstore https://zdrowersi.pl

He asked a simple question: how many times did we talk while designing and programming the store? How many times did you say "it should be like this," "please change it," "I'd like it a bit different," "a function like this would be useful"? As long as there is no problem-free communication with the AI, nothing can replace the human <-> human relationship.

Mar 14, 2023

Sam Harris did a fantastic podcast about AI recently. One thing I found particularly interesting was the explanation of how these models traverse statistics rather than learn real concepts. ChatGPT has seen single-digit addition frequently, so it knows 3 + 4, etc. Similarly, if you add a three-digit number to another three-digit number with no carry-over, it can do that. But it has trouble with the carry. The reason is that it doesn't understand addition specifically; it understands pattern matching and the most likely answer given its training set. For this reason, it struggles with multiplication and will hallucinate results once you step outside its training data.
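To make the carry point concrete, here is a small sketch (my own illustration, not from the podcast) of the grade-school carrying algorithm. This explicit procedure is what a pattern-matcher never internalizes: it memorizes frequent digit pairs instead of the general carry rule below.

```python
def add_with_carry(a: str, b: str) -> str:
    """Grade-school addition, digit by digit, right to left."""
    # Pad the shorter number with leading zeros so the columns line up.
    a, b = a.zfill(len(b)), b.zfill(len(a))
    result = []
    carry = 0
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        result.append(str(total % 10))  # digit to write down
        carry = total // 10             # digit to carry to the next column
    if carry:
        result.append(str(carry))
    return "".join(reversed(result))

print(add_with_carry("456", "789"))  # 1245 -- every column carries
```

The rule fits in a dozen lines, yet it generalizes to numbers of any length, which is exactly what statistics over seen examples does not give you.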

This point was further driven home by a recently discovered vulnerability in AlphaGo, the Go-playing AI. Knowing that it didn't deeply understand the concepts of the game, and instead had studied moves, researchers concocted a scenario that was unlikely to have been seen before and that exploited a key concept of the game. They were able to beat AlphaGo this way.

That lack of deep conceptual understanding, I believe, is where our main advantage as engineers over AI lies. AI can clearly generate boilerplate and simple code faster than a software engineer. What AI can't do is reason about the problem space, understand the domain, make trade-off decisions, and ultimately ensure that the code being written delivers value to users as intended.

In my view, this complex concept and domain understanding is a uniquely human task, and could only be assisted by AI, rather than surpassed by it.


Good points!

This actually raises a question in my mind. What if eventually everyone is using AI (for example, to write code)? Does that mean there's no more unique training data being generated by humans, so the ML will never get past that local maximum?
