I thought this was a pretty good video. Frankly, I disagree with what a lot of people in anti-AI communities say about the superiority of doing things the hard way. I don’t think there’s anything wrong with the easy way if it gets you the same result, and I don’t think we’ve lost anything meaningful just because, say, the average person no longer has any phone numbers memorized. I think technology making life easier is a good thing.
But, as Rebecca Watson points out, so-called AI doesn’t just replace things like rote memorization. It replaces thinking. That’s dangerous, and therein lies the difference between AI and other tools.
I think there’s a really important distinction between “getting the same result” when that outcome is guaranteed and when it isn’t. Using a brick instead of a hammer to squash something will get you the same result every time. But with an LLM there’s no guarantee you’ll get any specific outcome - will it hallucinate this time or not? - so even if it gives you what you wanted this time, you have to account for the probability that it would not have (and that you wouldn’t have known it let you down).