I think the reason for this is that using LLMs is basically outsourcing all cognitive capacity to a machine, so the cognitive decline is comparatively bigger.
Does using a calculator cause a cognitive decline? Absolutely, but since you still need to know the order of operations (at least with simple calculators), how to interpret brackets, and so on, the decline is comparatively small. The same goes for, say, a thesaurus: you outsource the part of your cognitive capacity used to learn big words, but you still need to know grammar to string sentences together.
With ChatGPT, though, you do almost nothing: asking a simple question is all it takes. Your brain doesn’t need to work at all, and once you’re used to that, it gets harder and harder to make it work when you need it to…
It’s more insidious than that. It takes over all the tedium of the cognitive process, but it can’t actually accomplish the task (unless the task is basically boilerplate, of the English or programming variety). So, unless you have pretty firm discipline to do for yourself what it “could do if you just give it a couple more tries,” you’re stuck: unable to really get your focus going, but also unable to have the thing do the work for you.
Now that I look at it, I’m pretty sure I’m often slower working with the LLM, depending on the task. I still think it can help enormously, but you definitely have to watch how much you’re having it do and whether it’s really helping. That’s not even addressing the issue of technical debt: someone writing code that “works” but hasn’t been thought through in terms of its ramifications is the whole reason software sucks… and LLMs are not helping that problem at all.