I use it all the time. It's a good partner to challenge me when I'm looking for other points of view: "I believe x due to y. Challenge my point of view."
It helps me explore a topic fast, so that I learn the lingo and can search for it myself. I use it for low-stakes decisions, where it often succeeds, such as shopping and shopping research. I validate the results every time.
Is it a net negative for society? Not sure, maybe. Will it go away? No. So we should embrace it, though not big-tech AI: smaller LLMs instead.


I’m not so sure about their utility as a tool for critical thinking, though that might just be because I’ve spent most of my life training my brain to do that sort of reflection and argumentation for me. That’s obviously not the norm, so if people can find utility in anti-sycophantic roleplaying LLMs to achieve a mode of thought to which they’re unaccustomed, then perhaps that’s a good thing… But mainly:
That is exactly how I use it, aside from having it write small scripts for me.
I think of LLMs like intuition rather than intelligence: they’re incredibly stupid and wrong and incapable of reason, intention, or thought. But they’re a vague and inaccurate amalgamation of all writing on the internet and that can be useful for doing remedial tasks or getting a rough direction to go in.
Prompting a subject can bring up associated keywords, paradigms, and frameworks niche to domain experts which can greatly accelerate my ability to know what to search for and how to think about the questions I have.
They’re damn near useless at answering them though, of course… But it helps me orient.
Often you don’t know what you don’t know, so your reflection and argumentation has to be based on something. To achieve your goal, you also have to do enough research for that reflection to be valuable.
LLMs are great for finding “what you don’t know” fast.
This strengthens your ability both to reflect and to research topics manually, which should be the last stop.
I’ve used LLMs to have conversations about technical topics I’m not familiar with. I ask it how something works, it answers, and then I ask several follow-up questions to clarify various things I’m interested in.
Usually, I have some ideas about how to implement a particular theory or technology, and I bounce those ideas off the LLM. Sometimes my ideas were already invented about 100 years ago, sometimes they are impractical, and the LLM tells me exactly why they would or wouldn’t work.
I’m also using a custom agent that has been specifically tailored for this purpose. Normal LLMs are far too supportive, lack critical thinking, and don’t challenge my ideas, so I had to write my own agent prompt.
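For anyone curious what such an agent prompt might look like: here's a minimal sketch. The `SYSTEM_PROMPT` wording is my own illustration, not the commenter's actual prompt, and `build_messages` is just a hypothetical helper; pass the resulting messages to whatever chat API you use.

```python
# Sketch of an anti-sycophancy "critic" agent, assuming the common
# system/user message format used by most chat APIs.

SYSTEM_PROMPT = (
    "You are a critical reviewer, not a cheerleader. Do not flatter or "
    "agree by default. For every claim or idea I present: state the "
    "strongest counterargument, note prior art if the idea already "
    "exists, and say plainly whether it is impractical and why."
)

def build_messages(user_claim: str) -> list[dict]:
    """Wrap a user's claim in the critical-reviewer system prompt."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_claim},
    ]

messages = build_messages("I believe x due to y. Challenge my point of view.")
```

The system message does the heavy lifting here: it front-loads the instruction to disagree, which counteracts the default agreeable tone most assistants are tuned toward.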
Anyway, I think this system works well for me. This way I’ve been able to dive deeper into all sorts of random topics, such as why cocoa powder doesn’t mix with milk, why a battery bank shows confusing state-of-charge readings, how fluid coupling is used in heavy machinery, etc. Fascinating stuff. It’s a bit like watching a custom documentary made just for my odd interests.
If I had to read about these things in magazines or books, I would not have been able to dive as deep as fast. On the other hand, books also give you a general overview, and they include details that I may not be interested in, so I would either end up reading stuff I don’t care about or just skimming those parts. In the latter case, I would end up spending hours looking for the information I care about, not finding it, and walking away with less information.