I use it all the time. It is a good partner for challenging me when I am looking for other points of view: “I believe x due to y. Challenge my point of view.”
It helps me explore a topic fast, so that I know the lingo to search for it myself. I use it for low-stakes decisions where it often succeeds, such as shopping and research for shopping. I validate the results every time.
Is it a net negative for society? Not sure, maybe. Will it go away? No. So we should embrace it, though not big tech AI but smaller LLMs.


The main issue with conversational responses from LLMs is their tendency toward confidently incorrect responses, or flat-out, well-disguised falsehoods. It isn’t usually blatant, but if 95% of what it says is true yet stated with 100% certainty and apparent proof, how long before the other 5% starts to poison your own reasoning?
Are LLMs completely useless? No. But challenging your world views, reasoning, and logic against systems that lie and manipulate might not be the best use of said systems.
Exactly. It’s like doing a Google search and relying only on the first result. Only when you point out its error will it seek out additional info.