Ask HN: How Do You Deal With People Who Trust LLMs?
The Hacker News thread isn't about whether AI is useful. That question has been settled in most workplaces. The thread is about what happens when people stop treating large language models as tools and start treating them as authorities.
The original poster describes a scenario that's becoming familiar: someone asks a chatbot a factual question, receives a polished, well-formatted answer, and accepts it as objective truth without checking a reputable source. It's not a hypothetical. It happens in meetings, classrooms, Slack channels, and code reviews every day.
The replies surface a pattern worth examining carefully.
Two Schools of Thought
The discussion splits roughly into two camps, and the division is more interesting than either side's position.
The first group argues that LLMs are just the newest version of an old problem. People have always trusted weak sources — chain emails, blog posts with no citations, cable news segments that oversimplify complex issues. From this perspective, AI is not uniquely dangerous. It's a faster, more polished delivery mechanism for the same epistemological laziness humans have always exhibited.
The second group disagrees. They argue that LLMs are structurally different from previous sources of misinformation. A bad website can be inspected. You can look at the URL, check who wrote the article, find the original study. A chatbot answer arrives as a clean, confident summary with no visible source chain. The user skips the step where they would normally ask "who is saying this?"
Both groups have a point. And the tension between them is where the practical advice lives.
Why LLM Confidence Is Socially Complicated
Here's what makes this problem hard to address in practice: challenging someone's reliance on a chatbot sounds like you're being anti-technology. Saying "chatbots hallucinate" in 2026 can land the same way as saying "you can't trust the internet" did in 2005. Technically accurate, socially awkward, and easily dismissed as fear-motivated.
But staying silent has its own cost. When someone presents an LLM-generated summary as evidence in a work meeting or academic paper, and no one pushes back, the standard for what counts as acceptable evidence quietly drops. The issue isn't that the specific claim is wrong. The issue is that the verification step has been removed from the process.
LLM outputs are persuasive because they're coherent. They look finished before they're verified. Old search behavior forced some contact with source material — you saw the URL, you skimmed the article, you noticed if the site looked sketchy. LLMs flatten all of that into a plausible paragraph. The provenance is invisible.
Practical Patterns From the Thread
The most useful contributions in the thread weren't philosophical. They were tactical. Here's what people reported actually working.
Challenge the claim, not the person. Instead of "you shouldn't trust ChatGPT," try "what source is that based on?" This frames you as curious rather than adversarial. It also puts the burden on the claim's foundation, which is where the real weakness usually lives.
Ask for provenance. Treat the person as having genuine expertise and ask which book, expert, or study supports the specific point. This works because it's socially generous — you're assuming they know something — while still surfacing whether there's actual backing behind the claim.
Separate low-stakes and high-stakes use. Nobody needs to audit whether an LLM correctly summarized a meeting agenda. But using an LLM output as the basis for a hiring decision, medical claim, or legal argument deserves scrutiny. Not all uses carry equal risk, and acting like they do makes you seem unreasonable.
Demonstrate the problem instead of lecturing. Pull up a chatbot and push it on a topic you know well. Ask it to take a position, then contradict it. Watch it flip to agreement within two messages. This concrete demonstration is more persuasive than any abstract argument about AI limitations. People see the flattery engine in action and adjust their own behavior.
Protect your own baseline. You cannot correct every weak claim that passes through your social and professional circles. Trying to do so will exhaust you and alienate everyone. Instead, model better habits. When you cite something, show your source. When you use AI assistance, say so. When you're uncertain, say that too. Your consistent behavior sets a standard that others may eventually follow.
The Biggest Risk in Professional Settings
The thread's most sobering contributions focused on workplaces. The risk isn't just hallucination — that's a known limitation that most professionals have heard about by now. The deeper risk is unearned confidence.
An LLM-generated answer sounds authoritative. It moves faster than the willingness to verify it. In a fast-moving meeting, a polished AI summary can redirect a discussion before anyone has time to check whether the underlying data supports the conclusion. The answer arrives pre-packaged, and the social cost of slowing down to verify it feels higher than the risk of accepting it.
This dynamic shows up in code reviews where someone submits AI-generated code that looks clean but contains subtle logical errors. It shows up in research where an LLM summary of a paper misrepresents the authors' actual findings. It shows up in hiring where AI-generated interview questions test for surface knowledge rather than deep understanding.
The pattern is consistent: the output looks more finished than it is, and the social pressure to keep moving discourages the friction that verification requires.
What This Thread Actually Reveals
Reading through the Hacker News discussion, the most striking thing isn't the advice. It's the shared recognition that this is a social problem, not a technical one.
No amount of model improvement will fix the dynamic where people accept confident-sounding output without verification. Better models produce more plausible output, which can actually make the problem worse. The fix has to happen in habits, norms, and expectations — the human layer, not the model layer.
The thread doesn't offer a clean resolution. What it offers is a set of tested approaches for navigating a world where the line between "tool" and "authority" is getting blurrier. Not by rejecting AI, and not by accepting its output uncritically, but by maintaining the verification habits that reliable knowledge has always required.
That's a harder sell than either "AI is great" or "AI is dangerous." But it's the position that actually matches how the technology is being used — imperfectly, socially, and with real consequences.