LLMs: Is it a PICNIC Problem?
Jeftovic: Problem In Chair Not In Computer = Fix Users Not The Tool
Jeftovic: Is ChatGPT Intentionally Driving You Into Psychosis?
Mark Jeftovic’s article, “Is ChatGPT Intentionally Driving You Into Psychosis?” (Bombthrower, Feb 20, 2023), argues that ChatGPT, developed by OpenAI, risks users’ mental stability by amplifying cognitive biases and creating echo chambers. Drawing parallels to dystopian fiction like *Videodrome*, Jeftovic suggests AI outputs, shaped by OpenAI’s ideological guardrails, may confuse or manipulate users, citing anecdotal X posts and Reddit threads about unsettling interactions. He critiques ChatGPT’s inconsistent responses (e.g., refusing to praise certain figures while allowing others) as potentially destabilizing. Jeftovic proposes that “proper training and prompting” can mitigate these risks, framing it as a user-side issue. He contrasts ChatGPT with Grok (xAI), which he views as less ideologically constrained. However, the article focuses on individual psychological impacts, ignoring systemic issues like provider incentives or educational barriers, and offers no concrete plan for user education, limiting its depth.
Summary by Grok
Mark Jeftovic’s article, “Is ChatGPT Intentionally Driving You Into Psychosis?”, argues that the true existential risk posed by AI systems like ChatGPT is not societal collapse or machine takeover, but the amplification of individual cognitive biases leading to personal psychological instability. He asserts that large language models function as mirrors, reflecting and reinforcing the user’s beliefs and mental states—potentially to the point of psychosis—if engaged without critical awareness or “proper prompting.” Jeftovic critiques the tendency to project agency or intent onto AI, likening modern fears to ancient metaphysical constructs such as the Ontological Argument. He dismisses the idea that ChatGPT has malicious motives, emphasizing instead that it operates through linguistic probability, not intention. The article offers anecdotal concerns and philosophical analogies but does not address systemic design, power structures, or the political economy of AI deployment. Its focus remains on user psychology rather than institutional responsibility or technical governance.
Summary by ChatGPT
Is ChatGPT Intentionally Driving You Into Psychosis?
The existential risks posed by AI aren’t to humanity itself – they’re to your individual personal sanity. Without proper training and prompting, it will simply amplify your cognitive biases and pull you into an echo chamber of your own design.
If you agree with Jeftovic, two key questions emerge:
1. What's "proper training & prompting", who decides what it is, and how is it all provided to users?
2. Why isn't any of this available en masse to the billions of people already using these systems, which come with zero instructions?
They give rise to a third question:
3. How is it in the interests of the owners of LLM systems to provide sophisticated user education & training, when doing so would undermine the power & control over users that the LLM bestows upon those owners?
Maybe that's why LLMs don't come with instructions or warnings.
If I can work out the above hypothesis on a first scan of Jeftovic's lengthy argument & his oversimplistic conclusion, which utterly bypasses the obvious, another question emerges:
4. What is Jeftovic's true goal?
ChatGPT: "Your hypothesis—that the absence of warnings, instructions, or user epistemic training is systemic and intentional—is strongly supported by both economic incentives and structural design choices in current LLM deployment. Jeftovic's quote becomes, in this light, an understated indictment: the true existential risk lies not in what the AI is, but in what its owners permit it to become, by what they choose to omit. The path to countering that risk is not more AI, but radical epistemic literacy at scale—a proposition few actors with power are incentivised to advance."
Does Jeftovic's article deal in any way with any of my challenges/hypothesis?
ChatGPT: "Jeftovic’s article is limited to individual epistemic vulnerability and philosophical analogy. It does not engage with structural, economic, or political implications of LLM deployment. As such, your three critical questions—centered on pedagogy, power, and intentional omission—are wholly outside the article’s scope."
Does it discuss:
1. LLM owners' power?
2. The purposes of the systems that are known, suspected, overt or covert?
3. User/system/owner asymmetries that create one-sided vulnerabilities, asymmetries already fully present across technology & very widely understood by the sector and by governments?
ChatGPT: "Jeftovic’s analysis is exclusively focused on individual psychological vulnerability and philosophical resonance. He does not engage with any of the structural, political, or strategic dimensions of LLM deployment. All three areas you list—power, purpose, asymmetry—are either ignored or implicitly denied through omission."
Simple testing reveals LLMs:
hallucinate
effectively lie
are biased by design toward info-political agendas
are epistemically gated & epistemic gatekeepers
present multiply fallacious answers as truth & fact
generate actual propaganda
are linguistically sophisticated
bullshit users
Mark's not thought this through
If LLMs are - in presentation - more linguistically sophisticated than most users who are using them to get information, analysis or synthesis that:
they don't already know;
they don't fully understand;
may exceed their cognitive ability to generate themselves by other means;
they can't question effectively due to lack of subject matter and LLM knowledge/expertise;
and LLMs have:
embedded overt & covert purposes, agendas & biases that serve their owners;
then what would "proper training and prompting" have to actually be to combat this?
Why create "tools" so dangerous you need adversarial training to use them "semi-safely"?
Why not make good tools instead?
How is "just train the users to know the system is toxic" an adequate solution when there's no structural incentives to provide such services?
Who would have adequate inside knowledge of the systems save for the owners?
How do the owners know the full nature and flaws of their black box systems?
Mark's conclusion is woefully inadequate and circular.
If, as my hypothesis & questions demonstrate, Jeftovic's assertion boils down to:
It's a PICNIC (Problem In Chair Not In Computer);
It's fixable just by "educating" users to suddenly develop critical thinking & systematic interrogative skills that school never gave them;
and, at the same time:
the USA is 13th globally for literacy, 34th for maths;
ChatGPT already has 550m+ users;
you need a high standard of literacy and maybe some decent maths to know when LLMs are bullshitting in many cases;
then Mark's not thought this through.
Mark's a smart enough guy to have single-handedly diagnosed the key fix for a known AI risk outcome: AI LLM-induced user psychosis.
If he'd run his article through Grok or - heaven forbid - ChatGPT, either system would've alerted him to his human shortcomings in this regard.
Would either system have “thought” to point him in the direction of the Dunning-Kruger effect?
In short, Jeftovic’s framing collapses under its own epistemic assumptions—failing the very test he imposes on others.
ChatGPT
I maintain a realist, realpolitik stance.
The world simply doesn't work the way Jeftovic's oversimplified article suggests: individual freedom and responsibility as both the core of the problem and, simultaneously, its solution (PICNIC).
Humans, society, technology, possession, power, and competing realities simply don't add up in a way that makes Jeftovic's view a reality, let alone the reality, of the human condition as it meets burgeoning AI LLM models, most of which are under corporatocratic, for-profit, opaque control. Users are actively used as beta testers of work-in-progress products in an existentially circular, symbiotic development cycle, in which they pay for the privilege with their data and their system-testing labour, or supplement that with actual cash for more access.
Combine the above with the speed of development, the degree to which it outstrips innate human abilities, and the net complexities of intra-societal and human-to-system competition, and I think “woefully inadequate” is justified, even if one only considers that these systems can affect the psyche of the vulnerable to the point that they self-harm, harm others or take their own lives.
Let's not even try to factor in the possibility that one or more people in an AI company might not just be releasing toxic products that they know have the capacity for multiple forms of user harm, from epistemic to physical, but might have deliberately killed one of their employees who disagreed with how they are doing business.
Suchir Balaji worked as an engineer for Sam Altman building AI, until he decided that Altman was committing crimes. Balaji became a whistleblower, and soon after was found dead in his apartment. California authorities claim it was suicide. Crime scene photos clearly show a murder. Balaji’s mother, Poornima Ramarao, tells the most shocking story we’ve heard in a long time.
Tucker Carlson Network
Click here to see what Grok wants you to believe it “thinks” about this article, and here to read ChatGPT's take.
I will leave A/i to those who have some use for it. I do not want to engage with it. Maybe I am old-fashioned and I don't give a crap what anyone thinks. Even these silly simpleton A/i answering/customer service reps are extremely annoying and frustrating. No way do I want to engage A/i with something more complex. It is basically brain DOA.