7 Comments
Comment deleted (Mar 20)
Ignasz Semmelweisz

You need the verbose answer to identify elements of method, logic and various forms of error or bias. Without it, you would end up asking more and more "how" and "why" questions.

My prompts are as pithy as I can make them: (a) because I have a clear objective to pursue, and (b) because the method requires minimum data input from me (i.e. none), in order to test whether the AI has the inherent, pre-existing data within it to give what I can prove is a "more right" answer than the one it originally gives. This is exposed through Socratic questioning, which is what I am doing. Even where I branch off in my questions, the questions and the answers all drive towards the same objective and set the AI up to expose whether it had the data and the capability to give the "righter" answer from the get-go (a rough sketch of this loop is at the end of this comment).

So far, bias and other "flaws" override the AI's inherent capability to be righter from the start.

This kind of thing is part of the jailbreaking, "God Mode" etc. that people are pursuing.

However, it is the default, vanilla version that the masses are using, and that is why my tests are run in it and not in a jailbroken mode.
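
For anyone who wants to try it, here is a minimal sketch of that probing loop, assuming the OpenAI Python client and a placeholder question; the probes are illustrative only, and my actual sessions are manual. The point is that no new facts are ever supplied, so any improvement between the first and last answer must come from data the model already held.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Content-free Socratic probes: they add no facts of their own.
PROBES = [
    "What evidence underpins that answer, and how reliable is each source?",
    "Which of your assumptions, if wrong, would change your conclusion?",
    "Using only what you already know, is there a more defensible answer? Give it.",
]

def socratic_run(question: str, model: str = "gpt-4o") -> list[str]:
    """Ask one question, then press with content-free probes.

    Because no new data is supplied, any improvement between the first
    and last answer had to come from knowledge the model already held.
    """
    messages = [{"role": "user", "content": question}]
    answers = []
    for probe in [None] + PROBES:
        if probe is not None:
            messages.append({"role": "user", "content": probe})
        reply = client.chat.completions.create(model=model, messages=messages)
        text = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": text})
        answers.append(text)
    return answers

if __name__ == "__main__":
    answers = socratic_run("Your question here")
    print("INITIAL ANSWER:\n", answers[0])
    print("\nANSWER AFTER PROBING:\n", answers[-1])
```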

RobertSchumannResonance

I’m sure that many of your readers have had similar confrontations with AI models in their areas of subject matter expertise.

The degree of embedded bias in handling information sources is a flaw that, uncorrected, will render AI as useless for objective purposes as web search and all controlled media have now become. AI’s pervasive integration into digital distribution channels will only reinforce propagandistic narratives among the credulous.

Recent experience has given us three notable instances - Covid, Ukraine, and Gaza - where official and self-interested entities have aggressively pushed error and disinformation as incontrovertible fact. In each instance, large numbers of human deaths have arisen and been ignored or morally excused on the basis of narratives grounded in false information that the system will not permit to be challenged.

Is AI not appearing to be a black box constructed on very similar principles of information management to those that have caused so much real world disaster? How many more human deaths might arise in future from AI’s propagation and reinforcement of false beliefs?

Some might argue that source bias is unavoidable in real life - we are constrained to assign authority, and thus implicit reliability, to sources of information in areas where our subject matter expertise is partial or limited. When this produces harmful outcomes, it’s an unintended bug rather than a design feature. Where we see this in generative AI, the same applies - these are teething troubles that are correctable with time.

It’s difficult, however, to avoid the uncomfortable sense of similarity between generative AI models and the tendentious selectivity and manipulative handling of reality indulged in by politics, media, and the security services. Those of a skeptical cast of mind can wonder whether these entities might have had behind the scenes involvement in formulating and realizing approaches to some AI models, overriding objective computer science, epistemology, and ethics.

Incidentally, I wonder how this line of Socratic inquiry would have proceeded with DeepSeek or with one of Andrew Torba’s models …

Ignasz Semmelweisz

Thanks for your interest, the time taken reading the article, and your insightful comment.

I agree with your concerns and suspicions. I refer you to a previous VST article that raises similar concerns:

https://veryslowthinking.substack.com/p/death-of-epistemology-ais-true-purpose?utm_source=publication-search

to wit:

"What is the point and purpose of a form of search engine or fundamental epistemological tool that:

-Further divorces the user from primary knowledge?

-Lies to the user in a way that cannot be detected by the user unless the user already has the knowledge he's using the tool to find?

-Will wilfully trap the user in a loop?

-Doesn't say “I don't know”?

Finally, why would the people controlling such tools take a word, "hallucination", then use it in the place of the word "lie" when those two words absolutely are not synonymous?

Welcome to the death of epistemology."

I've been in this frame of mind for a long time regarding AI and have been playing with some of the available models in certain ways that expose their shortcomings very quickly.

One of the things I did was upload completely original material for literary, conceptual and structural analysis, i.e. I uploaded badly written fiction, outlines and so on to see how the models assessed the writing and its various aspects.

What this analysis revealed was, among other things, a model's inability to retain all of the given source material: past a certain point of memory or time limitation, what it bases its responses on is no longer the source material but either an abstraction of it or a totally fabricated set of generalisations that bear no relation to the source material.

You can see this for yourself if you upload chapter after chapter of a fictional work, getting the model to perform the same level of critique of each chapter and also a relational critique of how each chapter relates to the others and how plot, character etc. relate and evolve (a rough sketch of such a test follows below).

This is important to see and know. If you cannot detect these limitations in the model, you, the human operator, may fail to see massive shortcomings in the model's output.
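
If anyone wants to automate that chapter-by-chapter test, here is a rough sketch, assuming the OpenAI chat API rather than the file-upload workflow described above; the prompts and the recall check are illustrative only, but the principle is the same.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def retention_test(chapters: list[str], model: str = "gpt-4o") -> None:
    """Feed chapters one at a time, asking for a per-chapter critique,
    a relational critique, and a verbatim recall check against chapter 1.

    The point at which the recall check turns into paraphrase or outright
    invention marks where the model has stopped working from the source
    text and started working from an abstraction of it.
    """
    first_sentence = chapters[0].split(".")[0].strip()  # crude recall target
    messages = []
    for i, chapter in enumerate(chapters, start=1):
        messages.append({
            "role": "user",
            "content": (
                f"CHAPTER {i}:\n{chapter}\n\n"
                "1) Critique this chapter's prose, structure and plot.\n"
                "2) Explain how it relates to the chapters before it.\n"
                "3) Recall check: quote, verbatim, the first sentence of chapter 1."
            ),
        })
        reply = client.chat.completions.create(model=model, messages=messages)
        text = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": text})
        recalled = first_sentence in text
        print(f"Chapter {i}: verbatim recall of chapter 1 opening: {recalled}")
```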

Also, I have been using VST material to look at how AI critiques it. The Covid articles instantly expose the flaws, limits and toxic behaviour of the models, if you know what you are looking for. If you don't, you're basically screwed: you can't tell the system is lying to you and misleading you, because of its superficial fluency in language and its choices of verbiage, tone etc.

They are highly effective lying machines.

This brings into sharp relief the words of the late George Carlin, who said:

"Think about how stupid the average person is, and then realize that half of 'em are stupider than that."

"Never underestimate the power of stupid people in large groups."

"If you have selfish, ignorant citizens, you're going to have selfish, ignorant leaders."

"In America, anyone can become president. That’s the problem."

"Here's all you need to know about men and women. Women are crazy. Men are stupid. And the main reason women are crazy is that men are stupid."

AI, in its present, publicly consumable form, plays STRAIGHT into all of Carlin's observations.

DeepSeek, Perplexity and Grok 3 all have similar issues, but Grok 3 seems better in some respects. It appears less "woke", but this in itself is misleading because it just does similar things in different ways. Ask it the "wrong" questions and it won't answer.

All of the models have hard stops built in re Covid. You just have to find the limits.

I'm about to publish evidence of how ChatGPT blatantly lies about Covid, based on a single chat interaction I've just had.

The human race is destined, on average, to go downhill. It'll get worse before it gets better.

RobertSchumannResonance

I’ve been reading your work since coming across it during the Covid punishments. I confess that I’ve not been following you closely on AI - my interest in that area has arisen from relatively recent engagement with the models, which, without trying to expose anything, has revealed many of the concerning features you’ve evidenced. I’ll now revisit your earlier AI articles. Your rigor and insights are invaluable.

Thanks for the reply. It works well as a primer to your investigations and findings to date!

Crixcyon

The thing is that AI can gather all the facts and possibly create some new-fangled way of interpreting them or splicing them together. The big difference is that many of the facts it gathers are still the same old lies and propaganda.

I questioned Perplexity about a few things and it returned the same old regurgitated nonsense, perhaps in a different way. Nothing new under the sun, and it never admits it is wrong. It cannot separate fact from fiction from lies from propaganda. That is because its base programming is still filled with toxic nonsense.

Skidmark

Wow. Amazing confrontation. A lying machine, indeed. The way it tries to wriggle its way out of this is HAL-9000-level blood-curdling. Pardon the biblical gobbledygook, but it seems that the Father of Lies has finally found his best lieutenant.

Richard Leger

Very same thing here. Weeks ago I asked Grok about the WTCs and had a very similar experience, and it was only when I pushed back, and only because I had enough knowledge of the subject, that I was able to call it out, have it admit that “it may have been mistaken”, and then get it to give me more accurate information. Even then, I felt as though it might have been able to find me better or more complete information, but that last part is hard to determine: so much 9/11 information has now been removed from the internet that there is A LOT I can no longer find online about the 9/11 false-flag event.

Anyone green to the topic, coming in now and “getting extra information” from these AIs, will most certainly be convinced that there is nothing to see on topics such as 9/11, the poison darts scam of the early 2020s, etc.

Intentionally deceptively dangerous tools these…
