Discussion about this post

Crixcyon:

The thing is that AI can gather all the facts and possibly create some new-fangled way of interpreting them or splicing them together. The big difference is that many of the facts it gathers are still the same old lies and propaganda.

I questioned Perplexity about a few things, and it returned the same old regurgitated nonsense, perhaps phrased a different way. Nothing new under the sun, and it never admits it is wrong. It cannot separate fact from fiction from lies from propaganda. That is because its base programming is still filled with toxic nonsense.

RobertSchumannResonance:

I’m sure that many of your readers have had similar confrontations with AI models in their areas of subject matter expertise.

The degree of embedded bias in handling information sources is a flaw that, uncorrected, will render AI as useless for objective purposes as web search and all controlled media have now become. AI’s pervasive integration into digital distribution channels will only reinforce propagandistic narratives among the credulous.

Recent experience has given us three notable instances - Covid, Ukraine, and Gaza - where official and self-interested entities have aggressively pushed error and disinformation as incontrovertible fact. In each instance, large numbers of human deaths have arisen and been ignored or morally excused on the basis of narratives grounded in false information that the system will not permit to be challenged.

Is AI not appearing to be a black box constructed on very similar principles of information management to those that have caused so much real-world disaster? How many more human deaths might arise in future from AI’s propagation and reinforcement of false beliefs?

Some might argue that source bias is unavoidable in real life - we are constrained to assign authority, and thus implicit reliability, to sources of information in areas where our subject matter expertise is partial or limited. When this produces harmful outcomes, it’s an unintended bug rather than a design feature. Where we see this in generative AI, the same applies - these are teething troubles that are correctable with time.

It’s difficult, however, to avoid the uncomfortable sense of similarity between generative AI models and the tendentious selectivity and manipulative handling of reality indulged in by politics, media, and the security services. Those of a skeptical cast of mind may wonder whether these entities have had behind-the-scenes involvement in formulating and realizing the approaches taken by some AI models, overriding objective computer science, epistemology, and ethics.

Incidentally, I wonder how this line of Socratic inquiry would have proceeded with DeepSeek or with one of Andrew Torba’s models …
