Why You're Getting an "AI Chat Assistant" Whether You Like It or Not

If a company is levering an "AI assistant" into its product(s), you can reliably identify it as a data trader. Brave, Proton, Mozilla, Kagi, JetBrains and [insert latest bandwagon-hopper here] are simply telling on themselves. But it seems we still haven't taken the hint.

There's no point in screaming "I DO NOT WANT THIS!!!". They know you don't want it. They have decisive stats telling them, unequivocally, that people are taking proactive steps to avoid AI. Here's how JetBrains weasel-worded their way around a disclosure request for their "AI assistant" consumer objection stats...

"We can't say for sure how well these opinions represent the position of all of our customers." - JetBrains

Award-winningly slick weasel-wording, I have to admit. But that level of fact-dodging ingenuity was only necessary because they know perfectly well what the extent of the objection is. They log every last micro-shred of data they can get their grubby hands on. But they won't tell, because the volume of objection is absolutely embarrassingly bloody massive. If they disclosed the volume of objection, the world would demand to know why self-styled "ethical" brands are irrevocably deep-wiring LLM tools into their products, against a wall of protest. And that's a question the tech industry is even more reluctant to answer.


SEARCH ENGINE COVER-UP

Indeed, the search giants - who themselves sit at the epicentre of the "AI" industry - have mounted a monumental cover-up operation to conceal the depth of opposition to the "AI"pocalypse. Just for starters, Google has completely disabled semantic matching on queries relating to the negative impact of "AI".

Semantic matching is, at its simplest, word substitution: the engine recognises that different words can carry the same meaning, works out what your query actually means, and then responds to the meaning - not the words. It's the cornerstone of both websearch and "AI" itself.
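To make that concrete, here's a minimal sketch of the word-substitution idea in Python. The synonym table, function names and matching rule are all invented for illustration - no real search engine works from a hand-written list, but the principle is the same: equivalent words are treated as interchangeable before matching.

```python
# Toy illustration of "semantic matching as word substitution".
# The synonym table is hand-written for illustration only - real
# engines use vast learned models, not lists like this.

SYNONYMS = {
    "slowdown": {"reduction", "decline", "drop", "plunge"},
    "production": {"creation", "output"},
    "content": {"material", "work"},
}

def expand(term):
    """A term plus everything the engine treats as equivalent to it."""
    return {term} | SYNONYMS.get(term, set())

def semantic_match(query, document):
    """True if every query term, or an equivalent of it, appears in the document."""
    doc_terms = set(document.lower().split())
    return all(expand(term) & doc_terms for term in query.lower().split())

# "slowdown" is treated as equivalent to "reduction", so both phrasings
# should surface the same page:
print(semantic_match("slowdown in content production",
                     "a reduction in the production of new content"))  # True
```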

For example, if I enter into Google the search query:

Slowdown in the production of new human content for online consumption

The search engine should understand exactly what I mean, and show me results that relate to the enormous and widely documented plunge in human online content production which has been directly and irrefutably caused by the arrival of LLM systems.

But instead of interpreting the phrase and showing me responses to it, Google reverts to a decades-old practice of randomly picking out words and attempting to match them verbatim - which gives irrelevant results. This means that Google has disabled its entire semantic matching system for this query and others relating to the blatantly obvious damage LLM bots have done to creative content production. And you can prove that this is the case by changing just one word in the query. Now try:

Reduction in the production of new human content for online consumption

I've only changed the first word.

With semantic matching switched on, a single word substitution in a long-tail query should make no difference to the results. But as you'll see, in this case it completely upends the SERP. I don't see a single first-page result appearing in both searches. Additionally, none of the results match the topic I asked for. The one or two articles that come relatively close are only there because the writers happened to use the exact keywords from my query. Most of the articles are totally irrelevant.
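You don't have to take my word for how close those two queries are in meaning. Here's a minimal sketch that scores them with sentence embeddings, assuming the open-source sentence-transformers library and a stock model - my choice of tooling, obviously, not Google's:

```python
# Score the semantic similarity of the two queries with an
# off-the-shelf embedding model (library and model are assumptions).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

q1 = "Slowdown in the production of new human content for online consumption"
q2 = "Reduction in the production of new human content for online consumption"

emb1, emb2 = model.encode([q1, q2])

# Cosine similarity of 1.0 means identical meaning; these two queries
# should land very close to it.
print(util.cos_sim(emb1, emb2))
```

Any engine with semantic matching switched on treats queries that score like this as the same question. When the results diverge wildly anyway, the matching has been switched off.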

This is how desperate the AI industry is to conceal the damage it's doing.

So why has the tech industry chosen to hide the damage done by "AI assistants" and gaslight the army of objectors, rather than simply acknowledging the dissent and only putting the tools where they're wanted?

That'll be because "AI assistants" represent the biggest advance in data-mining potential since the birth of the Internet. Brace yourself...


NEW DATA, NEW DANGER

With "AI assistants", it's not about what comes out. It's about what goes in. The "what goes in" is the reason that brands who claim to be "ethical" are knowingly trashing their reputations. Even after long campaigns of user-gaslightling, "eth tech" companies suffer reputational damage when they insert an LLM tool. But they'll take that. They're prepared to lose paying customers over this, because the long-term monetary value of what goes into an "AI assistant" is unprecedented in the history of data-mining.

So what sort of things do people feed into CrapGPT and its ilk?

In short, everything. And it matters not to the CrapGPT user how much of your private data they shovel into it. I've met these people. They have neither the desire nor the ability to think for themselves. They just want a solution that doesn't involve them using their brains, and in the moment at which they're trying to accomplish that, nothing else is of any consequence. The very last thing on their minds is your privacy. This is why the creepy, weirdo, corporate stalkers of Surveillance Valley are so in love with "AI assistants". They bypass consent at a scale and with a level of motivation never before witnessed.

If a would-be employer wants to nonconsensually feed CrapGPT your CV/resume to produce interview questions, so be it. If a tinpot loan provider wants to nonconsensually feed CrapGPT your entire financial disclosure to get a "prediction" on the consequences of lending you money, so be it. If Romeo Local wants to nonconsensually feed CrapGPT his secret profile of Juliet Local to "find out if she'll leap into bed on a first date", so be it. A low-end and particularly dumb publisher might even feed in your entire fantasy novel to assess its commercial viability. There really are people this stupid. Everywhere. And they don't have warning signs on them.

Fast-forward two minutes...

"Like, hey CrapGPT, write me an original fantasy novel."

CrapGPT: "Certainly! Here's one I found in the input box, which means, according to my terms of service, that it's mine to do what the bloody hell I like with!..."

- Microshaft

Other than withholding our information from everyone, we have no control over which of our personal belongings third parties will throw into these industrial-scale digital suction-pumps.

And don't imagine that supposedly responsible professionals wouldn't chuck your data into LLM tools. Lawyers use them, doctors use them, psychics (obviously) use them, employers (obviously) use them.

The problem is, however far up the chain of trust you go, you still encounter the same basic human imperative to appear productive whilst doing as little work as possible. That's the primary role of the "AI assistant". And all of this goes on behind closed doors. So if a less-than-bright lawyer feeds an entire case, witness data, etc., into CrapGPT, the assumption is that no one will ever find out.

The difference with the data fed into "AI assistants" is its completeness. With search engines, it's just short queries. With "AI assistants" it's entire documents. Other people's entire documents. Entire private works. Other people's entire private works. There have been other tech tools that take in large and complete third-party submissions containing other people's sensitive data. Translators, for example. But translators rely on the presence of a language barrier. LLM tools rely only on laziness and stupidity, and those are infinitely more common. Worse, LLM tools are self-empowering. Unlike translators, they feed off their input. They grow, and gain power, of their own accord.

LLM assistants are a supervillain's dream. A corporate stalker's charter. A corporate thief's charter. The single most evil means that Surveillance Valley has yet found to gather data without consent and flat-out steal intellectual property.

"AI assistants" have crowdsourced data-mining and, even for the most vehement privacy warriors, there is no escape. The surveillance machine can now access a new wall of data from people who took every step to fortress themselves in, but who cannot fortress the pathologically delegative instincts of the crowd.

Needless to say, any brand claiming an ethical stance should reject these consent-violating, planet-busting, job-destroying, Web-lobotomising lumps of supervillainware at first base. And yet the "eth tech" brigade have leapt onto the "AI" bandwagon with unstoppable enthusiasm. There's only one way we can make that make sense...


COMMERCIALISED "ETHICAL TECH" IS SIMPLY FRAUD FROM TOP TO BOTTOM

The commercial "eth tech" genre is a tidal wave of fraud. If the "privacy" circus had no intention of mining or selling your data, they would never build their products and services the way they do. "Alternative" tech products are built precisely the way Surveillance Valley products are built. They're exactly the same. They don't look any different, they don't feel any different.

Proton Mail is Outlook. Or Gmail. I won't even say delete as appropriate. It's both. If Microsoft and Google do it, Proton does it. The whole interface is bursting with loggables and scroll-responsives. They claim they don't log (and they would say that, wouldn't they?), but why bog down your interface with all that junk if it's serving no purpose? The service worker pumps ten tons of crap into your computer, which you're never asked to approve or even told about. The UI won't work without JavaScript - which means it's impossible to access with a genuinely independent browser. And don't tell me you can't run an email service without JS. They all used to do it, and Yahoo still does.

Seriously, if you were a tech provider who genuinely gave a quarter of a shit about user privacy, the first thing you'd do is build a noscript default. But these "eth tech" solutions have nothing at all to do with privacy. Their "solutions" are invariably designed to sit across an arterial data flow. If it doesn't sit across an arterial data flow, they don't make the product. Look at their catalogues. Proton Mail, Proton VPN, Proton Drive, Proton Calendar, and now Proton AI... Spot the pattern? Every single offering is born to extract data. If it does not extract data, it is not in the catalogue.

Furthermore, every single offering is a straight copy of a Big Tech template.

Challengers to evil do not copy the people they're fighting. Superman did not oppose Lex Luthor by creating his own earthquakes. If someone dressed as Superman is creating earthquakes, you expect movie-goers to work out that Lex Luthor is behind it. But when someone dressed as Shoshana Zuboff is creating literal replicas of Big Tech data mines, the public somehow believe it's all about privacy. They don't sus that Big Tech is behind it at all.

I regularly read comments about the "eth tech" brands on Mastodon, and it's very depressing. Broadly, just a wall of people who take everything "tech blogs" say at face value, and have no concept whatsoever of how cybertech marketing works. So today they're outraged at Mozilla because it axed its Google-funded, Google-serving "advocacy unit", whereas tomorrow they'll applaud Mozilla because it "hopes to get Firefox back on track". It's a world in which no emotion or take survives a trip to bed, and tomorrow is always another day. And opinion by lunchtime will entirely depend on whatever lump of banal shillshit TechCrunch is paid by elite Surveillance Valley enterprises to thrust into the public domain.

Even after their darling "eth tech" brands force "AI assistants" into their wares, comprehension does not dawn. It's like:

"But they're an ethical privacy brand, so I really don't understand why they've wedged an environment-destroying, misinformation-spreading, unemployment-generating, consent-violating data-soak into their software."

There is surely a point at which the weight of evidence kicks in, and it finally dawns on the community that commercial "eth tech" is simply the 21st-century equivalent of Derek Trotter, and much like the elite brands it seeks to emulate, every yarn it spins is a lie. Then the whole picture makes sense. No longer need a confused Mastodonian scratch his/her head when Mozilla screws over its userbase for the fourteen-squillionth time. All confusion is dispelled. It's just a con artist doing what con artists do.

Indeed, Mastodon itself looks set to roll out an "AI" revelation in the near future, and one wonders what its head-scratchers will make of that. Yes, Mastodon appointed an AI defence lawyer to its board this year - and you don't do that unless you have some pretty clear future intentions.

Maybe that will be the straw that breaks the camel's back. Maybe that will be the point at which all Mastodonians cry, in unison:

"OMG! Now I see it! Mastodon is really just Twitter/X - except it's been covertly sold to Google rather than histrionically palmed off on Musky-boy, and the public are stupid enough to pay the hosting fees. Beyond that, it's just the same old status-shaming pleb-baiting mechanism that's allowed the elite to manage and manipulate our opinions for the past decade and a half."

Or maybe that's just my optimism getting the better of me.