The TRUE Reason Why Google Won't Integrate ChatGPT
If anyone knows the value of information-laundering, it's Google. And you cannot launder information through a chatbot alone.
Prologue: frictionless query and response (à la ChatGPT) appears to almost guarantee the demise of Google Search in its current form. So why is Google refusing to integrate an advanced chatbot system into its flagship product?
To answer that question, we have to consider what happens if it does...
Early this month, Microsoft ended a period of speculation, announcing that it would integrate OpenAI's frictionless ChatGPT Q&A system into its search engine, Bing. Given Microsoft's existing connection with OpenAI, this came as no surprise. Google's reaction, however, did prove a little more perplexing for some. For now, Google will hold back on the integration of a rival system.
Publicly, the Big G's rationale is that AI-driven Q&A is not reliable, and merging it with a resource that the public expect to give them gospel truth could result in reputational damage to the corporation.
Okay, so it's certainly true that ChatGPT talks a lot of crap. And there probably is some hopeful dream at the back of Google's mind, in which ChatGPT blights Microsoft with a deluge of woefully bad publicity, reinforcing Google's position as the Good Samaritan of online discovery. But is that just a dream, or can Google see something Microsoft can't?
IMPLEMENTATION
Even within Bing, ChatGPT is not going to simply replace the existing websearch system. We know that the bot's computing costs are high, and that any business using it will need to minimise those costs whilst somehow retaining a revenue path. So we're really talking about Microsoft augmenting some search types with bot-generated summaries. For the sake of expediency and computing economy, a large number of summaries are likely to be saved, query-matched, and then served from a cache rather than "live" from the bot.
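As a back-of-envelope illustration of why the caching matters, here's a minimal sketch of a cache-first answer flow. Every name in it is invented; nothing here reflects any published Bing or OpenAI design.

```python
# Hypothetical sketch of cache-first summary serving - the names and the
# normalisation scheme are invented, purely to illustrate the economics.
import hashlib

summary_cache = {}  # cache key -> previously generated summary

def normalise(query: str) -> str:
    # Fold trivially different queries ("Best Laptops ", "best laptops")
    # onto one key, so they share a single stored summary.
    return " ".join(query.lower().split())

def cache_key(query: str) -> str:
    return hashlib.sha256(normalise(query).encode("utf-8")).hexdigest()

def get_summary(query: str, generate_live) -> str:
    key = cache_key(query)
    if key not in summary_cache:
        summary_cache[key] = generate_live(query)  # expensive: full model call
    return summary_cache[key]                      # cheap: no inference at all
```

Popular queries repeat endlessly, so even a scheme this crude would let a handful of live generations serve millions of searches.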
But more fundamentally, let's remember that search is a commercial, ad-monetised environment. Microsoft wants to make lots of money, and the full, personal assistant version of ChatGPT is way too helpful for that. So there may be a 'honeymoon' period, during which the bot operates at or near the extent of its capability - just to drive a buzz and kickstart an exodus from Google. But longer term, ChatGPT will have to be restrained in this use case. Ultimately, it won't give you for free what an advertiser will pay Microsoft to sell.
One of Microsoft's biggest fears will be that of inadvertently training the public NOT to click or tap links. This could prove catastrophic. Bot-generated answers produce no referral link, and can be self-sufficient enough not to need one.
The problem is, we, the public, are heavily habit-driven. If we get used to a linkless environment, we will stop clicking links. Which means we'll no longer click ads. The CTR (click-through rate) will thus plunge, and so will the search provider's revenue. Ad-monetised search engines absolutely must maintain some semblance of click-through culture - keep alive the habit of clicking through. Because once that habit dies, the search engine dies with it.
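The arithmetic is unforgiving. As a purely illustrative calculation - the figures below are invented, not anyone's real numbers:

```python
# Invented figures, purely to show how linearly revenue tracks CTR.
searches_per_day = 1_000_000_000   # hypothetical query volume
ad_ctr = 0.03                      # 3% of searches end in an ad click
avg_cost_per_click = 0.50          # hypothetical average advertiser payment ($)

revenue = searches_per_day * ad_ctr * avg_cost_per_click
print(f"${revenue:,.0f} per day")                  # $15,000,000 per day

# Let the click habit halve, and revenue halves with it:
halved = searches_per_day * (ad_ctr / 2) * avg_cost_per_click
print(f"${halved:,.0f} per day")                   # $7,500,000 per day
```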
So things are already looking quite risky, but Bing is a search underdog with everything to gain and very little to lose. Google, meanwhile, has an almost monopoly-level market share in websearch, and a different set of dependencies.
THE TROUBLE WITH ARTIFICIAL INTELLIGENCE
Google's greatest dependency is the production of openly-accessible, human-made content. Or at least content that looks like it's human-made. This is a dependency that artificial intelligence cannot yet put to bed.
AI has brought with it a fascinating but dark irony. It needs humans to feed it, but it puts the humans who feed it out of business. Which means that humans will not willingly feed it. Up until 2022, the tech industry had the perfect workaround for this:
"Do not tell publishers that you're going to feed their work to AI routines which are so powerful that the consumer will no longer need to visit the publishers' domains."
But in 2022, the introduction of self-sufficient AI systems finally let the cat out of the bag, and we saw the dramatic consequences. Uproar, panic-gating of content, etc.
Let's not be under any illusions. Google loves AI. In Google's ideal vision of the future, all content is manufactured by bots, and the public mindlessly suck on a teat of automated display-unit-fodder and affiliate spam forever more. But Google knows that even if the public will actually tolerate bot-generated content, they must at least perceive it to be the work of a human. A human who, critically, DOES NOT WORK FOR GOOGLE. And that's down to an age-old property of the advertising business, best described as information-laundering.
INFORMATION LAUNDERING
Information laundering is the process of proxying a message via a third party who is better trusted and/or liked by the public than the source of the message. The goal is to make it appear that the third party is the source of the message, and that the actual source had nothing to do with it. The entire advertising business is built on this principle.
When you watch a TV ad, the raw source of its sentiment (i.e. "buy this shit or else") is most likely a corporation. The corporation's "top talent" comprises a group of grasping thugs whose only interest is money and whose primary life-skills are intimidation and losing their temper.
These people could not sell a product on TV without frightening pensioners and being placed on an anti-social behaviour order. So they hire a marketing team to knock their original "buy this shit or else" message into rather more friendly shape.
The message may now even be discernibly creative, but the marketing team are not impartial and they look about as trustworthy as Mickey Pearce. They can't go on screen to deliver the message themselves. Thus, they hire actors to pose as satisfied customers, or hire a celebrity to issue a trusted endorsement, or hire an animator to make a cute little creature who wouldn't even know how to lie. The message is now proxied via a voice that the public perceives as 'clean'. The information has been laundered.
Television has strict rules about disclosure and an ad is always attributed to its source, despite the proxy creating a veneer of impartiality. But online, things get a lot more sneaky. Blogs present paid adverts as editorial content. Brands pay influencers a premium to shill their crap with zero disclosure. Tech companies manufacture NGOs to fence their propaganda. It's a free-for-all.
But elite cybertech has a particularly strong toolkit when it comes to laundering information. If Twitter suddenly wants you to believe that "chemtrails" are the number one threat to humanity, it doesn't need to hire anyone, bribe anyone or create a campaign. It can just start to algorithmically prioritise people who already perpetrate that message.
And because of the way the platform works, as soon as those people become highly visible, gain followers and gather credibility through popularity badging, other people will leap onto that publicity bandwagon whether they believe the ideology or not.
Some people will do or say anything for attention. Add in the fact that attention converts into money, and shilling a conspiracy theory can even become a career. Importantly, the message in this fictional scenario came from Twitter - not from the attention-chasers who jumped the cart. But the message was perceived by the wider public to be coming from the cart-hoppers. Twitter's message has been laundered through a public proxy.
It's the same with search engines. No need to mount a campaign. Just re-orient the algo, and suddenly there's a surge in completely independent writers pumping conspiratorial pigwash onto the info superhighway at high velocity. We KNOW that SEO (search engine optimisation) marketers will write about - or get a bot to write about - whatever crap they know is ranking on search engines. The subject matter is immaterial to them. So whilst I wouldn't expect Google to be overwhelmed with a sudden desire to turn every SEO drone into David Icke, it has the power to do so if it wishes.
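Mechanically, a re-orientation like that can be as small as one extra term in a scoring function. A toy sketch - the weights and the topic test are invented for illustration, not anyone's actual ranking code:

```python
# Toy ranking function: one hidden boost term is all it takes to surge
# a chosen topic. Weights and topic detector are invented for illustration.
def on_message(doc: dict) -> bool:
    return "chemtrails" in doc["text"].lower()   # stand-in topic detector

def score(doc: dict) -> float:
    base = doc["relevance"]                      # the ordinary quality signal
    boost = 2.0 if on_message(doc) else 0.0      # the quiet thumb on the scale
    return base + boost

docs = [
    {"text": "Sober aviation science", "relevance": 0.9},
    {"text": "Chemtrails are the REAL threat!", "relevance": 0.4},
]
for doc in sorted(docs, key=score, reverse=True):
    print(doc["text"])   # the boosted post now outranks the better one
```

No campaign, no payments, no new content. The platform just changes what it surfaces, and the attention-chasers do the rest.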
Which brings us to the things Google does want people to write. The... ahem, "reviews" (and particularly listicles) for shitty products that advertisers want to sell. Those "reviews" are everywhere, propelled by Google's ranking bias. Why do listicles rank so highly on Google? Because for one, they appear unbiased to the public - which is good information laundering. And for two, Google has a much better chance of serving a precisely relevant display ad on a "review" of twenty products than it does on a "review" of one.
But no one would write this crap if Google didn't rank it and send the publisher an absolute tidal wave of monetisable traffic. No one. When the creative writers' workshop asks you to choose your dream project, you do not commence a preliminary draft of The Forty-Five Best Cabbageless Diet Plans of January 2023 (With Pictures). If you did, they'd send for a psychiatrist.
So what's really going on here is that Google is using the immense power of its referral potential to sculpt bloggers' behaviour. To drive them to produce the content that IT would produce if they were not there.
What Google is doing, is laundering information. Using third parties to present its own rhetoric. And if you've read up on modern SEO, you'll know just how focused this behavioural sculpting is. Google doesn't just rank things that serve its needs and then hope bloggers notice. SEO has an advisory wing, and the advisor-in-chief is Google. Google literally tells bloggers how to write, what to write, and what not to write. Presently, Microsoft can't do that, because Bing doesn't have the market share for the advice to matter.
BOT NO LAUNDER
The reason Google's display advertising system works is that it never looks to the public as though Google is disseminating the message. Even though Google absolutely is telling SEO-merchants what to produce and how to produce it, and then placing it where the public will find it, the wider public see Google only as the impartial stepping stone that found what they asked it to find. Even though it no longer really does find what they ask it to find. The results are still just about relevant enough to the queries to convince most people that the search engine is trying its best.
But now let's replace all those third-party articles, listicles and other such nuggets of consumerist hell, with a frictionless chatbot on Google Search itself.
How is the information laundered?
It isn't. There's no longer any third party to act as Google's information-laundering stooge. No longer anyone for Google to blame when the public don't like what they find. And this gives a search engine problems beyond just convincing people to buy.
Internal, bot-generated search results squarely place the blame for inaccuracies on the search engine itself, and potentially invite accusations of foul play. What is to stop Google or Bing from programming an AI bot with wall-to-wall propaganda? Nothing. Thus, the public will quickly conclude that bot-generated answers which don't meet their ideological expectations are calculated attempts, by the search provider, to corrupt democracy.
Attributing the search results to linked sources protects the search provider from this assumption. If you're referred to Wikipedia via Google, and Wikipedia is wrong, Google can blame Wikipedia. And even though Google surfaced a flaky Wikipedia entry above the correct, expert information on a smaller site, people will generally accept that Wikipedia was the problem - not Google.
Bot-generated answers threaten to be deeply problematic within an ad-driven environment. If the answer is self-sufficient, it doesn't lead to a sale at all. But if the bot starts saying: "You can get a product to solve this at Amazon", the search provider stands accused of rigging the outcome, preferencing a monopoly, destroying small retailers, etc. It's much safer if the search engine refers the user to a third party, and the third party recommends Amazon.
To reiterate, search engines absolutely do rig their outcomes. But at present they do so in a way that implicates someone else. Like the affiliate marketer who wrote the top result. The tech blog that publishes Silicon Valley press releases verbatim. The NGO that masquerades as a public-interest group. That NGO is co-opted by Google and is therefore simply spinning Google's own yarn. But 99%-plus of the public will not smell that rat. They trust the information because it's coming from a "nonprofit" NGO. Almost every message Google ever disseminates to the public goes via a third party. If anyone knows the value of information-laundering, it's Google. And you cannot launder information through a chatbot alone. A third-party site with a chatbot, yes. But not a chatbot alone.
WHAT ELSE DOES GOOGLE HAVE TO LOSE?
Google relies on third-party content outlets to carry its display ads - its primary income source. But that's not all. It also relies on those same outlets for other, less obvious benefits, such as data collection.
Google has a data-gathering presence on most pages where content is published. And let's not imagine the company can't exploit that data for profit outside of advertising. It can feed intelligence services and otherwise serve the authorities. This can be both financially lucrative and useful as leverage when lobbying for self-serving laws. It can backdoor-bargain with other huge players, such as Facebook. It can monitor commercial threats and move to mitigate them before they become unmanageable. Knowledge is power. So lose the knowledge, and you lose the power.
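What that data-gathering presence amounts to, mechanically, is mundane but potent. A toy sketch - every name and field below is invented for illustration:

```python
# Toy sketch: each page carrying the network's tag reports one
# (user, site, page) hit. Names and fields are invented for illustration.
from collections import defaultdict

profiles = defaultdict(list)   # user id -> list of (site, page) visits

def tracker_hit(user_id: str, site: str, page: str) -> None:
    # In reality this fires from an embedded script on the publisher's page.
    profiles[user_id].append((site, page))

tracker_hit("u42", "recipes.example", "/cabbageless-diets")
tracker_hit("u42", "news.example", "/politics")
tracker_hit("u42", "shop.example", "/checkout")
print(profiles["u42"])   # a cross-site browsing profile, owned by the network
```

Multiply that by billions of pageviews across millions of sites, and that's the knowledgebase at stake.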
And Google's deep knowledgebase does indeed collapse if the search engine adopts a zero-click results mechanism and stops sending traffic out to the Web. Google has so much spyware and monetisationware on innumerable third-party websites that it could be said to have a bigger stake in those sites than the actual owners. Ceasing the flow of traffic to those sites in favour of a chatbot on one central domain would be literal self-destruction.
Additionally, without the prospect of search referrals, Google would lose publishers' goodwill, in the process losing its power to sculpt the Web. Most dramatically of all, it would shut itself out of what is still a vital ecosystem of openly-accessible, human-made content. No one is going to produce content for a search engine that gives them no traffic and simply scrapes their work to feed an AI bot. No one is going to feed something that puts them out of business.
Then there's the matter of propaganda. Google relies on content outlets to control its public image. And the corp is unique in the scale of information laundering that it uses to prop up its brand. It directly and financially co-opts a vast array of platforms, blogs, websites, media outlets, educational resources, NGOs and cybertech "nonprofits" to fill the visible landscape with Goog-approved messaging.
Most of these co-opts do not directly applaud Google. Some of them even criticise Google. Or, as I previously described it - indulge in a regular boo-hiss pantomime of deliberately impotent badmouthing as a means to conceal their allegiance. But collectively they get the vital messaging out - detached from Google itself and suitably laundered. Even if they're only pumping vapid nothingness into prime webspace, they're preventing that prime webspace from harbouring messages that would damage Google.
And this is before you even consider the myriad site admins who bite their tongue on Google's endless misdemeanours out of fear that the ad giant may retaliate with a monetisation ban.
If Google Search went full-on, zero-click ChatGPT, it would lose everything. Its revenue, its deep data, its propaganda mill, its reputation, its lobbying power... Just imagine all those cash-gagged and traffic-gagged parties suddenly having no more reason to be civil to Google. Imagine how the complexion of the Web would change if all those millions of fair weather friends - including the media and many of the biggest sites on the Internet - suddenly lost all reason to give Google an easy ride.
Even if Google only integrated the AI bot as an addendum, the reduction in referral traffic could quickly have a huge impact on publishers' goodwill. It could make the advertising model less viable and drive much more of the Web behind paywalls, where it's useless to Goog. It could stem the tide of new content, upon which all of Silicon Valley is relying to feed its AI bots, for the foreseeable future at least. It could start to breed bad publicity...
NO BRAINER
If Google wants to use ChatGPT, it has a choice. It can either pay to operate the AI mechanism on its own site, lose its priceless realm of information laundering, and get the blame for every duff result it serves.
Or it can use that same AI mechanism completely free of charge, via the huge array of external site admins that it already essentially controls, and which it knows will use ChatGPT to create their "content" anyway. Its information laundering regime remains intact, the public think they're getting human-made articles, and all the blame for ChatGPT's noob-ass floundering rests squarely with third parties. Hmmm, I wonder which of those options sounds better to Google at the current time?
Of course, if this were really about the reliability of ChatGPT, Google would be fighting to stop millions of site admins from adopting it. It's doing the opposite of that. It doesn't care, at all, that most of the sites it uses to launder its information will soon be populated by AI bots. So the truth is clear. This is not about reliability. This is about information laundering.