The Weaponisation of Web Standards
Unrestrained by web standards arbiters, cybertech giants use over-complexity as a weapon, creating lock-ins for consumers, barriers to entry for all but the very richest competitors, and a global data checkpoint around which no one can realistically build a bypass. And the brainwashing has been strong. Most of us consider it totally normal when cyber giants turn the things we wrote into things we can't even read...
Are you a long-time publisher of online content?...
What do you do when you have a raft of old, disused material strewn across various platforms, and you want to bring it under your control? To pool it. To assess it. To cherry-pick it for use in future projects. To make it genuinely portable. Or just to meaningfully possess something that is, after all, yours...
The answer is pretty simple. Download it, store it to your own computer and preserve it as a local archive. Pretty simple, that is, until you try doing it...
GRABWARE: ENTRANCE WIDE OPEN, EXIT WIDE SHUT
Over the past few years, Surveillance Valley's gatekeeping of content has become noticeably more aggressive. That's saying something, given that even a decade ago Big Tech's gatekeeping was measurably fierce. If you fancied downloading your WordPress.com blog back then, you found your way to the export function, made your selections, hit the download button, and the download began.
All you got was an XML file, which was of no use whatsoever to any non-techie unless they ran straight back to some other blob of WordPress grabware and re-uploaded it. But at least the download button started a download.
Today, the download button just sends you a totally unnecessary email, with a separate link, which you then have to run back to WordPress.com and enter (not that they see you as a lab-rat or anything). At which point the system decides whether or not you're allowed to receive your own work - AS A VIRTUALLY ILLEGIBLE WALL OF INTERMIXED CODE AND TEXT SALAD.
And even if you can fathom the XML dump, you get no portability. The download contains no media. That stays anchored on WordPress.com, so even after you "move", you're still chained in. You're still prone to losing work if the original platform should fold. Indeed, the file doesn't even contain all of the text-formatting information. For example, if you want to preserve the paragraph divisions for Web display outside of WordPress, you'll have to run each post through a Markdown routine. And WordPress itself acknowledges in a comment inside the XML...
"This file is not intended to serve as a complete backup of your site."
Oh. Oh right. See, I wasn't expecting that, because the process itself is clearly labelled as a site "export", and specifically provides a selection for, quote:
"All content"
Only late-stage capitalism could reconcile the terms "all" and "not complete". But without getting into an argument about semantics, suffice it to say that the surveillance machine is not letting you download your site or blog. It's pretending to.
The platform hopes we won't understand the XML file, or indeed ever dare to open it - and we'll thus remain unaware that it isn't actually an export at all. The platform hopes we'll blame our "behind-the-curve understanding of technology" for what is really a deliberate obstruction of our access to the work we uploaded.
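And if you do dare to fathom the dump, here's roughly what it takes just to get readable posts back out of it. This is a minimal Python sketch, not a turnkey tool: the file names are assumptions, and so is the wp namespace version, which varies between exports.

```python
# Minimal sketch: pull readable posts out of a WordPress WXR "export".
# Assumes a file named wordpress-export.xml and wp namespace version
# 1.2 -- older exports declare 1.0 or 1.1, so check your own file.
import xml.etree.ElementTree as ET
from pathlib import Path

NS = {
    "content": "http://purl.org/rss/1.0/modules/content/",
    "wp": "http://wordpress.org/export/1.2/",
}

tree = ET.parse("wordpress-export.xml")
out_dir = Path("recovered_posts")
out_dir.mkdir(exist_ok=True)

for i, item in enumerate(tree.getroot().iter("item")):
    if item.findtext("wp:post_type", default="", namespaces=NS) != "post":
        continue  # skip attachments, pages, nav menu items, etc.
    title = item.findtext("title", default="(untitled)")
    body = item.findtext("content:encoded", default="", namespaces=NS)
    # WordPress stores paragraph divisions as bare double newlines, so
    # they vanish outside WordPress. Reinstate them as <p> tags:
    paragraphs = [p.strip() for p in body.split("\n\n") if p.strip()]
    html = "\n".join(f"<p>{p}</p>" for p in paragraphs)
    (out_dir / f"post_{i:04d}.html").write_text(
        f"<h1>{title}</h1>\n{html}", encoding="utf-8")
```

And that still gets you none of your media, because the media was never in the file to begin with.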
Okay, so what about doing things the hard way? Maybe you can just go page by page, downloading each individual component of your WordPress blog one after the other? And store it all in a folder? Well, you can, if you have time. And in the past that did in fact work.
But more recently, WP has sabotaged page downloads with completely needless crossorigin attributes on key page dependencies. This means that after download, the pages won't display properly on your local drive unless you re-code them. The pages are also pumped full of surveillanceware that you were never told about, so even if you do wipe out the crossorigin block, the downloaded versions will still try to make calls to Big Brother every time you open them. Even if they're on your own computer.
For the creator, the overwhelmingly likely resort is: give in and leave all the content siloed on the platform(s). I did this myself for years, before Python and RegEx gave me a route to liberation. The problem is, most people don't have the time or willpower to learn Python and RegEx. I only did so out of absolute desperation to reclaim a pretty large amount of dormant content.
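For the curious, the liberation route looks something like this. A rough sketch only, and hedged accordingly: the folder name is a placeholder, and the tracker domains are illustrative examples rather than a complete inventory of what WordPress pages actually call.

```python
# Rough sketch: repair locally saved pages by stripping the crossorigin
# attributes that break them, and cutting out script tags whose opening
# tag references a known tracker domain. The domains listed here are
# examples for illustration, not an exhaustive list.
import re
from pathlib import Path

TRACKER_DOMAINS = ("stats.wp.com", "pixel.wp.com", "google-analytics.com")

def liberate(html: str) -> str:
    # Remove crossorigin attributes, with or without a value.
    html = re.sub(r'\s+crossorigin(="[^"]*")?', "", html)
    # Remove <script> blocks that load from a tracker domain.
    for domain in TRACKER_DOMAINS:
        pattern = r"<script\b[^>]*%s.*?</script>" % re.escape(domain)
        html = re.sub(pattern, "", html, flags=re.DOTALL | re.IGNORECASE)
    return html

for page in Path("saved_blog").rglob("*.html"):  # placeholder folder
    page.write_text(liberate(page.read_text(encoding="utf-8")),
                    encoding="utf-8")
```

Twenty-odd lines of code. Which is twenty-odd lines more than anyone should need to view their own work on their own computer.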
It's widely recognised that online platforms aim to lock the public in, but few people realise that most of the barriers are built from over-complexity rather than padlocked doors.
INTELLECTUAL PROPERTY AS A HOSTAGE
These sharp practices have now become so aggressive that there's actually a phrase commonly used to describe them. They're collectively referred to as "holding content hostage", and we've come to accept the tactics as part of online life. In fact, we're now so used to having our own content wielded as a tool to prevent us from closing online accounts, that people are actually predicting the length of time it will take for newer Silicon Valley platforms to start taking content hostage, as older Silicon Valley platforms have already done.
True, the wider public have begun to notice the ploy more since Hell-holes like Medium took a far more obvious approach and began physically walling off public access to unpaid authors' content.
That's taking the hostage concept to another level. Blocking the creator's own audience from accessing the work - and doing so with a padlock rather than a dramatically over-complicated retrieval process. And richer still, doing so as a means to capture the audience, drive them away from the creator who served them for free, and push them towards the platform's elite creators, who are essentially staff writers for a subscription-based online magazine. Outside of that elite, Medium's creators are seen purely as free publicity workers.
So it's not hard to see why the general public are now waking up to the idea of intellectual property as a hostage. But it remains the case that vastly more intellectual property has been imprisoned using over-complexity than has been imprisoned using hard locks.
One of the best ways to understand just how unnecessarily over-complicated the various platforms' "content archive" formats are is to download an archive from a platform that doesn't hold content hostage. And you've come to the right place. If you download your website here on Neocities, something unusual happens... You actually get your website. Nothing more, nothing less. You get exactly what you added, as you added it. No painstakingly gobbledygooked walls of text. No tripwires. No injected surveillanceware. Just a folder. And in the folder is your site. All of it. 100% portable.
This really highlights how easy migration can be, and consequently, how powerful a force over-complexity is in blocking creators from reclaiming their own property.
WordPress does, technically, make your website accessible for download. It just does so in a format it knows most people can't directly work with, and then it chops up the process to make it prohibitively time-consuming. The complexity of the process veils the fact that it's not realistically doable for the average person. Google's Blogger platform does the same thing, but uses its own, separate format for the XML file, which means you can't even upload, say, a WordPress "export" to Blogger.
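The incompatibility runs right down to the root of the file. A WordPress "export" is RSS 2.0 wearing WordPress-only extensions; a Blogger "export" is an Atom feed wearing Blogger-only extensions. You can tell the two apart without reading past the first tag. A quick sketch, with the file name as a placeholder:

```python
# Quick sketch: identify a blog "export" from its root element alone.
# Neither platform's importer will get past the other's first tag.
import xml.etree.ElementTree as ET

def sniff_export(path: str) -> str:
    root = ET.parse(path).getroot()
    if root.tag == "rss":
        return "WordPress WXR: RSS 2.0 plus WordPress-only extensions"
    if root.tag == "{http://www.w3.org/2005/Atom}feed":
        return "Blogger: an Atom feed plus Blogger-only extensions"
    return "unknown -- every platform rolls its own"

print(sniff_export("export.xml"))  # placeholder file name
```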
So it's just the tech industry exploiting incompatibility to stifle consumers' basic rights to their own property, then?
Yes. Exactly.
[Lieutenant Columbo voice]: "There's just one thing... The formatting of digital files is a web standard, right?... It's just that I was thinking: we have 'web standards' consortia to ensure that the Web is always universally compatible and easy to navigate. And I noticed that where it benefits corporations, these consortia are actually very powerful in asserting a standard. But where standardisation could protect consumers, and prevent monopoly, it seems everything is allowed to be as chaotic, incompatible and over-complicated as the existing cybertech powers desire..."
"WEB STANDARDS" ARE DOUBLE STANDARDS
Yes, Lieutenant. I knew we could rely on you to dig out the hidden anomaly in all this. "Web standards" are double standards. We have a two-tier Internet, in which any tech provider is allowed to obfuscate, over-complicate, mispurpose, de-standardise and even literally break anything they wish, as long as it disadvantages consumers and potential new market entrants, and not the established cybertech industry. AKA Silicon Valley.
The World Wide Web is now so objectionably broken that most of it cannot be used at all without running totally arbitrary programs, which only an access tool provided by a richer-than-government Surveillance Valley monopolist is able to interpret. "Web standards" are not a standardisation of the Web. They've achieved the exact opposite. They've allowed the Web to become so riddled with needless complexity, randomness, unpredictability and bloat, that it is now virtually impossible for anyone other than the world's very, very richest corporations to enter the market.
"Web standards" are being used as a weapon by technology companies - who either proxy or directly pay bribes to "web standards" consortia to pressure them on policy. And let's not fall into the trap of describing these palm-greasing exercises as "donations" or "support". They're bribes. The intention behind them is to influence decisions.
And often, the consortia are formed from the tech companies themselves. Look at this bullshit! Apple, Google and Microsoft clubbing together to define the standards for browser extensions. Incidentally, if you see Mozilla mentioned in the linked post and wonder why I omitted its name, it's because it would just be duplication. Mozilla is Google by proxy. Mozilla Firefox's current purpose is not to compete with Google Chrome. It's to prevent other browsers from competing with Google Chrome.
And you can see in this article that what Mozilla tells the public, and what Mozilla actually does in private, are two entirely different things. Publicly: grandstanding advocacy for decentralisation, fist in the air. Privately: creeping off to the W3C with a formal objection against it. Alongside Google, obviously. And Apple. Bear that in mind the next time Mozilla serves you a portion of virtue-signalling on a silver platter. None of it is real. An organisation 90% funded by Google does not have virtues.
But to get back to the plot, this kind of Surveillance Valley wrangling, lobbying and protesting around "web standards" is relentless. Meanwhile, the general public have no say in it at all. Ask the average person what the W3C is, and they won't have a clue. How can they have a say in something they don't even know exists?
They don't. "Web standards" are dictated by the tech industry, FOR the tech industry.
INDUSTRY-FACING: GOOD, PUBLIC-FACING: BAD
The dire lack of consistency between industry-facing "web standards" and public-facing "web standards" shows us that however well-intentioned Tim Berners-Lee's original brief for the concept was, it has failed to serve anyone but powerful corporations. Indeed, it has been weaponised by powerful corporations as a rubber stamp for their evil plans. As tacit absolution from culpability.
Tech companies know that if they can get a "web standard" past the W3C, then however Machiavellian or evil the result, they can always tell the public: "We were only observing official web standards". The fact that those tech companies stamped their feet, formed a gang, made threats and then squealed like pigs until they won those "standards" will not be televised. And if we, the public, ever figure in a "web standards" policy at all, it's only because one corporation used "the public interest" as a beating stick to get its own way over another.
Survey after survey has shown that the public are made to feel helpless by the corporate stalkers of Surveillance Valley. Made to feel powerless to escape them. This is the polar opposite of the completely decentralised, competitive, gatekeeperless haven of unfettered innovation that the World Wide Web originally was. The unshakeable monopoly which has since bedded itself in has been the result of a compound and intricate process. But it could not have happened if the people tasked with implementing and maintaining the standards had applied the same rigorous evaluation processes to public-facing protocols as they did to industry-facing protocols.
Unrestrained by "web standards" arbiters, cybertech giants use needless over-complexity as a weapon, creating lock-ins for consumers, barriers to entry for all but the very richest competitors, and a global data checkpoint around which no one can realistically build a bypass. Whatever a standards arbiter is meant to achieve, this is definitely not it.