Since the trusted computing system produced its report on your computer using a sealed, separate processor that you couldn't directly interact with, any malicious code your computer was infected with would not be able to forge this attestation.
64/
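To make the mechanism concrete, here's a minimal, hypothetical sketch of how such an attestation scheme hangs together - not any real TPM's interface, just the shape of the idea. Everything here (the SealedProcessor class, the shared-secret verification) is invented for illustration; real designs use asymmetric keys and certificate chains:

```python
import hashlib
import hmac
import os

# Hypothetical sealed co-processor: holds a key the host OS can never read.
class SealedProcessor:
    def __init__(self):
        self._attestation_key = os.urandom(32)  # never leaves the chip

    def measure_and_quote(self, booted_software: bytes, nonce: bytes) -> tuple[bytes, bytes]:
        """Hash the software it actually observed and sign (measurement, nonce)."""
        measurement = hashlib.sha256(booted_software).digest()
        quote = hmac.new(self._attestation_key, measurement + nonce, hashlib.sha256).digest()
        return measurement, quote

    # In a real design the remote verifier holds only a public key; this toy
    # uses a shared secret purely to keep the sketch dependency-free.
    def verifier_key(self) -> bytes:
        return self._attestation_key


def verify_quote(key: bytes, measurement: bytes, nonce: bytes, quote: bytes) -> bool:
    expected = hmac.new(key, measurement + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, quote)


chip = SealedProcessor()
nonce = os.urandom(16)  # challenge sent by the remote service
measurement, quote = chip.measure_and_quote(b"honest-os-v1.0", nonce)

# Malware on the host can lie about what's installed, but without the sealed
# key it cannot produce a quote that verifies.
assert verify_quote(chip.verifier_key(), measurement, nonce, quote)
forged = hmac.new(b"attacker-guess", measurement + nonce, hashlib.sha256).digest()
assert not verify_quote(chip.verifier_key(), measurement, nonce, forged)
```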
Seth proposed an answer to this: "owner override," a hardware switch that would allow you to force your computer to lie on your behalf, when that was beneficial to you, for example, by insisting that you were using Microsoft Word to open a document when you were really using Apple Pages:
https://web.archive.org/web/20021004125515/http://vitanuova.loyalty.org/2002-07-05.html
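Seth's post lays out the policy; as a purely illustrative sketch (nothing he actually specified), you can picture the override as one extra branch in the attestation routine, gated on a physical switch that only the machine's owner can reach. The function names and claims below are invented:

```python
import hashlib
import hmac
import os

SEALED_KEY = os.urandom(32)   # held inside the chip, unreadable by the OS

def quote(reported_software: bytes, nonce: bytes) -> bytes:
    """What the sealed chip signs and sends to the remote service."""
    measurement = hashlib.sha256(reported_software).digest()
    return hmac.new(SEALED_KEY, measurement + nonce, hashlib.sha256).digest()

def attest(actually_running: bytes, nonce: bytes,
           owner_override: bool = False, owner_claim: bytes = b"") -> bytes:
    # The override is a physical switch: software (and hence malware) can't
    # flip it, but the person who owns the machine can.
    if owner_override:
        return quote(owner_claim, nonce)      # the computer lies *for* its owner
    return quote(actually_running, nonce)     # the default: report the truth

nonce = os.urandom(16)
honest = attest(b"Apple Pages", nonce)
overridden = attest(b"Apple Pages", nonce,
                    owner_override=True, owner_claim=b"Microsoft Word")
# Both quotes verify identically on the remote end; only the owner knows
# which story the chip told.
```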
Seth wasn't naive. He knew that such a system could be exploited by scammers and used to harm users.
But Seth calculated - correctly! - that the risks of having a key to let yourself out of the walled garden were less than being stuck in a walled garden where some corporate executive got to decide whether and when you could leave.
66/
Tech executives never stopped questing after a way to turn your user agent from a fiduciary into a traitor.
Last year, Google toyed with the idea of adding remote attestation to web browsers, which would let services refuse to interact with you if they thought you were using an ad blocker:
https://pluralistic.net/2023/08/02/self-incrimination/#wei-bai-bai
67/
The reasoning for this was incredible: by adding remote attestation to browsers, they'd be creating "feature parity" with apps.
68/
That is, they'd be making it as practical for your browser to betray you as it is for your apps to do so (note that this is the same justification that the W3C gave for creating EME, the treacherous user agent in your browser - "streaming services won't allow you to access movies with your browser unless your browser is as enshittifiable and authoritarian as an app").
69/
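For a sense of what remote attestation in the browser would have meant in practice, here's a hypothetical sketch of the service side. The header name, token format and verifier below are all invented for illustration; they are not Google's actual proposal or API:

```python
# A web service that only answers requests whose "environment integrity"
# token vouches for an approved browser build - e.g. one attested not to be
# running an ad blocker.
APPROVED_ENVIRONMENTS = {"official-browser-v125-no-extensions"}

def verify_attestation_token(token: str) -> str:
    # Stand-in for checking a signature chained back to the device's sealed
    # chip; this toy just treats the token itself as the attested environment.
    return token

def handle_request(headers: dict[str, str]) -> tuple[int, str]:
    token = headers.get("X-Environment-Integrity", "")
    environment = verify_attestation_token(token)
    if environment not in APPROVED_ENVIRONMENTS:
        return 403, "Your browser could not prove it is unmodified."
    return 200, "Here is the page (and all of its ads)."

print(handle_request({"X-Environment-Integrity": "official-browser-v125-no-extensions"}))
print(handle_request({"X-Environment-Integrity": "firefox-with-ublock"}))
```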
This is broadly true of any system intended to control users at scale - beyond a certain point, additional security measures are trivially surmounted hurdles for dedicated bad actors and nearly insurmountable hurdles for their victims:
https://pluralistic.net/2022/08/07/como-is-infosec/
58/
At this point, we've had a couple of decades' worth of experience with technological "walled gardens" in which corporate executives get to override their users' decisions about how the system should work, even when that means reaching into the users' own computers and compelling them to thwart the users' desires.
59/
The record is inarguable: while companies often use those walls to lock bad guys *out* of the system, they also use the walls to lock their users *in*, so that they'll be easy pickings for the tech company that owns the system:
https://pluralistic.net/2023/02/05/battery-vampire/#drained
This is neatly predicted by enshittification's theory of constraints: when a company *can* override your choices, it will be irresistibly tempted to do so for its own benefit, and to your detriment.
60/
What's more, the mere *possibility* that you *can* override the way the system works acts as a disciplining force on corporate executives, forcing them to reckon with your priorities even when these are counter to their shareholders' interests. If Facebook is genuinely worried that an "Unfollow Everything" script will break its servers, it can solve that by giving users an "unfollow everything" button of its own design.
61/
But so long as Facebook can sue anyone who makes an "Unfollow Everything" tool, it has no reason to give its users such a button, because that button would give users more control over their Facebook experience, including the controls needed to use Facebook *less*.
62/
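For what it's worth, the "it'll break our servers" worry is an engineering choice, not a law of nature. Here's a hypothetical sketch of what an "Unfollow Everything"-style user agent does; the client object and its methods are invented stand-ins, not any real Facebook API:

```python
import time

def unfollow_everything(client, delay_seconds: float = 2.0) -> int:
    """Unfollow every friend, page and group, politely rate-limited so the
    tool places less load on the servers than ordinary scrolling does."""
    count = 0
    for followee in client.list_followed():   # assumed to return an iterable
        client.unfollow(followee)
        count += 1
        time.sleep(delay_seconds)              # throttle: no server-breaking burst
    return count

# Usage (hypothetical): unfollow_everything(my_logged_in_client)
```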
Now, it's true that every one of us makes lapses in judgment that put us, and people connected to us, at risk (my own parents gave their genomes to the pseudoscience genetic surveillance company 23andme, which means the company has *my* genome, too). A true information fiduciary shouldn't automatically deliver everything the user asks for. When the agent perceives that the user is about to put themselves in harm's way, it *should* throw up a roadblock and explain the risks.
51/
But the system should *also* let the user override it.
This is a contentious statement in information security circles. Users can be "socially engineered" (tricked), and even the most sophisticated users are vulnerable to this:
https://pluralistic.net/2024/02/05/cyber-dunning-kruger/#swiss-cheese-security
52/
This is absolutely true. As you read these words, all over the world, vulnerable people are being tricked into speaking the very specific set of directives that cause a suspicious bank teller to authorize a transfer or cash withdrawal that will result in their life's savings being stolen by a scammer:
https://www.thecut.com/article/amazon-scam-call-ftc-arrest-warrants.html
54/
The only way to be *certain* a user won't be tricked into taking a course of action is to forbid that course of action *under any circumstances.* If there is *any* means by which a user can flip the "are you very sure?" circuit-breaker back on, then the user can be tricked into using that means.
53/
Beyond a certain point, making it harder for bank depositors to harm themselves creates a world in which people who *aren't* being scammed find it nearly impossible to draw out a lot of cash for an emergency and where scam artists know *exactly* how to manage the trick.
56/
We keep making it harder for bank customers to make large transfers, but so long as it is *possible* to make such a transfer, the scammers have the means, motive and opportunity to discover how the process works, and they will go on to trick their victims into invoking that process.
55/
After all, non-scammers only rarely experience emergencies and thus have no opportunity to become practiced in navigating all the anti-fraud checks, while the fraudster gets to run through them several times per day, until they know them even better than the bank staff do.
57/
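Putting the last several posts together, here's a minimal, hypothetical sketch of the pattern being argued for: an agent that throws up a roadblock and explains the risk, but leaves the final "are you very sure?" switch in the user's hands. All names below are invented, and the input() prompt is just a stand-in for whatever UI asks the question:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    summary: str
    severity: str   # e.g. "low", "high"

def act_for_user(action, risks: list[Risk], confirm) -> bool:
    """A fiduciary-agent pattern: warn and slow down on risky requests, but
    leave the final decision with the user. `confirm` is the circuit-breaker
    the user can always flip back on."""
    high = [r for r in risks if r.severity == "high"]
    if high:
        warning = "; ".join(r.summary for r in high)
        if not confirm(f"This looks risky ({warning}). Proceed anyway?"):
            return False        # blocked, but by the user's own choice
    action()
    return True

# Example: the agent warns, the user insists, the agent complies.
act_for_user(
    action=lambda: print("Transferring savings..."),
    risks=[Risk("matches known wire-fraud pattern", "high")],
    confirm=lambda prompt: input(prompt + " [y/N] ").strip().lower() == "y",
)
```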
When called on the absurdity of "protecting" users by spying on them against their will, Big Tech executives simply shake their heads and say, "You just can't understand the burdens of running a service with hundreds of millions or billions of users, and if I even tried to explain these issues to you, I would divulge secrets that I'm legally and ethically bound to keep.
49/
"And even if I *could* tell you, you wouldn't understand, because anyone who doesn't work for a Big Tech company is a naive dolt who can't be trusted to understand how the world works (much like our users)."
Not coincidentally, this is also literally the same argument the NSA makes in support of mass surveillance, and there's a very useful name for it: *scalesplaining*.
50/
This is literally the *identical* argument the NSA made in support of its "metadata" mass-surveillance program: "Reading your messages might violate your privacy, but *watching* your messages doesn't."
This is obvious nonsense, so its proponents need an equally obvious, intellectually dishonest way to defend it.
48/
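To see why "watching without reading" is still surveillance, consider a toy example with made-up records: even with every message body withheld, the metadata alone - who you contacted and when - tells the story:

```python
from collections import Counter

# Message *metadata* only - sender, recipient, timestamp - no bodies read.
metadata = [
    ("alice", "oncology-clinic", "2024-03-01T09:14"),
    ("alice", "oncology-clinic", "2024-03-03T16:02"),
    ("alice", "estate-lawyer",   "2024-03-04T11:30"),
    ("alice", "mom",             "2024-03-04T11:55"),
]

# "Watching" the messages without "reading" them still tells the watcher
# exactly what is going on in Alice's life.
contacts = Counter(recipient for _, recipient, _ in metadata)
print(contacts.most_common())
# [('oncology-clinic', 2), ('estate-lawyer', 1), ('mom', 1)]
```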