Philosopher ‘pretenders to the throne’

This is a nice, short (7 min) introduction to Friedrich Hayek’s insights on emergent order. If you haven’t read Road to Serfdom (free downloads at the link), maybe this will nudge you to do so.

Order without intent: How spontaneous order built our world, from The IHS on Vimeo.

Allowing order without intent to flourish is how we might avoid the tyranny of good intentions.

Related, from Edward Snowden:

“The most unflattering thing is to realize just how naïve and credulous I was and how that could make me into a tool of systems that would use my skills for an act of global harm. The class of which I am a part of, the global technological community, was for the longest time apolitical. We have this history of thinking: ‘We’re going to make the world better.’”

The idea that “making the world better” is apolitical shows Snowden is still naive and credulous. The toolmakers of the global technological community may have good intentions. They may be motivated by thoughts of the benefits they are bringing to humanity. They may also be motivated by profit and ideology.

How a better world is constituted, in any case, is an ethical and moral question beyond the ken of their metadata, and in direct conflict with the ethical ‘principles’ demonstrated by their business models.

Who defines “better”? We have ample evidence Google/Facebook/Twitter aren’t up to the task.

“Making the world better” can be apolitical only in terms of each individual’s actions. It cannot be apolitical for giant corporations whose tools are designed to deceive users into acts of self harm: A system of fools.

Politics is the very essence of social media and the control of access to information.

Politics, noun. A strife of interests masquerading as a contest of principles. The conduct of public affairs for private advantage.
-Ambrose Bierce

And, in ways Bierce couldn’t imagine – conducting private affairs for public advantage. Affecting elections for example.

Snowden’s NSA is simply the government instantiation of the Facebook/Google/Twitter business models. They are all dedicated to making their subjects “better.”

“The urge to save humanity is almost always a false front for the urge to rule.”
-H. L. Mencken

Order with intent is the model practiced by authoritarians for “your own good,” public or private, from de Blasio to Google.

So, I’ll close with some relevant Friedrich Hayek quotations on good intentions, control of information, collectivist ethics, and the limits of knowledge: All of which apply to government and to the massive private enterprises whose control of information and manipulation of public opinion Hayek couldn’t imagine:

“Everything which might cause doubt about the wisdom of the government or create discontent will be kept from the people. The basis of unfavorable comparisons with elsewhere, the knowledge of possible alternatives to the course actually taken, information which might suggest failure on the part of the government to live up to its promises or to take advantage of opportunities to improve conditions–all will be suppressed. There is consequently no field where the systematic control of information will not be practiced and uniformity of views not enforced.”

“Our freedom of choice in a competitive society rests on the fact that, if one person refuses to satisfy our wishes, we can turn to another. But if we face a monopolist we are at his absolute mercy. And an authority directing the whole economic system of the country would be the most powerful monopolist conceivable…it would have complete power to decide what we are to be given and on what terms. It would not only decide what commodities and services were to be available and in what quantities; it would be able to direct their distributions between persons to any degree it liked.”

“All political theories assume, of course, that most individuals are very ignorant. Those who plead for liberty differ from the rest in that they include among the ignorant themselves as well as the wisest. Compared with the totality of knowledge which is continually utilized in the evolution of a dynamic civilization, the difference between the knowledge that the wisest and that the most ignorant individual can deliberately employ is comparatively insignificant.”

“To act on behalf of a group seems to free people of many of the moral restraints which control their behaviour as individuals within the group.”

“The idea of social justice is that the state should treat different people unequally in order to make them equal.”

Network effect feudalism

This is the most important article I’ve read in 2019. Kudos to Allen Farrington.

I have struggled to make these points to others for a long time, and am generally viewed as a curmudgeon (charitably) or a paranoid fanatic (more typically) for my efforts. There is a bias toward the fallacious “I’ve got nothing to hide” response, because the harm is unseen. People don’t yet grasp that they are not the ones who decide whether they have something to hide*.

Farrington delineates the harm brilliantly.

Given that our rulers feel compelled to ‘do something’ about social media’s disdain for its peons, among whom they number, we can be sure government will make it worse and further entrench the incumbents.

Farrington’s comments on blockchains, free speech, Gab (of which I’m a long time member), Ethereum (which George Gilder referenced in a recent, related interview), and anti-trust are enlightening. The economic analysis is thought provoking. The political implications are consequential. A slice:

“It is not actually free,” [Facebook co-founder Chris] Hughes tells us, “and it certainly isn’t harmless.” But both [Hughes and Senator Elizabeth Warren] seem to believe that Facebook, Google and others succumb to the temptation to inflict such harm solely because they are big. Hence, the solution is to make them smaller. It doesn’t appear to have occurred to either of them that they are big because they inflict such harm.

Facebook and Google are not Standard Oil and AT&T. They operate business models whose network effects tend towards monopoly, due to continuous redeployment of increasing returns to scale. Users pay not with money but with data, which Facebook and Google then turn into productive capital that creates products for another group entirely. The quality of the service to the users—the unknowing and hence unrewarded capital providers—scales quadratically with the size of the network and, since they are free in monetary terms, any serious attempt to compete would require monumentally more capital than could ever generate a worthwhile return. The proper regulatory approach is not to cut off the heads of these hydras one at a time, but to acknowledge that these are fundamentally new economic entities.

Artificial intelligence makes this all the more imperative. By AI, I mean the honing of proprietary algorithms on enormous complexes of unwittingly generated data to identify patterns no human could—identifications that will be re-applied to dynamic pricing decisions and content filtering in order to make what will surely be called efficiency gains and improvements to the user experience. This would all be fine and dandy—as opposed to highly ethically suspect—if the contributors of the data had any idea of their own involvement, either in the contribution itself or in the eventual gain in efficiency. What is really happening here is that information that previously only existed transiently and socially will soon be turned into a kind of productive capital that will only have value in massive aggregations. This is why those who generate the data are happy to do so for free, for it is of no monetary value to them, and it is why the only people who will derive any productive value from it will be the already very well capitalized.

This is an unflattering, but perfectly accurate, description of the business models of Facebook and Google, who stalk you wherever you go on the web, wherever you bring your smartphone, and wherever you interact in any way with one of their trusted partners, all in an effort to manipulate your sensory environment and slip in as many ads as possible. This is so effective that they buy data from outside their platforms to supplement the potency of their manipulations…

[I]f something is free, it is difficult if not impossible to discern the kind of meaningful information that one might from a price in a market. The willingness to pay a price indicates a sincere belief and an honest commitment. There are costs to insincere or dishonest behaviour that will simply be dispersed throughout the network, rather than borne by the perpetrator.
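Farrington’s claim that service quality “scales quadratically with the size of the network” is Metcalfe’s law. A toy sketch (the user counts below are invented round numbers, not real figures) shows why a head-on competitor can’t simply buy its way in:

```python
# Metcalfe's law: a network's value grows roughly with the square of its
# user count, so an incumbent's lead over a smaller rival compounds.
def metcalfe_value(users: int) -> int:
    """Value proportional to the number of possible user pairs (~n^2/2)."""
    return users * (users - 1) // 2

incumbent = metcalfe_value(2_000_000_000)  # a Facebook-scale network
upstart = metcalfe_value(2_000_000)        # a rival with 0.1% of the users

# 1000x fewer users, but roughly 1,000,000x less network value.
print(f"value ratio: {incumbent / upstart:,.0f}x")
```

A rival with a thousandth of the users offers roughly a millionth of the connective value, which is why competing, in Farrington’s words, “would require monumentally more capital than could ever generate a worthwhile return.”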

It is not about the value of an individual’s data: “it is of no monetary value to them.”

You are not just the product Google and Facebook sell; you are the enabling capital in a vast pyramid scheme.

How can we preserve our identity capital? How can we price our data? By making identity data scarce:

“Participants in the [redesigned] network are discouraged from being dishonest or insincere by the price and permanence of their scarce identity…

Several clearly desirable features immediately present themselves. For example, the issue of gatekeepers who exist for technical reasons assigning themselves political authority would evaporate…

So here’s my plea: stop using big tech and venture into the wild.”

Yes. The network effect can only be blunted if individuals stop enhancing it. Call it utopian, but boycotting Google and Facebook is something you control, and doesn’t depend on Senator Warren’s tender, collectivist mercies. Or, Facebook’s Social Justice agenda of the day.

“If a critical mass of users switches away from Google or Facebook, their collapse will be surprisingly quick. This is a very dramatic potential outcome, and I suspect it is more likely that, at a certain rate of user emigration, these companies, and others, will adapt their policies to be more free and open, so as to better compete in this new environment.”

The article is not a long read, but if you want to know what I’m talking about when I mention George Gilder, you’ll want to watch this 45 minute interview regarding Gilder’s book Life After Google. I wished for more Gilder and less interviewer at times, and more depth on some ideas, but for a general audience it’s not a bad look at Google, AI, blockchain, and other things related to Farrington’s post. A few gems from Gilder.

*

“The old cliché is often mocked though basically true: there’s no reason to worry about surveillance if you have nothing to hide. That mindset creates the incentive to be as compliant and inconspicuous as possible: those who think that way decide it’s in their best interests to provide authorities with as little reason as possible to care about them. That’s accomplished by never stepping out of line. Those willing to live their lives that way will be indifferent to the loss of privacy because they feel that they lose nothing from it. Above all else, that’s what a Surveillance State does: it breeds fear of doing anything out of the ordinary by creating a class of meek citizens who know they are being constantly watched.”

~ Glenn Greenwald

Thanks to the Internet of Things

Your trash disposal habits might now require a small EMP generator before you can safely throw away a lightbulb.

Recycling is definitely contraindicated without that EMP. Or a 2-pound sledge (wear eye protection).

The people scanning the conveyor belt to sort actual trash out of the recycling stream could quickly “monetize” burned out lightbulbs without even the bother of diving into a dumpster, and without any computer skills whatever.

Discarded smart lightbulbs reveal your wifi passwords, stored in the clear

I am quite sure this does not apply only to IoT lightbulbs.

The future is stupid, but not stupider than LIFX management. They sell you electronic security breachers so you can implant them yourself. Which would make you the stupidest.

The engineers at LIFX did not encrypt the RSA key on their “smart” lightbulbs, so an enterprising garbage collector who’d ‘learned to code’ could gain access to your home WiFi network because you threw one away.

It isn’t believable that the engineers at LIFX failed to understand this problem.

Therefore, it wasn’t the engineers who decided to ship these Trojan Horses.

Therefore, protestation from LIFX that they’ve cleaned up their act is incredible.

That is, it is as credible as Google and Facebook when they claim they protect your privacy – even though selling it is how they prosper.

This is not to say LIFX planned to harvest your WiFi passwords. It is to say they just didn’t give a shit.
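For the curious: “stored in the clear” means recoverable with nothing more sophisticated than a printable-string scan of a flash dump, which is all the Unix `strings` utility does. A minimal sketch in Python (the dump bytes below are fabricated for illustration; a real attack starts by dumping the bulb’s flash chip):

```python
import re

def printable_strings(blob: bytes, min_len: int = 6):
    """Return runs of at least min_len printable ASCII bytes, decoded."""
    return [m.group().decode() for m in re.finditer(rb"[ -~]{%d,}" % min_len, blob)]

# Hypothetical flash dump: binary noise around unencrypted Wi-Fi credentials.
dump = b"\x00\xffBOOT\x00\x01ssid=HomeNet\x00psk=hunter2secret\x00\xfe\x00"
print(printable_strings(dump))  # ['ssid=HomeNet', 'psk=hunter2secret']
```

No coding skill beyond this is required of the dumpster diver; encrypting the credentials at rest would have defeated it.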

I can’t wait until lightbulbs speak like HAL… I wonder if you can get HAL’s voice on Alexa or Google Home?

“Light?… Off.”
“Sorry, I can’t do that _your name here_.”

Sadly, most Millennials wouldn’t get the reference, not having seen 2001: A Space Odyssey. I’m sure they are installing these bulbs in their parents’ basements.

In China the government tracks your every move

Information Warfare: 1984 Becomes Real In 2024

In the United States, we just let Google and Facebook track us. With Twitter brownshirts and the Maim Scream Media™ as the enforcers.

On the whole, the Chicoms are likely fairer, and they’re certainly more circumspect.

See Mark Steyn: The Drumbeat of the Mob

and

Neo: The Covington chronicles: on hating the face of a teenage boy

I don’t much like Donald Trump, but, sorry, he’s not the problem.

Talk about toxic personalities and hate speech… you collectivists seriously need a privilege check.

Escaping the social media garrote

If you think Big Social Media is strangling free speech, or you’re just fed up with being the product, or you’re realizing what privacy you give up by using them, or you’re just tired of them lying about all of that, you might find this article of interest.

Ready to Get Off Facebook? Reason Reviews 5 Alternative Social Networks.

Crimestop

Is free speech under assault on college campuses? Well, some people, including President Trump, think not.

Most of those skeptics promote a distinction between free speech and “hate speech,” a term Mr. Zuckerberg has yet to define for us; but he’s working on an AI to apply the definition he comes up with: Once all those messy linguistic, contextual, semantic issues humans can’t even deal with are programmed.

That is, he dreams of automating enforcement of Silicon Valley values conforming to regulation he’s requested from our technology-naive and Constitutionally slipshod Congressional placeholders. They’ll be looking to erect an emanating penumbra since: No, there’s no “hate speech” exception to the First Amendment. They have to help Mr. Zuckerberg add one.

We can look to George Orwell for insight into how that public/pirate partnership is likely to work out.

“The mind should develop a blind spot whenever a dangerous thought presented itself. The process should be automatic, instinctive. Crimestop, they called it in Newspeak.”
-George Orwell, 1984

A model is already apparent. Google fired James Damore for failure to crimestop. On campus, they’re calling it “self-censorship.”
The Skeptics Are Wrong Part 2: Speech Culture on Campus is Changing


Very Liberal students care far less about giving offense than about being judged. That is, they worry more about tribal membership in-good-standing and find it implausible their opinions would offend anyone. A collectivist approach.

Conservative students are much more concerned about the campus thought police than Liberal students are. I would have liked to see them less concerned about giving offense to peers, as an indication of individualism, but they know they are surrounded by a great number of people who easily take offense. And they are probably just more polite.

I’m sure you can infer other interesting theories yourself, but the result is not good for any of these students: The Stifling Uniformity of Literary Theory

One wonders whether the students that the academy is producing today could, if asked to, provide the arguments of their ideological or political counterparts, without resort to crude caricature or ad hominem…What might a course look like if a race theorist such as Derrick Bell was studied alongside someone like Thomas Sowell? For about thirty years both Bell and Sowell were consistently among the top five most cited black scholars in American Academia according to The Journal of Blacks in Higher Education. However, as with so many prominent intellectuals, while Sowell is revered among classical liberals, libertarians and conservatives, he is practically unheard of on the left, despite his pioneering work on the economics of race and ethnicity. To borrow Jonathan Haidt’s phrase, liberal intellectuals are in danger of being ‘blind’ not only to the other side’s moral taste buds, but also to their most important thinkers.

Here’s another post I think helps explain why Liberals don’t like free discussion of ideas. They mean well, but can’t be bothered to examine consequences in their quest to perfect the rest of us.

Dumb f**ks

Mark Steyn had exactly the same reaction I did. I watched some bits of Zuck’s testimony on Fox News because I wanted to see the bland boy-face of evil and I wasn’t disappointed. He performed magnificently enough that my wife became pissed off at me for yelling at the TV. The snippet Steyn notes provoked my second loudest yell and an admonition to stop ranting.

On Fox, this bit came before the weaseling he did under examination from Ted Cruz, when I erupted with my loudest commentary. My wife changed the channel at that point.

Zuckerberg’s intentions are what he thinks makes him a misunderstood white hat. In his ignorant isolation he truly thinks his intentions are good: That is what makes him evil.

When he speaks about protecting the “community” he sneeringly arrogates moral superiority, and is too ignorant to even recognize it. When he speaks about “protecting the electoral process” he is saying “War is peace. Freedom is slavery. Ignorance is strength.” He had no concern about “protecting” the 2012 election, when his company actively aided Obama. So be it, as long as we define that as a campaign “contribution in kind,” but stop with the maternalistic condescension.

And give up the moral preening that your mission is defining “hate speech.”

One senator who did understand the dangers ahead was Nebraska’s Ben Sasse. Earlier in the hearing, Zuckerberg had suggested that Facebook will eventually develop algorithms that will sniff out hate speech and be able to address it immediately. “Hate speech — I am optimistic that, over a five to ten-year period, we will have A.I. tools that can get into some of the nuances — the linguistic nuances of different types of content to be more accurate in flagging things for our systems.”

When Sasse’s turn to question Zuckerberg arrived, he asked a simple question: “Can you define hate speech?”

Zuckerberg said it would be hard to pin down a specific definition, and mentioned speech “calling for violence” as something Facebook does not tolerate.

Does anyone at Facebook understand the ramifications of a vague definition of hate speech? Does Zuckerberg think that the sometimes-violent opposition to any viewpoint that is even remotely conservative on college campuses happened in a vacuum?

He’ll be using Fahrenheit 451 as the instruction manual. And on that, Facebook stock rises. We are dumb f**ks.

He’s right about one thing

Facebook’s Zuckerberg faces reporters’ questions

“Facebook (NASDAQ:FB) has two basic questions to address in the Cambridge Analytica data leak scandal, CEO Mark Zuckerberg says on a conference call with media: Can it protect users, and can it make sure it’s not used to undermine democracy.”

The answer to those questions is no, and they aren’t the basic questions, even about the Cambridge Analytica problem he’s trying to pretend is the real issue. Cambridge Analytica is just one example in a long list of Facebook exploitations of its users. A more salient question is, “Can a business whose very basis is sleight-of-hand betrayal of trust be trusted?” The answer to that question is also no.

The second question is simply absurd. First, speech intended to undermine democracy is a right those living in our Republic already possess. Second, Facebook, by its actions to help elect Barack Obama, invented the practice Zuckerberg decries. Cambridge Analytica didn’t even exist when Facebook demonstrated the concept.

“Zuckerberg also says that most users should assume that their publicly available information has been scraped; he’s referring here to those who enabled the ability for friends to search for them by phone number or email address.

“We’ve seen some scraping,” he says. “I would assume if you had that setting turned on that someone at some point has access to your public information in some way.””

He means, of course, “accessed by scrapers not employed by Facebook.” Facebook scraping also includes information not intended to be public, such as scanning all your Facebook Messenger content. And keeping all the videos you thought you had deleted. What does this tell us? That Facebook wasn’t concerned about protecting their most precious resources – your trust and your personal information – despite having been called out on it multiple times. Incompetence by design?

Well, “you” had the setting allowing scraping turned on: It’s your fault. Nothing to do with Facebook’s decisions about default settings buried under three menu layers. And if you didn’t “assume that [your] publicly available information” was going to be scraped then Zuckerberg’s right – it is your fault, because as he’s said, you’re a “dumb f**k.”

But that trick never works!

        This time for sure!
        -Bullwinkle J. Moose.

Facebook Quietly Begins Fact-Checking Political Photos and Videos

Facebook announced today that the company began fact-checking political photos and videos on Wednesday in an attempt to root out fake news. The company announced in a blog post that the changes come as a result of Facebook’s plan to review “ongoing election efforts.”

“By now, everyone knows the story: during the 2016 US election, foreign actors tried to undermine the integrity of the electoral process,” Guy Rosen, vice president of product management at Facebook, wrote. “Their attack included taking advantage of open online platforms — such as Facebook — to divide Americans, and to spread fear, uncertainty and doubt.” Rosen said although the clock cannot be turned back, “we are all responsible for making sure the same kind of attack [on] our democracy does not happen again.” He said Facebook is taking its role in the effort “very, very seriously.”

Mr. Rosen conveniently neglects to mention Facebook’s direct assistance to the Obama campaign in 2012, and ignores Facebook’s unethical psychological experimentation on its users. So, by “same kind of attack” he apparently excludes domestic actors, like Facebook and the Obama campaign, trying “to undermine the integrity of the electoral process,” when they do the same thing of which those pesky Russians are accused. He’s not alone among Facebook luminaries in his facile ethos.

Alex Stamos (Facebook chief security officer) had this to say:

Stamos singled out “organized, professional groups” whose motivation is money. “These cover the spectrum from private but ideologically motivated groups to full-time employees of state intelligence services,” he said. “Their targets might be foreign or domestic, and while much of the public discussion has been about countries trying to influence the debate abroad, we also must be on guard for domestic manipulation using some of the same techniques.”

Stamos apparently is as devoid of self-reflection as he is deficient in a sense of irony.

What is Facebook but an “organized, professional group” whose motivation is money; an “ideologically motivated” clique engaged in “domestic manipulation using some of the same techniques”? Explicitly including what Mr. Rosen called spreading “fear, uncertainty and doubt,” which they euphemized as “mood manipulation” when they did it.

Further reading, or piling on:

Facebook — even as it apologizes for scandal — funds campaign to block a California data-privacy measure

Facebook scraped call, text message data for years from Android phones

Promises, promises: Facebook’s history with privacy

Former Facebook Workers: We Routinely Suppressed Conservative News

Google’s new motto…

Don’t be Facebook.
At Google, of course, that would not mean “respect users’ privacy.” It would mean “don’t get caught.”

I see Facebook CEO Mark Zuckerberg is reacting to his company’s poor user-data stewardship by inviting regulation. Not regulation of his company; he’s asking for political advertising to be regulated.

“Actually, I’m not sure we shouldn’t be regulated,” Zuckerberg said in an interview with CNN that represented some of his first public remarks since the Cambridge Analytica controversy plunged his company into crisis and led to calls for his testimony before Congress.

“I actually think the question is more ‘What is the right regulation?’ rather than ‘Yes or no, should it be regulated?’” Zuckerberg told CNN.

The Facebook CEO said that “he would love to see” new transparency regulations for political advertisements. Facebook has been criticized for a lack of transparency.

OK, Mr. Zuckerberg, I’ll take a shot at “What is the right regulation?”

First, it’s not about political advertising. You’re looking to make government regulation a CYA for Facebook: “Look, we followed the regulations!” You’re asking to “consult” with government on how political advertising should be constrained. Foxes. Henhouse. Plus a helping of partisanship and financial self-interest.

Advertising isn’t the problem. The problem is your business model and its intentional lack of honesty.

The regulation of Facebook, Google, Amazon, Twitter, Apple, etc. should start from the premise that users own their identity data, including when it’s aggregated. This enables micro-payments to those whose data is aggregated, each time it is accessed or updated. Basically, an identity copyright law. You’re using my identity, you have to pay me.

Defining ownership of the data as the individual’s would require absolute, positive opt-in: data can’t be sold without payment and without specific permission. The Big Data companies like their interminable click-through contracts; they love changing the terms of service at will; they love hiding the opt-out buttons. We need these contracts rewritten. One thing would happen for sure: the mandatory opt-in buttons would be prominent, and they would list the payment to be gained.
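Mechanically, the regime described above is simple. Here is a hypothetical sketch (my own illustration, not any proposed statute; the names and fee are invented): access to identity data fails closed unless the owner has affirmatively opted in, and every access accrues a micro-payment to the owner.

```python
from dataclasses import dataclass, field

@dataclass
class IdentityRecord:
    owner: str
    opted_in: bool = False           # never on by default, never buried in a menu
    balance_microcents: int = 0      # micro-payments accrued to the owner
    access_log: list = field(default_factory=list)

    def grant(self) -> None:
        """Explicit, affirmative opt-in by the owner."""
        self.opted_in = True

    def access(self, accessor: str, fee_microcents: int = 50) -> dict:
        """Fail closed without opt-in; otherwise pay the owner and log the access."""
        if not self.opted_in:
            raise PermissionError(f"{self.owner} has not opted in")
        self.balance_microcents += fee_microcents
        self.access_log.append(accessor)
        return {"owner": self.owner}  # the actual identity data, elided here

record = IdentityRecord(owner="alice")
record.grant()                       # without this line, access() raises
record.access("ad_network_1")
record.access("ad_network_2")
print(record.balance_microcents)     # 100: you're using my identity, you pay me
```

The point of the sketch is the default: the record is worthless to an aggregator until its owner flips the switch, and every use leaves a payment and an audit trail.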

Granting ownership of users’ data to the users themselves would also encourage the companies that gather and store it to treat its care as a fiduciary duty. CEO Zuckerberg appears to agree that that is a good idea.

On Wednesday afternoon, Zuckerberg published a post promising to audit and restrict developer access to user data, “We have a responsibility to protect your data, and if we can’t then we don’t deserve to serve you.”

He’s right: Facebook doesn’t deserve to serve you, for exactly the reason he gave. The word “serve” in that sentence can be interpreted in two very different ways. Zuckerberg is only too happy to “serve” you to advertisers. This attitude is long standing, as noted by the New Yorker in 2010:

In [an] exchange leaked to Silicon Alley Insider, Zuckerberg explained to a friend that his control of Facebook gave him access to any information he wanted on any Harvard student:

Zuck: yea so if you ever need info about anyone at harvard

Zuck: just ask

Zuck: i have over 4000 emails, pictures, addresses, sns

Friend: what!? how’d you manage that one?

Zuck: people just submitted it

Zuck: i don’t know why

Zuck: they “trust me”

Zuck: dumb f*cks

Indeed.

While Zuckerberg claims he’s matured since that exchange, “if you ever need any information” nonetheless remains the raison d’être of Facebook. Zuckerberg went on to say, “I’ve been working to understand exactly what happened and how to make sure this doesn’t happen again.” Well, since privacy violations and sleazy ethical conduct just keep happening, he must be a slow learner.

In 2006 Facebook’s introduction of “News Feed” made information public that users had intended to keep private. In 2009, Facebook made posts public by default, when they had been private, again by simply changing its ToS. That attracted the attention of the U.S. Federal Trade Commission. In 2011, Facebook was caught tracking you with its cookies even after you had logged out. Zuckerberg is worried about regulating advertising, but Facebook had no problem in 2013 with the posting of beheading videos. In 2014, the company was forced to acknowledge that it had conducted a psychology experiment intended to manipulate users’ emotions.

The current angst over Cambridge Analytica should be directed at Facebook’s business practices. The same thing happened in 2012 with the Obama campaign – except with Facebook’s active participation. At the time, this was considered a clever advertising use of social media by the Democrats.

So, suddenly, 6 years later, Zuckerberg wants political advertising regulated? You know he made the offer because his lobbyists would write the legislation. It’ll turn into a barrier to competition while likely eroding freedom of speech.

Facebook has repeatedly violated agreements with users, changed ToS without warning, hidden privacy controls deep within users’ profiles, made and allowed unethical use of its data, and directly participated in targeting election advertising. Maybe they’d be more careful, ethical and transparent if you owned the data.

A final word from Zuckerberg:

The real question for me is, do people have the tools that they need in order to make those decisions well? And I think that it’s actually really important that Facebook continually makes it easier and easier to make those decisions… If people feel like they don’t have control over how they’re sharing things, then we’re failing them.

Only one way to fix that. Give them control.

Further reading on owning your own identity:
Who owns your identity?
Google’s Alphabet, “A” is for amoral