Philosopher ‘pretenders to the throne’

This is a nice, short (7 min) introduction to Friedrich Hayek’s insights on emergent order. If you haven’t read Road to Serfdom (free downloads at the link), maybe this will nudge you to do so.

“Order without intent: How spontaneous order built our world,” from The IHS on Vimeo.

Allowing order without intent to flourish is how we might avoid the tyranny of good intentions.

Related, from Edward Snowden:

“The most unflattering thing is to realize just how naïve and credulous I was and how that could make me into a tool of systems that would use my skills for an act of global harm. The class of which I am a part of, the global technological community, was for the longest time apolitical. We have this history of thinking: “We’re going to make the world better.””

The idea that “making the world better” is apolitical shows Snowden is still naive and credulous. The toolmakers of the global technological community may have good intentions. They may be motivated by thoughts of the benefits they are bringing to humanity. They may also be motivated by profit and ideology.

How a better world is constituted, in any case, is an ethical and moral question beyond the ken of their metadata, and in direct conflict with the ethical ‘principles’ demonstrated by their business models.

Who defines “better”? We have ample evidence Google/Facebook/Twitter aren’t up to the task.

“Making the world better” can be apolitical only in terms of each individual’s actions. It cannot be apolitical for giant corporations whose tools are designed to deceive users into acts of self-harm: a system of fools.

Politics is the very essence of social media and the control of access to information.

Politics, noun. A strife of interests masquerading as a contest of principles. The conduct of public affairs for private advantage.
-Ambrose Bierce

And, in ways Bierce couldn’t imagine: conducting private affairs for public advantage. Affecting elections, for example.

Snowden’s NSA is simply the government instantiation of the Facebook/Google/Twitter business models. They are all dedicated to making their subjects “better.”

“The urge to save humanity is almost always a false front for the urge to rule.”
-H. L. Mencken

Order with intent is the model practiced by authoritarians for “your own good,” public or private, from de Blasio to Google.

So, I’ll close with some relevant Friedrich Hayek quotations on good intentions, control of information, collectivist ethics, and the limits of knowledge, all of which apply to government and to the massive private enterprises whose control of information and manipulation of public opinion Hayek couldn’t have imagined:

“Everything which might cause doubt about the wisdom of the government or create discontent will be kept from the people. The basis of unfavorable comparisons with elsewhere, the knowledge of possible alternatives to the course actually taken, information which might suggest failure on the part of the government to live up to its promises or to take advantage of opportunities to improve conditions–all will be suppressed. There is consequently no field where the systematic control of information will not be practiced and uniformity of views not enforced.”

“Our freedom of choice in a competitive society rests on the fact that, if one person refuses to satisfy our wishes, we can turn to another. But if we face a monopolist we are at his absolute mercy. And an authority directing the whole economic system of the country would be the most powerful monopolist conceivable…it would have complete power to decide what we are to be given and on what terms. It would not only decide what commodities and services were to be available and in what quantities; it would be able to direct their distributions between persons to any degree it liked.”

“All political theories assume, of course, that most individuals are very ignorant. Those who plead for liberty differ from the rest in that they include among the ignorant themselves as well as the wisest. Compared with the totality of knowledge which is continually utilized in the evolution of a dynamic civilization, the difference between the knowledge that the wisest and that the most ignorant individual can deliberately employ is comparatively insignificant.”

“To act on behalf of a group seems to free people of many of the moral restraints which control their behaviour as individuals within the group.”

“The idea of social justice is that the state should treat different people unequally in order to make them equal.”

Network effect feudalism

This is the most important article I’ve read in 2019. Kudos to Allen Farrington.

I have struggled to make these points to others for a long time, and am generally viewed as a curmudgeon (charitably), or a paranoid fanatic (more typical) for my efforts. There is a bias toward the fallacious “I’ve got nothing to hide,” response, because the harm is unseen. People don’t yet grasp that they are not the ones to decide if they have something to hide*.

Farrington delineates the harm brilliantly.

Given that our rulers feel compelled to ‘do something’ about social media’s disdain for its peons, among whom they number, we can be sure government will make it worse and further entrench the incumbents.

Farrington’s comments on blockchains, free speech, Gab (of which I’m a long-time member), Ethereum (which George Gilder referenced in a recent, related interview), and anti-trust are enlightening. The economic analysis is thought provoking. The political implications are consequential. A slice:

“It is not actually free,” [Facebook co-founder Chris] Hughes tells us, “and it certainly isn’t harmless.” But both [Hughes and Senator Elizabeth Warren] seem to believe that Facebook, Google and others succumb to the temptation to inflict such harm solely because they are big. Hence, the solution is to make them smaller. It doesn’t appear to have occurred to either of them that they are big because they inflict such harm.

Facebook and Google are not Standard Oil and AT&T. They operate business models whose network effects tend towards monopoly, due to continuous redeployment of increasing returns to scale. Users pay not with money but with data, which Facebook and Google then turn into productive capital that creates products for another group entirely. The quality of the service to the users—the unknowing and hence unrewarded capital providers—scales quadratically with the size of the network and, since they are free in monetary terms, any serious attempt to compete would require monumentally more capital than could ever generate a worthwhile return. The proper regulatory approach is not to cut off the heads of these hydras one at a time, but to acknowledge that these are fundamentally new economic entities.

Artificial intelligence makes this all the more imperative. By AI, I mean the honing of proprietary algorithms on enormous complexes of unwittingly generated data to identify patterns no human could—identifications that will be re-applied to dynamic pricing decisions and content filtering in order to make what will surely be called efficiency gains and improvements to the user experience. This would all be fine and dandy—as opposed to highly ethically suspect—if the contributors of the data had any idea of their own involvement, either in the contribution itself or in the eventual gain in efficiency. What is really happening here is that information that previously only existed transiently and socially will soon be turned into a kind of productive capital that will only have value in massive aggregations. This is why those who generate the data are happy to do so for free, for it is of no monetary value to them, and it is why the only people who will derive any productive value from it will be the already very well capitalized.

This is an unflattering, but perfectly accurate, description of the business models of Facebook and Google, who stalk you wherever you go on the web, wherever you bring your smartphone, and wherever you interact in any way with one of their trusted partners, all in an effort to manipulate your sensory environment and slip in as many ads as possible. This is so effective that they buy data from outside their platforms to supplement the potency of their manipulations…

[I]f something is free, it is difficult if not impossible to discern the kind of meaningful information that one might from a price in a market. The willingness to pay a price indicates a sincere belief and an honest commitment. There are costs to insincere or dishonest behaviour that will simply be dispersed throughout the network, rather than borne by the perpetrator.

It is not about the value of an individual’s data: “it is of no monetary value to them.”

You are not just the product Google and Facebook sell; you are the enabling capital in a vast pyramid scheme.
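Farrington’s “scales quadratically” point is Metcalfe’s law: a network of n users contains n(n-1)/2 possible pairwise connections, so the aggregate an incumbent harvests grows far faster than its user count. A back-of-envelope sketch in Python (the illustration is mine, not Farrington’s):

```python
def pairwise_connections(n: int) -> int:
    """Number of possible user-to-user links in a network of n users."""
    return n * (n - 1) // 2

# Doubling the user base roughly quadruples the possible connections,
# which is why a late entrant can never match the incumbent's aggregate.
for users in (1_000, 2_000, 4_000):
    print(users, pairwise_connections(users))
```

This is also why “a critical mass of users switching away” works in reverse: the value of the aggregate collapses quadratically as users leave.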

How can we preserve our identity capital? How can we price our data? By making identity data scarce:

“Participants in the [redesigned] network are discouraged from being dishonest or insincere by the price and permanence of their scarce identity…

Several clearly desirable features immediately present themselves. For example, the issue of gatekeepers who exist for technical reasons assigning themselves political authority would evaporate…

So here’s my plea: stop using big tech and venture into the wild.”

Yes. The network effect can only be blunted if individuals stop enhancing it. Call it utopian, but boycotting Google and Facebook is something you control, and doesn’t depend on Senator Warren’s tender, collectivist mercies. Or, Facebook’s Social Justice agenda of the day.

“If a critical mass of users switches away from Google or Facebook, their collapse will be surprisingly quick. This is a very dramatic potential outcome, and I suspect it is more likely that, at a certain rate of user emigration, these companies, and others, will adapt their policies to be more free and open, so as to better compete in this new environment.”

The article is not a long read, but if you want to know what I’m talking about when I mention George Gilder, you’ll want to watch this 45 minute interview regarding Gilder’s book Life After Google. I wished for more Gilder and less interviewer at times, and more depth on some ideas, but for a general audience it’s not a bad look at Google, AI, blockchain, and other things related to Farrington’s post. A few gems from Gilder.


“The old cliché is often mocked though basically true: there’s no reason to worry about surveillance if you have nothing to hide. That mindset creates the incentive to be as compliant and inconspicuous as possible: those who think that way decide it’s in their best interests to provide authorities with as little reason as possible to care about them. That’s accomplished by never stepping out of line. Those willing to live their lives that way will be indifferent to the loss of privacy because they feel that they lose nothing from it. Above all else, that’s what a Surveillance State does: it breeds fear of doing anything out of the ordinary by creating a class of meek citizens who know they are being constantly watched.”

~ Glenn Greenwald


Internet safety notes, modeled after advice to some friends, most of whom are aware of my IT paranoia. You may find it useful, or not.

Presently, I’m using Firefox because Apple updated Safari, permanently breaking 3 of 4 add-ons I considered very important to safe browsing.  I checked out some other browsers (Brave, Opera…) because I didn’t really want to go back to Firefox after they trashed their CEO several years ago for a campaign contribution.  I went back to Firefox anyway because it offered add-ons that met my needs.  My configuration is described below:

First, I use the built-in Firefox blocking (trackers, third-party cookies, cryptominers and fingerprinters) and set “delete all cookies and site data upon closing Firefox” to “yes.” I also delete all history upon exit. I set the location, camera, microphone and notifications permissions to my satisfaction. Call it “Hell, no!”

I block pop-up windows, get warned if a website tries to install an add-on, and block deceptive content (I have to accept Firefox’s opinion on this, or override it). I also run the certificate-checking options.
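For readers who prefer a `user.js` file to clicking through the preferences UI, the settings above roughly correspond to the prefs below. These pref names are current as of recent Firefox releases, but verify them in about:config before relying on them; this is a sketch, not an official Mozilla recommendation.

```js
// Tracking protection: trackers, cryptominers, fingerprinters
user_pref("privacy.trackingprotection.enabled", true);
user_pref("privacy.trackingprotection.cryptomining.enabled", true);
user_pref("privacy.trackingprotection.fingerprinting.enabled", true);

// Block third-party cookies (1 = block all third-party)
user_pref("network.cookie.cookieBehavior", 1);

// Clear cookies, site data, and history on shutdown
user_pref("privacy.sanitize.sanitizeOnShutdown", true);
user_pref("privacy.clearOnShutdown.cookies", true);
user_pref("privacy.clearOnShutdown.history", true);

// Block pop-up windows
user_pref("dom.disable_open_during_load", true);
```

Place the file in your Firefox profile directory; the prefs are applied at startup.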


Second, I use DuckDuckGo Privacy Essentials.  This has a very simple interface for tracker blocking.  It should be redundant, as should several items listed below.  I think of it as just another layer.  I never use Google for search, except through an option (!g) provided by DuckDuckGo.

Third, NoScript.  To watch YouTube, for example, I have to temporarily allow YT to run scripts.  You can allow that permanently if you get annoyed.  I erase the resulting data immediately after watching a video, using the sixth item below.

Fourth, I have a Firefox add-on called Multi-Account Containers.  It lets you set categories named whatever strikes your fancy, and assign to those categories any URL(s) you wish.  This creates separate containers for websites by category. Cookies downloaded by one container are not visible to other containers.  You will immediately see the advantage of isolating cookies: Facebook, for example, could not see any of my Twitter visits, even if I used both.

Fifth, I use Privacy Badger from EFF, another simple-interface blocker.  It presents red, yellow, and green sliders for tracking attempts.  Again, it should be redundant.

Sixth, there is a Clear Browsing Data add-on which I use immediately after visiting any site I’m forced to use.  I will know what URLs were the offenders by having had to permit them in one or more of the above add-ons.  It deletes:
Browsing history
Cached images and files
Autofill form data
Download history
Service Workers
Plugin data
Saved passwords
IndexedDB data
Local storage data

Seventh, CanvasBlocker, which blocks canvas-fingerprinting trackers.  It should be redundant to the built-in Firefox fingerprinting option.

Also, in front of that, and applying to all traffic (email, for example) are Freedome VPN and F-Secure X-Fence.  The VPN makes my IP appear to come from Miami, New York, or elsewhere depending on my mood.  I switch randomly.  It also encrypts all the traffic so my ISP has no idea what I’ve done and can’t commercialize any of my interactions.  Freedome also provides a list of “harmful” websites and you have to override warnings to see them.  Interestingly, I’ve reported half-a-dozen false positives to Freedome and they’ve removed the blocks.  I’m pretty sure the complaints which caused them to red-flag those sites came from SJWs.  Nothing remotely harmful to the sane was on any of them.

X-Fence monitors every attempt to write anything on my machine.  (Turn it off for any software update.)  It lets me decide to allow or deny; once, until quit, until restart, or forever.  Of course, you have to let your browser write cookies, or it won’t work, but then the add-ons above come into play.  I’m able to block incessant ‘updates’ from Adobe and other apps.  These are not cookies, but executables, and they are still trackers.

At first, this whole thing can be a big pain.  Especially X-Fence.  You have to decide which of many arcane processes you will allow, though the “learning mode” eases that pain considerably.  This is true to some extent with NoScript, too.  After a week, this drops off dramatically and you will have learned a lot.

Should you not wish to go to this trouble, I’d recommend Privacy Badger, the built-in Firefox privacy settings, DuckDuckGo Privacy Essentials, and Multi-Account Containers.

I can’t comment on the level of interaction required for just that subset, but I’m sure it will break some sites and require your intervention if something doesn’t work (you can just turn off the first three; I anticipate no problem from the cookie isolator).  I have customized my banking, for example.  It is interesting when they change their scripts and cookies: it lets me look at what they’ve done, and it would surely cause a spoofed website to fail.

Oh, and I run Sophos malware scanning in real time.  

All of the above are free except the VPN.


‘Ideological Enforcement:’ Twitter Blocks Heritage Foundation Director Over Trans Sports Tweet

Expert Psychologist Blocked on Twitter for Expressing Clinical Opinion on Transgenderism

Twitter is a virus of the mind

I have a GAB account, @Hershblogger, but I don’t use it a lot. Just don’t remember to, and AFAIK there’s no API to connect it to new posts.

Thanks to the Internet of Things

Your trash disposal habits might now require a small EMP generator before you can safely throw away a lightbulb.

Recycling is definitely contraindicated without that EMP.  Or a 2-pound sledge (wear eye protection).

The people scanning the conveyor belt to sort actual trash out of the recycling stream could quickly “monetize” burned out lightbulbs without even the bother of diving into a dumpster, and without any computer skills whatever.

Discarded smart lightbulbs reveal your wifi passwords, stored in the clear

I am quite sure this does not apply only to IoT lightbulbs.

The future is stupid, but not stupider than LIFX management. They sell you electronic security breachers so you can implant them yourself. Which would make you the stupidest.

The engineers at LIFX stored your WiFi credentials in the clear and did not encrypt the RSA key on their “smart” lightbulbs, so an enterprising garbage collector who’d ‘learned to code’ could get onto your home WiFi because you threw one away.
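The underlying failure is that the credentials sat in flash as plain printable text, so no cryptanalysis is needed: a `strings`-style scan of a firmware dump finds them. A minimal sketch in Python (the firmware blob, SSID, and passphrase here are made up for illustration):

```python
import re

def printable_strings(blob: bytes, min_len: int = 6) -> list[str]:
    """Return runs of printable ASCII at least min_len long, like `strings`."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[ -~]{%d,}" % min_len, blob)]

# Hypothetical flash dump: binary noise around credentials stored in the clear.
firmware = b"\x00\xff\x13MyHomeNetwork\x00\x07hunter2passphrase\x00\xfe"
print(printable_strings(firmware))  # the SSID and passphrase fall right out
```

Encrypting the stored credentials, or wiping them on factory reset, would have defeated exactly this one-liner.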

It isn’t believable that the engineers at LIFX failed to understand this problem.

Therefore, it wasn’t the engineers who decided to ship these Trojan Horses.

Therefore, protestation from LIFX that they’ve cleaned up their act is incredible.

That is, it is as credible as Google and Facebook when they claim they protect your privacy – even though selling it is how they prosper.

This is not to say LIFX planned to harvest your WiFi passwords.  It is to say they just didn’t give a shit.

I can’t wait until lightbulbs speak like HAL… I wonder if you can get HAL’s voice on Alexa or Google Home?

“Light?… Off.”
“Sorry, I can’t do that _your name here_.”

Sadly, most Millennials wouldn’t get the reference, not having seen 2001: A Space Odyssey. I’m sure they are installing these bulbs in their parents’ basements.