Zuckerberg’s corollary

In 1889 Herman Hollerith patented a “…method of compiling statistics, which consists in recording separate statistical items pertaining to the individual by holes or combinations of holes punched in sheets of electrically non-conducting material…” The resulting rectangular cardboard slips were used, along with machines of Hollerith’s design, to conduct the 1890 census. Other uses were soon found. For example, punching into a time clock… as an aid to counting beans… initiating the second largest voting dispute in US history. Unless you believe Hillary Clinton’s Russiagate theory, in which case it’s the third largest.

By 1947 the name for the tiny individual bit punched from the cards was established as “chad.” This was a syllabic improvement over “confetto,” though a fluster* of chads is still known by the plural, confetti.

It was at this point that the traditional confetti manufacturing cartel began a slow downward spiral, ameliorated only slightly by the gradual disappearance of the competition: ticker tape. Hanging chads were far in the future.

By the 1960s, Hollerith’s invention had surpassed mere statistical applications, simple time recording, mundane accounting, or future farcical election chaos. Individual consumers were being assailed by “IBM” cards mailed as invoices. We were to carefully return those cards with our payment. No folding, spindling, or mutilation.

I began my IT career in those days as a “unit record operator.” The cards being records, and the various machines (which on occasion mutilated the cards beyond the imagination of the most malicious consumer) being the units operated. Or maybe the single cards were considered units of record. Never thought about that until just now.

In any case, this employment gave me the opportunity to experimentally modify such dunning cards as I received. What would happen if I “overpunched” the amount due? This single new hole meant the figure was owed to me by the vendor, not to the vendor by me.
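For the curious, the trick relies on what card-era programmers called the signed overpunch: in a zoned-decimal field, the zone punch over the last digit carries the sign. Here is a minimal sketch of that convention, assuming the common EBCDIC character mapping; the dollar figures are simply the ones from my story, in cents.

```python
# Minimal sketch of the signed-overpunch convention for zoned-decimal fields:
# the zone punch over the final digit carries the sign. The character mapping
# below follows the common EBCDIC convention; treat it as illustrative.

POSITIVE = {"{": 0, "A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6, "G": 7, "H": 8, "I": 9}
NEGATIVE = {"}": 0, "J": 1, "K": 2, "L": 3, "M": 4, "N": 5, "O": 6, "P": 7, "Q": 8, "R": 9}

def decode_overpunched(field: str) -> int:
    """Decode a zoned-decimal field such as '0001350' or '000135}'."""
    last = field[-1]
    if last in POSITIVE:
        return int(field[:-1] + str(POSITIVE[last]))
    if last in NEGATIVE:
        return -int(field[:-1] + str(NEGATIVE[last]))
    return int(field)  # no overpunch: a plain unsigned number

print(decode_overpunched("0001350"))  #  1350 cents: I owe the vendor $13.50
print(decode_overpunched("000135}"))  # -1350 cents: one extra hole, and the vendor owes me
```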

What would happen if I put a tiny red rectangular bit of tape (designed for the purpose of correction) over the hole of the leading number in the amount due? This would reduce the payment requested from, say, $13.50 to $3.50.

The Columbia Record Club, a thoroughly analog enterprise, was unfazed. Presumably, some other unit record operator had to correct the card when it was rejected by their mainframe, but I never heard about it. I suppose they anticipated enough folding, spindling, and mutilation to have procedures for it.

Meanwhile, computers got much smaller and much cheaper. Confetti dropped off the Sierra Club’s top ten list of industrial waste. Computer errors no longer had to be fixed by anonymous specialists wielding keypunch machines the size of a love seat. Computers and computer errors were being democratized.

By the 1980s the general public was regularly hearing excuses from smaller businesses dependent on computers: “It’s not my fault, the computer says you didn’t pay that invoice,” or “The computer won’t let me give you a refund.” This was a mixed blessing. Immediate knowledge that the business was not set up to serve you has value, but it lacks the opportunity for careful sarcastic refinement provided by the calm pace of mail correspondence with a Record Club.

We hear the generic excuse, “I can’t help you, our computer is messed up,” less frequently these days. Of course, we’re less often interacting with a human, from whom it could be just a generic excuse for inaction. Computers do not offer excuses, nor do they argue.

In the 1980s-90s I was a partner in a small business selling custom software, technical support, and business counsel, mostly to small businesses. All my employees were required to read, as cautionary advice, Gordon R. Dickson’s 1965 short story “Computers Don’t Argue.” I like to think the lesson improved our software, thereby improving the services our customers provided to their customers.

These days there is a lot of bloviating about advanced computer algorithms as artificial intelligence: “AI.” We’re to regard the mindless mining of petabytes of data as intelligence. It isn’t intelligence, much less sentience. And it’s surely not sapience, which is implied as just around the corner. Get back to me when you can actually argue with a computer. Turing’s test has been found inadequate, and playing chess doesn’t qualify as intelligence.

That does not mean that the Algorithm Intelligence of Google, Facebook, Apple, Twitter, Instagram, TikTok, Amazon, Pinterest, etc., etc., etc., etc., isn’t dangerous. That not-yet-even-Paramecian level of intelligence may well destroy our chances of ever knowing Artificial Sapience. The jury is out on the benefit of that possibility.

In the 21st century computer blaming tends more toward, “The network is slow today.” Something with which customers are familiar, and unrelated to the specific business. If some clerk tells you their computer is impeding commerce at your expense, you take your business elsewhere. Corporations have outsourced their keypunch data entry departments to consumers.

Most people have learned to reject deflective technocratic BS as an excuse for poor service. They have, however, not learned to get off Facebook – so maybe natural intelligence is overrated.

We have reached a new level of sophistication, some strange limited modified reverse hangout neo-luddism… Here is a company whose computers are indistinguishable from their business, blaming those computers for their business problems when the computers operate as designed:
Facebook Blames ‘Technical Issues’ for Its Broken Promise to the US Congress

So, here’s Zuckerberg’s corollary of Hollerith’s patent: “a method of compiling statistics, which consists in recording every separate statistical item pertaining to all individuals by holes or combinations of holes punched into the fabric of society.”

*TOC group name nominee, though I would consider tumult, furore, agitation, or ado.

Modeling models

Perhaps if transistors hadn’t been invented, running VisiCalc’s descendant, Excel, on a vacuum tube computer would show you the real meaning of global warming…

Let me back up. I want to talk about computer models, starting with some I was running in the early 1970s.

I worked for a company whose business was replacing the mainframe computers of its clients by renting time on much larger ones we ran for the clients. The clients used various forms of telephone connections, primitive by today’s standards. A 57 kilobit leased line would be a high speed example. No network; just a point-to-point serial line.

Anyway, the modeling we did was to simulate what it would cost prospects to use our services. As input we were able to get quite precise data about the number of bytes read & written, lines printed, CPU cycles consumed, hard disk capacity, number of lines of code executed, etc., for all the computing done on the machine we were proposing to replace. We also did this for clients contemplating new applications.
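If you have never seen one, the skeleton of such a model is nothing exotic: measured usage multiplied by a rate card, summed. Here’s a toy sketch; the resource categories echo the ones above, while the rates and the prospect’s figures are invented purely for illustration.

```python
# Toy version of a usage-based cost model. The resource categories mirror the
# measurements described in the text; every rate and figure here is made up.

HYPOTHETICAL_RATES = {            # dollars per unit -- placeholders, not real pricing
    "cpu_seconds": 0.50,
    "kbytes_read_written": 0.0002,
    "lines_printed": 0.001,
    "disk_megabyte_months": 2.00,
}

def monthly_cost(usage: dict) -> float:
    """Sum rate * measured usage across each resource category."""
    return sum(HYPOTHETICAL_RATES[name] * amount for name, amount in usage.items())

# Figures gathered from the machine being replaced (invented for this sketch).
prospect = {
    "cpu_seconds": 40_000,
    "kbytes_read_written": 9_000_000,
    "lines_printed": 1_200_000,
    "disk_megabyte_months": 300,
}
print(f"${monthly_cost(prospect):,.2f} per month")  # $23,600.00
```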

We had a great deal of complexity to deal with, but it was well documented, well known, and precisely accurate. We also had an incentive to get it right: profitability. We exhaustively tested each new IBM system software release against our model. We continually verified its assumptions across several different mainframe architectures.

Not only that, but the easy stuff was 80% of the model. Mostly this consisted of sorting data into the different sequences required by the programs. With dependable database software, this aspect of computing has mostly faded away. Well, except for those still running 1970s software, like New Jersey. (The comments at that link are interesting, too.)

Sometimes, though, even given all our knowledge, we discovered there were things we didn’t know. Usually, not knowing these things turned out badly.

Like the cost of a CICS transaction… You don’t care what that means, so I’ll spare you the details. Short version: one customer had creatively designed a system that made CICS use 5 times the expensive resources our exquisitely constructed model assumed.

Even in a nearly closed system, with highly accurate and detailed information about a mechanistic process, with monetary incentive – we could get the wrong answer. Because of human innovation.

Anyway, we used 80 column punch cards to construct the individual models and then fed them into the mainframe. Punching the wrong hole, or punching it in the wrong place, had serious consequences in this tedious process. The output was checked meticulously. Tweaking a parameter meant changing the whole construct, not just one parameter, and another run on the mainframe. It was labor- and compute-intensive.

A little later, I purchased a personal computer, a TRS-80 Model I. I also obtained a copy of one of the most important programs ever created for microcomputers: VisiCalc.

VisiCalc was intoxicating! I could change one cell and watch the effects ripple through the spreadsheet in seconds. The need to be meticulous didn’t go away, but errors were easily and quickly corrected. Assumptions were testable for reasonableness immediately.
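The mechanism behind that ripple is simple enough to sketch: formula cells are just functions of other cells, re-evaluated whenever an input changes. The cell names and numbers below are invented, and a real spreadsheet tracks a dependency graph instead of recalculating everything, but the idea is the same.

```python
# Bare-bones sketch of spreadsheet recalculation. Input cells hold values;
# formula cells are functions of the sheet. All names and numbers are invented.

inputs = {"units": 120, "price": 3.50}

formulas = {                                   # listed so dependencies come first
    "sales":  lambda c: c["units"] * c["price"],
    "costs":  lambda c: 0.6 * c["sales"],
    "profit": lambda c: c["sales"] - c["costs"],
}

def recalc(inputs: dict, formulas: dict) -> dict:
    """Re-evaluate every formula; real spreadsheets only recompute dependents."""
    values = dict(inputs)
    for name, formula in formulas.items():
        values[name] = formula(values)
    return values

print(recalc(inputs, formulas)["profit"])  # 168.0
inputs["price"] = 4.00                     # change one cell...
print(recalc(inputs, formulas)["profit"])  # ...and the effect ripples: 192.0
```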

What gradually did go away were constraints on believing the output. I watched this happen in a consulting career using such tools (Lotus, Excel) to advise my clients. Despite my decidedly cautionary advice about what we didn’t know we didn’t know, vanishingly few were appropriately skeptical.

“Yes, I am knowledgeable and trustworthy. Yes, that output reflects what you told me. But neither of us can even know enough.”

This extended introduction brings us to two sets of models now being used to control our lives: Models of the CCP Pandemic (known to the politically correct as COVID-19) consequences and Catastrophic Anthropogenic Global Warming (known, until the next terminology rehash, as “Climate Change”).

Differences between my 1970s models and those: I knew much more about vastly fewer model parameters and their limits; had devastatingly superior, proven data; dealt with a non-chaotic system; and had greater personal consequences for inaccuracy.

The main differences between the CCP and CAGW sets of models are that the CCP models are simpler and have a much shorter time scale.

The similarities between the CCP and CAGW sets of models are that both have been wildly wrong and both are used to argue for massive government expenditures, limitations on freedoms, and citizen surveillance.

Some are even connecting the two. I can see why.