Perhaps if transistors hadn’t been invented, running VisiCalc’s descendant, Excel, on a vacuum tube computer would show you the real meaning of global warming…
Let me back up. I want to talk about computer models, starting with some I was running in the early 1970s.
I worked for a company whose business was replacing its clients’ mainframe computers by renting them time on the much larger ones we ran. The clients used various forms of telephone connections, primitive by today’s standards; a 57 kilobit leased line would be a high-speed example. No network, just a point-to-point serial line.
Anyway, the modeling we did was to simulate what it would cost prospects to use our services. As input we were able to get quite precise data about the number of bytes read & written, lines printed, CPU cycles consumed, hard disk capacity, number of lines of code executed, etc., for all the computing done on the machine we were proposing to replace. We also did this for clients contemplating new applications.
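To give a flavor of the arithmetic, here is a minimal sketch in Python of that kind of resource-based cost model. The resource names, rates, and workload numbers are all hypothetical; the real model was vastly more detailed and was re-verified against each IBM system software release.

```python
# Minimal sketch of a resource-based cost model (all rates hypothetical).
# Each workload is metered, and cost is the sum of usage times unit rate.

RATES = {
    "cpu_seconds":   0.04,      # $ per CPU second
    "bytes_io":      2e-9,      # $ per byte read or written
    "lines_printed": 0.0005,    # $ per printed line
    "disk_mb":       0.10,      # $ per megabyte of disk per month
}

def monthly_cost(usage: dict) -> float:
    """Price one month of metered usage against the rate card."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

# One prospect's measured monthly workload (hypothetical numbers).
prospect = {
    "cpu_seconds":   180_000,
    "bytes_io":      4_000_000_000,
    "lines_printed": 250_000,
    "disk_mb":       900,
}

print(f"Estimated monthly cost: ${monthly_cost(prospect):,.2f}")  # $7,423.00
```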
We had a great deal of complexity to deal with, but it was well documented, well known, and precisely accurate. We also had incentive to get it right, because profitability. We exhaustively tested each new IBM system software release against our model, and we continually verified its assumptions across several different mainframe architectures.
Not only that, but the easy stuff was 80% of the model. Mostly this consisted of sorting data into the different sequences required by the programs. With dependable database software, that aspect of computing has mostly faded away. Well, except for those still running 1970s software, like New Jersey. (The comments at that link are interesting, too.)
Sometimes, though, even given all our knowledge, we discovered there were things we didn’t know. Usually, not knowing these things turned out badly.
Like the cost of a CICS transaction… You don’t care what that means, so I’ll spare you the details. Short version: one customer had creatively designed a system that made CICS use five times the expensive resources our exquisitely constructed model assumed.
Even in a nearly closed system, with highly accurate and detailed information about a mechanistic process, and with a monetary incentive, we could get the wrong answer. Because of human innovation.
Anyway, we used 80-column punch cards to construct the individual models and then fed them into the mainframe. Punching the wrong hole, or punching it in the wrong place, had serious consequences in this tedious process. The output was checked meticulously. Tweaking a parameter meant changing the whole construct, not just that one value, and then another run on the mainframe. It was labor- and compute-intensive.
A little later, I purchased a personal computer, a TRS-80 Model I. I also obtained a copy of one of the most important programs ever created for microcomputers: VisiCalc.
VisiCalc was intoxicating! I could change one cell and watch the effects ripple through the spreadsheet in seconds. The need to be meticulous didn’t go away, but errors were easily and quickly corrected, and assumptions could be tested for reasonableness immediately.
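By contrast with the punch-card workflow, here is a toy Python sketch of the mechanism that made spreadsheets so fast to iterate: a cell is either an input value or a formula over other cells, so changing one input recomputes everything downstream. The cell names and formulas are invented for illustration; this shows the recalculation idea, not VisiCalc’s actual implementation.

```python
# Toy spreadsheet: a cell is either a value or a formula over other cells.
# Change one input and every dependent formula recomputes -- the "ripple".

def evaluate(cells, name):
    """Evaluate a cell, recursively evaluating any cells it references."""
    cell = cells[name]
    return cell(cells) if callable(cell) else cell

cells = {
    "units_sold": 1_000,    # input cell
    "unit_price": 4.95,     # input cell
    "revenue": lambda c: evaluate(c, "units_sold") * evaluate(c, "unit_price"),
    "costs":   lambda c: 0.6 * evaluate(c, "revenue"),
    "profit":  lambda c: evaluate(c, "revenue") - evaluate(c, "costs"),
}

print("profit:", evaluate(cells, "profit"))  # 1980.0
cells["unit_price"] = 5.95                   # tweak one parameter...
print("profit:", evaluate(cells, "profit"))  # ...ripples through: 2380.0
```

Recalculation got cheap; verifying the assumptions behind the formulas did not.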
What gradually did go away were constraints on believing the output. I watched this happen over a consulting career spent using such tools (Lotus 1-2-3, Excel) to advise my clients. Despite my decidedly cautionary advice about what we didn’t know we didn’t know, vanishingly few were appropriately skeptical.
“Yes, I am knowledgeable and trustworthy. Yes, that output reflects what you told me. But neither of us can ever know enough.”
This extended introduction brings us to two sets of models now being used to control our lives: models of the consequences of the CCP Pandemic (known to the politically correct as COVID-19), and models of Catastrophic Anthropogenic Global Warming (known, until the next terminology rehash, as “Climate Change”).
Differences between my 1970s models and those: I knew much more about vastly fewer model parameters and their limits; had devastatingly superior, proven data; dealt with a non-chaotic system; and had greater personal consequences for inaccuracy.
The main difference between the CCP and CAGW sets of models is that the CCP models are simpler and have a much shorter time scale.
The similarity is that both have been wildly wrong and are used to argue for massive government expenditures, limitations on freedoms, and citizen surveillance.
Some are even connecting the two. I can see why.