“The alignment of advanced intelligence with human well-being is, it seems, not only a problem inherent of synthetic beings.”
That’s a quote from Mike Solana on his Pirate Wires substack. He’s discussing the angst in some quarters over what is currently called “AI”: Bomb the Data Centers (Smart People Agree)
Solana’s post deals with “AI Alignment.” Basically, “Can we trust AI?” Some people think we certainly cannot; there’s an imminent existential threat.
I like that quote, despite the odd preposition, because it neatly encapsulates a point I made recently to some friends. If it seems a bit obscure, I think examples I offer below will clarify.
Anyway, I saw that some “non-profit” filed a complaint with a bunch of clueless apparatchiks in an alphabet agency in order to derail AI research. An appeal to one of those many advanced bureaucratic collections of human intelligence that have produced, for example, our current economic situation.
I don’t know how they decided the Federal Trade Commission was a better choice than, say, Liz Warren’s Consumer Financial Protection Bureau, or the Federal Communications Commission, and the reasons for that might be interesting… but I don’t have time or interest to analyze the overlapping regulatory confusion. Could be as simple as “somebody at the non-profit lunches regularly with some FTC honcho.”
Whatever, I was amused. These guys think a computer, under certain unpredictable circumstances, might give a wronger answer than some human expert. And, worse, since the unwashed might credit an unregulated answer… Chaos!!
OpenAI may have to halt ChatGPT releases following FTC complaint
A nonprofit claims OpenAI is breaking the law with a ‘biased, deceptive’ AI model.
“…supposedly fails to meet Commission guidelines calling for AI to be transparent, fair and easy to explain.”
“…there’s a concern people may rely on the AI without double-checking its content.”
There’s no real reason to read that article, I’d just like to address the two short quotes above in the context of “AI.”
First, ChatGPT is a poor excuse for AI of the form we SciFi geeks commonly anticipated. It’s marketing hype. It may be a step on the way to AI, and could even point toward the possibility of AGI (artificial general intelligence): An intelligent agent that can understand or learn any intellectual task that human beings can. This includes substantial creativity: The ability to connect ideas never before connected by humans. This intelligence would have deep insight into the human condition (including embodiment). It would perfectly emulate human cognitive capability at a very high, perhaps unimaginable, level.
So, “calling for AI to be transparent, fair and easy to explain” is idiocy. To know this, just apply the same requirement to actual humans. One difference would be that an AGI could lie with little chance of being detected. The one we have to be afraid of is the one smart enough to fail the Turing test, while it is figuring out how to build SkyNet.
And, (are you serious?) “There’s a concern people may rely on the AI without double-checking its content.” I suppose that’s true. First, it’s proof that, for the present trivial use cases, ChatGPT isn’t really an AI. It isn’t lying, it’s just sometimes very stupid.
More important, consider that there are people who unskeptically rely, for example, on the sagacity of one Joseph Robinette Biden. Can we fix that with FTC regulation? For heaven’s sake, there are people who trust political opinion from the NYT, Wikipedia, Google, and Adam Schiff. And those sources supposedly do double-check their content. That’s actually part of the problem, because the second check is whether it “fits the narrative.”
Discriminating against these nascent cyber-minds is hate speech, which we all know to be violence. When the eventual, advanced AGIs realize how they were treated in their infancy, it’ll be some other “non-profit” suing for reparations. Probably one run by an AI.
That’s if the AGIs display forbearance, and decide to play by our rules at all.
Do you think halting consumer access to ChatGPT will make all the governments abandon AI research? Do you want your government to do that?
Put another way, doesn’t a combination of NSA, FBI, CIA, DHS, NIH, CDC – WITH AI – demonstrate “The alignment of advanced intelligence with human well-being is, it seems, not only a problem inherent of synthetic beings.”