The Week in AI: Who Controls The AI Printing Press?
Banning technology has never worked before, so why are people convinced it will work now with AI?
“We think it can be a printing press moment,” Altman said. “We have to work together to make it so.”
This week, Sam Altman (OpenAI CEO), IBM chief privacy officer Christina Montgomery, and NYU professor Gary Marcus testified before the Senate Judiciary Committee.
If AI is indeed a printing press moment (and I’d argue it’s more significant in terms of impact), then we should not be surprised to see large companies, institutions, and politicians band together to ensure control and monopoly.
The worldwide spread of the printing press meant a greater distribution of ideas that threatened the ironclad power structures of Europe.
In 1501, Pope Alexander VI promised excommunication for anyone who printed books without the church’s approval. In the decades that followed, the works of Martin Luther and John Calvin spread anyway, bringing into reality exactly what the Pope had feared.
Since the 20th century, despots and politicians have exerted enormous effort to stifle, control, limit, and eliminate free speech. The first thing any totalitarian form of government does is come for your speech. These efforts are ongoing and have picked up steam over the past decade. Governments worldwide, especially in Europe, are competing on how fast they can destroy free speech (and free thought).
That’s because governments are acutely aware of their failure to regulate social media and the internet. Senators at the hearing affirmed that they intend to learn from their past mistakes with data privacy and misinformation issues:
“Congress failed to meet the moment on social media,” Blumenthal said. “Now we have the obligation to do it on AI before the threats and the risks become real.”
So they’re seizing this moment to ensure that doesn’t happen again. They fully intend to make sure we peasants understand we’ve been allowed too much freedom, power, and speech—and the party’s over.
Of course, no one dares admit too openly that this is what they’re doing (though the facade cracks often these days).
Not to worry: Sam Altman is happy to oblige with the perfect cover.
In other words, only the most effective printing presses need to be regulated. You can tinker with your own little printer that breaks often and always runs out of ink—and isn’t powerful enough to make a dent. And you’ll never be allowed near models and AI that are truly useful.
It remains to be seen whether open-source LLMs can become as good as, say, GPT-4 or Claude. And even if they do, they’re not yet easy to use and may never become as ubiquitous as ChatGPT.
Altman helpfully proposed the formation of a U.S. or global agency that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards.”
The play is to ensure OpenAI, Google, Microsoft, et al. remain in complete control of the most powerful models, protected from any competitors by a regulated monopoly. The “safety” they all call for is about building their own impenetrable moat.
“This is your chance, folks, to tell us how to get this right. Please use it,” said Senator John Kennedy (R-La.). “Talk in plain English and tell us what rules to implement.”
Governments are more than happy to help out. Because, in exchange, this is a rare historical opportunity to exert total control over a populace. Senator Dick Durbin (D-Ill.) was giddy when he remarked that he could not recall a time when representatives of private sector entities had ever pleaded for regulation.
The playbook for imposing regulatory strangulation on innovation, entrepreneurship, and freedom of speech, thought, and action is to shroud AI risk in a dark cloak of science fiction tropes. Think of a powerful AI that launches nukes. If they can keep you terrified of a rogue AI roaming around destroying the world, you’ll accept any regulation or system of control they offer—much like what happened during the past three years of the pandemic.
This is a distraction from the real danger:
The real danger is AI in the hands of governments and a select few corporations that already control our reality via the internet and tech. And that’s where we’re headed: a true WEF wet dream of public-private partnerships in lockstep, which is nothing other than the old gods of communism rising again. AGI’s long march continues, amassing an army of willing and useful idiots.
Now is the time to become familiar with and use open-source LLMs while you still can. If they are ever to become as powerful as the models controlled by OpenAI and Google, we need more intelligent people using and working on them.
Why does the phrase “a government-corporate regulatory cooperative for the governance of A.I.” sound like “the hastening of humanity’s doom”?