While the AI carnival dazzles the masses, the regulatory war is breaking out into the open.
OpenAI is calling for more and deeper government regulation.
Mira Murati, OpenAI’s CTO, says that governments should be “very involved” in AI regulation but that a six-month moratorium isn’t the right way.
That involvement would be along the lines of developing safety standards and placing restrictions on how models are developed and used.
Of course, OpenAI has a vested interest in favorable regulation and is lobbying hard for their version of “safety” and “responsible” AI to prevail, just like all other tech companies.
Given ChatGPT’s penchant for political bias (and it’s always in only one direction) and the capabilities of Large Language Models to influence language, thoughts, ideas, and opinion—what we’re seeing is the answer to the question:
Who gets to define reality?
Right now, OpenAI is lobbying hard for them (and other aligned companies and governments) to have the final say.
Are you excited to live in a world defined, dictated, and curated by OpenAI? Or by any particular government?
For now, governments are either seeking input on regulations or planning them. In the US, Schumer and Elon Musk are meeting and discussing how to best control the universe—I mean, regulate AI.
Whoever gets to shape the language and output of these models wins. Good or bad, the outputs you get depend on the prompts you use.
It would be well worth your time to figure out how to modify and have access to your own models.
Enter Lamini:
Train your own LLM, weights and all, with just a few lines of code from their library. Run RLHF on your fine-tuned model and deploy it to your cloud.
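Lamini's own library promises to wrap that workflow in a few calls; since I won't vouch for their exact API here, below is a minimal, generic sketch of the same idea using the Hugging Face transformers and datasets libraries instead. The model name and corpus path are placeholders; swap in whichever open-weight model and data you actually want to train on.

```python
# Minimal sketch: fine-tune a small open-weight causal LM on your own text.
# (Generic Hugging Face workflow, not Lamini's library; names/paths are placeholders.)
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "EleutherAI/pythia-410m"      # any open-weight causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # many causal LMs ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Your own corpus: one example per line in a plain-text file (placeholder path).
dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="my-finetuned-llm",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-5,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("my-finetuned-llm")  # the weights are yours to deploy wherever you like
```

From there, RLHF tooling (for example, the trl library) and hosting on your own cloud are the natural next steps the Lamini pitch is describing.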
With other new open-source LLMs available (plus this one, and more coming), start taking control of your language and reality now, while you still can and before governments decide to kick their censorship regime into overdrive.
In other AI news around the internet
The race to plug AI into everything we do continues unabated.
AI-generated music is becoming indistinguishable from the real thing. Music labels took down a viral Drake song or two. But amidst the cancellation of robot music, Grimes is allowing creators to use her voice and offering 50/50 royalties:
How is she doing it, you might ask?
There’s a blockchain for that:
Aside from AI voices, Apple is getting into the game with an AI-powered health coach, codenamed Quartz, that could help you improve your exercise, sleep, and eating habits. We'll see whether it arrives with iOS 17 in September or, as currently rumored, sometime next year.
While you wait on that, some ex-Apple employees want to replace smartphones with Humane, which uses voice commands, gestures, and projected displays to provide personalized assistance and can even project phone calls onto the user's hand.
One way or another, either through control of language or merging your body with The Machine, we will all be called upon to bow down and worship our new AI gods.
And if you fail to comply, Boston Dynamics GPT Robot Dog will hunt you down.
Start carrying a bag of dog treats now.
Speaking of treats, maybe Mark Zuckerberg needs some cheering up now that his metaverse play is stalling. Or he could just pivot to AI, like everyone else. On Meta's latest earnings call, Zuckerberg waxed poetic about integrating AI into everything (messaging, support, and of course, the "metaverse").
AI is also becoming a good excuse to lay off more people, like at Dropbox. CEO Drew Houston says they’re laying off 500 employees—to focus on AI development.
One way or another, through regulation, coercion, robot dogs, or layoffs—AI is continuing its long march through our world.
I agree, the question of who sets the standards leaves a lot to be desired. If you open it up so everyone gets what they want, there'll be big problems, but even with the way it's structured now we're seeing loads of problems in terms of job layoffs and social engineering.