The last several weeks have been filled with debate over whether AI labs should observe a six-month moratorium on training their most powerful AI systems in order to create more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal state-of-the-art systems.
Our take? This is wishful thinking at best, and a foolish notion at worst.
Creating a framework for safer, more robust AI is important
Wishful thinking or not, the wish itself matters. ChatGPT and its ilk are creating a new economy built on an entirely new means of creation. That includes students cheating on their homework and exams, and fake accounts popping up everywhere.
From professors to marketers, we are nowhere near ready for the misuse of these new technologies. But that doesn’t mean a pause will work, or even be entirely possible.
No framework for what to pause and not to pause
The open letter asking for a moratorium doesn’t ask for a pause on all AI development. In fact, it is unclear what a “system more powerful than GPT-4” even constitutes. Does that mean all large language models are out the window? What about specific applications in certain niche markets?
Unlike China, the U.S. has no mechanism for deploying and enforcing such abrupt changes in its governance system, for better or for worse. It runs counter to the country’s free-market philosophy. Suddenly invoking a pause over ambiguous values, such as whom AI should be loyal to, will only fuel an enormous amount of unproductive debate. And that is exactly what has happened over the last few weeks.
A capitalistic society needs a capitalistic intervention
This sounds terrible, but that’s how the U.S. system works. The entire U.S. economy moves on the creation of new opportunities, which can be steered through fiscal policy, not by the sudden enforcement of debatable values.
But frankly, given that the Biden administration has already grossly exceeded its budget and Republicans are breathing down its neck to cut spending, any new fiscal policy looks nigh impossible. Not to mention it probably won’t help Biden win the 2024 election either. There is simply no real political motivation to push this agenda.
Let’s face it: 6 months won’t be enough
Even if we were somehow able to get the moratorium going (which would be an incredible feat in itself), I cannot see a scenario in which all parties come together to agree on a practical framework to guide the development of large language models.
Take autonomous-vehicle public policy, for example. We have been developing that framework since the early 2010s. That’s about ten years, give or take, and twenty times the length of the proposed AI pause. And while we’re close, we are still not done.
Another example? The FDA still hasn’t even been able to put together a framework to evaluate AI usage for diagnostics.
And here we are, talking about the application of far more powerful technologies in a much broader context, one that touches countless industries. We are going to need a lot more time than six months, and even then…
We’re going to fuck it up
And then we will use money to fix our fuck up. Just like how we trashed our global climate and now pour billions into fixing it.
Again, the U.S. is driven by the opportunity to make money. Right now, there is no money to be made in building safer (and all that jazz) AI. It is nice and great for society, but we are not the Nordics. We are nowhere near socialistic enough to do this for the “Goodness of the World.” It’s a sad, harsh reality, but it’s not a false one. In fact, there is more money to be made by letting AI stay unsafe, inaccurate, and disloyal, so while it is politically correct to say otherwise, the action will follow the money, not the other way around.
Then, as usual, once we fuck things up enough, we will realize that we’ll lose money if this keeps going, just as ever-worsening hurricanes washing everything away made us realize our real estate would soon be underwater, so now we have to fix the climate.
From there, we will see an AI security opportunity, just as we have cybersecurity. We didn’t just start the internet and realize we needed cybersecurity shortly after. No, we started the internet, decided it was a wonderful thing, used it a lot, and then some people said, “Hey, let’s just move crime online.” Only after we lost a lot of money did people start cybersecurity companies.
This is history on repeat. The name of the game just adds the word “AI” in front of “security,” and on we go again.