
Pause?

Should we take a six-month pause on training systems more powerful than GPT-4?

Last week reminded me of Jan 2020, when folks on Twitter were saying, "This Coronavirus graph looks exponential, and people generally don't appreciate how quickly exponential curves get big."

The feeling was catalyzed by a combination of things: OpenAI releasing plugins, which I saw described somewhere as "giving ChatGPT 5000 arms"; a Microsoft Research paper saying that GPT-4 "exhibits sparks of Artificial General Intelligence"; and my own trivial experiments having GPT-4 write code, which left me thinking, "Huh. Software that's really good at creating software. That sounds like reproduction..."

The possible long-term scenarios for AI's impact on society range from "utopian" to "catastrophic." I don't know which way things will fall in the long run, but I can confidently make three claims about the short run:

  1. AI will have a significant impact on our society and systems.
  2. AI is advancing very fast. 
  3. Our societies and systems generally do not respond well to rapid change.

This week an open letter was published calling on "all AI labs to pause for at least six months the training of AI systems more powerful than GPT-4." Some key quotes from the letter:

Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.
In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems... and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
Pause Giant AI Experiments: An Open Letter - Future of Life Institute

The arguments in favor of a pause seem prudent, but I'd be shocked if it happens. We're in a new arms race, and everyone's incentivized to floor the pedal or be left behind. I also have low confidence that the US government can hold substantive conversations and take meaningful, appropriate action on such a highly technical topic when we can't even keep kids from getting shot in school.

But even if the United States could press pause, should we? Tyler Cowen wrote about this yesterday in "Existential risk, AI, and the inevitable turn in human history":

Given this radical uncertainty, you still might ask whether we should halt or slow down AI advances.  “Would you step into a plane if you had radical uncertainty as to whether it could land safely?” I hear some of you saying.
I would put it this way... We are going to face that radical uncertainty anyway.  And probably pretty soon.  So there is no “ongoing stasis” option on the table.
I find this reframing helps me come to terms with current AI developments. The question is no longer “go ahead?” but rather “given that we are going ahead with something (if only chaos) and leaving the stasis anyway, do we at least get something for our trouble?”  And believe me, if we do nothing yes we will re-enter living history and quite possibly get nothing in return for our trouble.
With AI, do we get positives?  Absolutely, there can be immense benefits from making intelligence more freely available.  It also can help us deal with other existential risks.  Importantly, AI offers the potential promise of extending American hegemony just a bit more, a factor of critical importance, as Americans are right now the AI leaders.  And should we wait, and get a “more Chinese” version of the alignment problem?  I just don’t see the case for that, and no I really don’t think any international cooperation options are on the table.  We can’t even resurrect WTO or make the UN work or stop the Ukraine war.
Besides, what kind of civilization is it that turns away from the challenge of dealing with more…intelligence?  That has not the self-confidence to confidently confront a big dose of more intelligence?  Dare I wonder if such societies might not perish under their current watch, with or without AI?  Do you really want to press the button, giving us that kind of American civilization?

Yesterday, Ben Thompson concluded an article called "ChatGPT Gets a Computer" with this:

I agree with Tyler Cowen’s argument about Existential Risk, AI, and the Inevitable Turn in Human History: AI is coming, and we simply don’t know what the outcomes will be, so our duty is to push for the positive outcome in which AI makes life markedly better. We are all, whether we like it or not, enrolled in something like the grand experiment Hawkins has long sought — the sailboats are on truly uncharted seas — and whether or not he is right is something we won’t know until we get to whatever destination awaits.

The header image was generated by asking GPT-4 to "write me a html/javascript/css program that generates pixel art that demonstrates the concept of a pause. take up the whole screen. black canvas."
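
For the curious, here's a minimal sketch of the kind of page that prompt tends to produce. This is not the actual code GPT-4 returned for the header, just an illustrative guess: a full-screen black canvas with a blocky pause symbol drawn one "pixel" at a time.

```html
<!DOCTYPE html>
<html>
<head>
  <style>
    /* Full-screen black canvas with no scrollbars */
    html, body { margin: 0; height: 100%; background: black; overflow: hidden; }
    canvas { display: block; }
  </style>
</head>
<body>
  <canvas id="art"></canvas>
  <script>
    // Illustrative sketch only; not the actual GPT-4 output.
    const canvas = document.getElementById("art");
    const ctx = canvas.getContext("2d");
    canvas.width = window.innerWidth;
    canvas.height = window.innerHeight;

    const PIXEL = 16; // side length of one pixel-art "pixel"
    const cols = Math.floor(canvas.width / PIXEL);
    const rows = Math.floor(canvas.height / PIXEL);

    // Two vertical bars (the pause symbol), built from individually
    // shaded blocks so the result reads as pixel art.
    for (let x = 0; x < cols; x++) {
      for (let y = 0; y < rows; y++) {
        const inLeftBar = x > cols * 0.35 && x < cols * 0.45;
        const inRightBar = x > cols * 0.55 && x < cols * 0.65;
        const inBarRows = y > rows * 0.25 && y < rows * 0.75;
        if ((inLeftBar || inRightBar) && inBarRows) {
          const shade = 155 + Math.floor(Math.random() * 100);
          ctx.fillStyle = `rgb(${shade}, ${shade}, ${shade})`;
          // Leave a 1px gutter so individual blocks stay visible.
          ctx.fillRect(x * PIXEL, y * PIXEL, PIXEL - 1, PIXEL - 1);
        }
      }
    }
  </script>
</body>
</html>
```

Save it as an .html file and open it in a browser: the screen goes black except for two gray, pixelated bars.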


Update: An earlier version of this post included the names of signatories, but some of the folks said to have signed the letter didn't actually sign it. Since I can't confirm them, I've removed all signatory names from this post.