Mortality

This is the fourth post in AI for Mortals, but really it’s the beginning.

What’s come before has been a preface: a serious beginner’s introduction to what the new AI is and how it works. Here are those posts:

If you don’t (yet!) know anything about the new AI — generative AI — or if what you know has been limited to the confusing and often superficial/sensational/inaccurate portrayals in the popular press, please consider starting with these posts.

From here on, our focus will shift: we’ll still be talking about what the new AI is, but our main topic will be what it means for mortals like us.

I should tell you that this particular post goes to some dark places. I promise the sun will be coming out by the end, and future installments of AI for Mortals will be brighter!

Ask — or tell — me anything

I’ll pin this milestone post to the AI for Mortals homepage, where I hope it can attract discussion, not just about what’s said below but about anything you want to talk or ask about. If a question, comment, or bit of AI news is interesting to you, it’s probably interesting to me, and to other AI for Mortals readers. Please do consider sharing it in a response here. (Responses to other posts are very welcome too, of course, and you’re always welcome to email me directly.)

For those who have been with AI for Mortals since it was a humble Google Group, responding here takes the place of sending mail or doing a Reply All to that group, except that it won’t add traffic to others’ inboxes. So please, fire away!

Being mortal

In Part 1 of the introduction, I had a little fun with the word mortal:

When I talked to some of you about the possibility of doing this, you smiled and referred to it as “AI for Dummies”. That’s kinda right, in that this is for people with zero background in tech. But I’m going with AI for Mortals. Cute, huh? Partly it’s just that none of you are dummies! But…

But… if mortals isn’t just a more respectful way to say dummies, then what is it?

Stepping back from this little newsletter, let’s take the word in the cosmic sense:

We are mortal beings with immortal aims.

I found these words attributed to Lailah Gifty Akita (on Goodreads). I don’t know this writer and couldn’t find the original context. But I like what she has captured here.

Having leapt from the earth unchoosing, we find ourselves in a particular place at a particular time, our fate in the hands of forces we don’t control. Yet, with myriad others, each in their own time and place, we find ways to paint the world with awareness and hope, intention and agency, and — when we can — joy.

Against the Ganges shore and a black night sky, a young man in ceremonial attire holds a tower of flame aloft: a mortal illuminating his part of heaven and earth with the Hindu Aarti fire. (Photo by the author.)

It’s not a matter of knowing little or knowing a lot. In Zen Buddhism and many other places, cultivating a beginner’s mind is wisely recommended for novices and advanced practitioners alike. Think of it as being a dummy, raised to a fine art.

By analogy, the mortals in this newsletter’s title are those for whom the new AI is a fate we don’t control. Unless you’re a billionaire, a tech CEO, or a head of state (and maybe even then, but that’s another story), this is you. The new AI is upending your world, and in that you have lots at stake but little say. This is true whether you’re a spring-green newcomer to AI or a research scientist at a top lab.

For this subject — something utterly new under the sun — beginner’s mind is exactly the right prescription. We’ll see again and again that attempts to understand the new AI via familiar paradigms (is it the new search engine? iPhone? social media? printing press? crypto?) provide minor insights at the cost of obscuring the big picture playing out right before our eyes.

What are the “immortal aims”, in Ms. Akita’s words, that can help us reach beyond seeming disempowerment? As citizens, consumers, and developers, what awareness do we need, what hopes and intentions shall we pursue, and how do we find our agency?

I’ll always let you know how I view these things, but here’s my real hope for AI for Mortals: that it will be of use to you as you think about them for yourself.

What’s your p(doom)?

Of course, there’s more to mortality than being subject to forces you don’t control. There’s also the whole “we’re all gonna die” thing.

According to the following statement, posted on May 30, 2023 by the Center for AI Safety (CAIS) and signed by hundreds of AI stars and superstars, AI may have an exciting role to play in our demise:

The CAIS statement, in full: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
A typically understated perspective from the AI cognoscenti.

Extinction! Well… that’s a bummer.

Signatories include Geoffrey Hinton and Yoshua Bengio, two of the three Turing Award-winning scientists regarded as the “godfathers of deep learning” (which the new AI is based on). Also Demis Hassabis, Sam Altman, and Dario Amodei, the CEOs of Google DeepMind, OpenAI, and Anthropic respectively, currently the leading developers of frontier AI models. Also Bill Gates. Bill McKibben. Kevin Scott, the Chief Technology Officer of Microsoft. A host of well-known professors, government officials, scientists, and other notables. Grimes is there, though not her sometime partner Elon Musk; a bit of a surprise, since he’s famously an AI doomer.

The 2023 Expert Survey on Progress in AI canvassed 2,778 published AI researchers. In one question, the survey asked respondents whether they believe superhuman AI (which most agree is on the way) will be on balance good or bad for humanity. About two thirds said they think more good than bad, but

of these net optimists 48% gave at least a 5% chance of extremely bad outcomes such as human extinction.

I’d hate to see the pessimists!

Another set of questions asked about respondents’ p(doom) — that’s slang for the probability you’d assign to advanced AI leading to worldwide human catastrophe. (The survey didn’t use this specific term, which doesn’t have a precise or consistent meaning even within the AI safety community.) On average, respondents estimated the probability that future AI will cause “human extinction or similarly permanent and severe disempowerment of the human species” to be 16.2%. Slightly better than the one-in-six odds of blowing your head off on your first try at Russian roulette.

Why don’t we just stop?

We all agree on the correct answer to the Russian roulette risk: don’t play.

Taken at face value, the extinction statement and high p(doom) estimates seem to suggest a similar answer for AI. But no one’s stopping; on the contrary, we’re accelerating, and many of the most aggressive drivers of acceleration, such as the CEOs of leading AI companies, are the same people signed on to doomer or doomer-adjacent points of view.

Why is this? (Disclaimer: most of what I say about this is opinion, in some places speculation. You be the judge.)

It’s worth noting that some people have stopped, taking themselves out of the game to advocate for AI safety, or simply to avoid contributing to something they don’t believe in. The most famous example is Geoffrey Hinton, who resigned from Google in May of 2023 to be able to “freely speak out about the risks of A.I.” So of the three Turing Award-winning “godfathers”, Hinton is now largely a doomer, Yoshua Bengio remains active in AI development but signed on to the extinction risk statement, and Yann LeCun remains an unabashed booster.

Reality check: no one could honestly believe defections are materially slowing AI progress. A massive flood of interested individuals continues to pour into the field.

In some cases, the concerns people express are surely disingenuous in the first place. For example, I’m sure Sam Altman is at least partly serious when he says OpenAI is developing AI to “benefit all humanity”, just as I’m sure sincerity was somehow involved when he and his co-founders named their now thoroughly closed, black-box company. I’m equally sure his concerns will never lead him to give up his power in the industry or dilute his company’s competitive position, and therefore I’m sure that no matter how concerned he becomes, he won’t be pressing for deceleration. And I’m sure he’s aware that a visible commitment to long-term responsibility helps OpenAI attract and retain talented employees, diverts attention from more immediate safety issues, and helps OpenAI position itself as a leader in defining the regulatory climate.

Some people continue to work in the field so they can be voices for safety within organizations, or hands actively working on safety measures. Some reason that “If I don’t do it, someone worse will.” There’s a geopolitical version of this: if my nation doesn’t compete in AI, we’ll be at the mercy of nations that do. These are entirely legitimate things to think about in view of the manifest reality that simply quitting doesn’t slow down the train.

None of these things is the most important factor. Rather, it’s this: most of the people sounding the alarm about AI risks also believe these technologies promise world-changing benefits. They very reasonably want to achieve the benefits while avoiding the harms. The extinction statement doesn’t call for ending AI development; its message is that “Mitigating the risk…should be a global priority.” Similarly, like other survey-takers, people reporting their p(doom) aren’t conducting a scientific analysis; they’re trying to tell us something. In the case of the “optimistic doomers” mentioned in the 2023 Expert Survey (above), I believe the only explanation for their responses is that they believe the risks can be mitigated, and they’re urging us to make sure that happens.

Are they right about that? That the risks can be mitigated? How sure do you have to be when the price of being wrong might be human extinction?

Will we lose control?

There’s ongoing furious debate among very, very smart people about whether we’re destined to lose control of AI, and if not, what it will take to make sure we don’t.

This is actually a pretty easy question, if you approach it with your beginner’s mind. Do you see it?

The answer is no. We’re not going to lose control of AI, because you can’t lose what you never had. Consider the AI that runs Meta’s Facebook platform. Expert technical analyses can try to shed light on how Meta’s wizards can set goals for that AI, and what could go right or wrong with keeping its actual behaviors aligned with those goals. But that’s at the micro level of what each neural network does at the point of each operation: sentiment classification, semantic embedding, whatever. It tells us nothing about the macro impact of the integrated platform. More importantly, from the mortal point of view, who cares? Meta’s goals are not our goals, and we are, in the status quo, powerless to affect them.

Some of you may be thinking, “Well, I just don’t use social media.” But if you think that means you’ve avoided the harmful (and beneficial) effects of the way social media AIs work, you’re wrong. Facebook and similar platforms surveil you whether you have an account with them or not. More importantly, regardless of whether you use their products, even if you’ve never touched a computer or phone in your life, you’re living in a world they’ve drastically altered.

This specific point makes social media an important cautionary tale with respect to AI. I might do a whole post someday on how our collective behavior as mortals has left us with less benefit and more harm from Twitter than we might have had in an alternate universe where we — especially non-users — were paying better attention. We mustn’t let the same thing happen with AI.

None of this should be surprising. It’s about the scale at which these systems operate. We — humanity as a whole — mortals — simply don’t know how to assemble intention and act coherently at global scale. We see this when we look at AI, but we see it equally when we look at climate curves, political dysfunction, or endless war. As I wrote in a comment on lesswrong.com last year:

Humanity doesn’t have control of even today’s AI, but it’s not just AI: climate risk, pandemic risk, geopolitical risk, nuclear risk — they’re all trending to [existential risk], and we don’t have control of any of them. They’re all reflections of the same underlying reality: humanity is an infinitely strong infant, with exponentially growing power to imperil itself, but not yet the ability to think or act coherently in response. This is the true threat — we’re in existential danger because our power at scale is growing so much faster than our agency at scale.
This has always been our situation. When we look into the future of AI and see catastrophe, what we’re looking at is not loss of control, but the point at which the rising tide of our power makes our lack of control fatal.

Just over a year ago, about two months before CAIS published its extinction risk statement, the Future of Life Institute released its own open letter entitled Pause Giant AI Experiments. It currently bears over 33,000 signatures, including many of the same names found on CAIS’s statement (even Elon Musk this time!). The letter asked all AI labs to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4” and said that if this couldn’t be done quickly, “governments should step in”.

There was a lot of support, a lot of publicity, but there’s been no pause. The intervening year has been one of ever-accelerating development by an exponentially growing set of players on an ever-expanding range of projects. It’s emblematic of the degree of control we mortals (do not) have over AI, not to mention the other existential threats. On that front, there’s nothing to lose.

Mortal beings

According to all this, we don’t need to fear losing control, but only because we never had it in the first place. We aren’t trying to defend a safe space against disruption; we’re already on the brink, in danger of losing our hold on many fronts, AI being just one among them. Meanwhile, our collective, uncontrolled power to harm ourselves continues to accelerate.

What do you think? Does this accord with your recent experience?

If so, how do we live with it? It’s not a rhetorical question, and I’m sure you’ve thought about it plenty. Aside from the awareness we all have of our mortality as individuals, anxiety for the near-term continuation of our species is now widespread and widely recognized. This isn’t all about AI: the preceding link actually references climate anxiety, and many of us alive today can vividly remember — or still experience — convictions of doom related to other threats including nuclear weapons and pandemics. That said, there are young people today who have lost interest in financial security or resolved not to have children due to their fear of near-term AI-driven catastrophe.

Better minds than mine have addressed these questions, but here’s my take: we need the humility to recognize that it’s not given to us to know how such huge things are going to work out. It’s not our business really. Our job is to help our fellow mortals, past, present, and future, paint the world with awareness and hope, intention and agency, and — when we can — joy.

I once watched a lecture at a chess tournament where someone was going over a game, discussing the moves available to one of the players in a given position. As he explained why one specific move was the best choice, someone in the audience interrupted. “But isn’t Black still losing here?” The speaker paused; you could see the wheels turning as he considered just what this questioner needed to hear. Finally he said, “The grandmaster doesn’t think about winning or losing. The grandmaster thinks about improving their position.” I don’t remember who won that game, but I remember the lesson, and it applies to a lot more than chess.

Let us be grandmasters. Let us be serious about our mortality, but not deadly serious. Lively serious, making the best moves we can, improving our position. We don’t know our timelines, but we know it’s not our work alone. Our fellow mortals have been, are, and will be doing it with us. Let us shine only light upon them.

“Immortal aims”

Near the top of this post, I made an analogy between our individual status as mortals in the cosmos and our disempowered position with respect to AI. Taking a cue from Lailah Gifty Akita’s words — We are mortal beings with immortal aims — I asked what our “immortal aims” should be in the AI world. What should we believe, and what should we try to do, that can have an impact on the AI powers that be?

What follows is my take (or more accurately, the bare beginnings of a take) on that question. As a citizen, a consumer, and perhaps a developer, I hope you’re thinking about your own.

Where the new AI fits in

I don’t know if it’s a surprise given everything I’ve said so far, but I’m not in favor of trying to stop or slow AI progress. (I also don’t think stopping it is possible, but even if it were, I wouldn’t want that.)

By the time I first encountered the new AI, I had already been stuck for years trying — in a regular person, amateur way — to think about the problems of human agency at scale. At that time, AI itself wasn’t on my list of concerns; it was about things like the climate crisis, political/social dysfunction, and economic inequality.

In all these areas and more, the ability of mortals to exercise power — not just as individuals, but even collectively — wanes to nothing as one ascends the ladder of scale from the local arena to the regional, national, and global. The consequences of this disempowerment appear increasingly problematic. It couldn’t be clearer than in the dwindling prospects for meeting climate targets, in devastating wars launched to advance the political interests of specific politicians, and in the prospect of seeing the world’s first trillionaires within the next few years.

Two things have stood out to me as impediments to mortal expressions of intention and agency at higher levels of scale:

  • Massive volumes of detailed information become so overwhelming that only large and powerful organizations (or extremely wealthy individuals, able to hire armies of lawyers and accountants) can navigate them.
  • Conflicting ways of framing and expressing values and priorities make distributed consensus hard to reach, or even to recognize when it already exists.

I won’t try to make this case in detail. That would be a book, and I’m not the person qualified to write it. But whether your priorities are similar to mine or very different, you’ve probably experienced it for yourself.

Having lived in a society struggling with these two impediments shaped my reaction to learning about the new AI: I was struck by what seemed — and still seems — to be its promise for making headway against them:

  • Regarding the first, it can digest astronomical volumes of detailed information and bring them to bear on an individual’s particular situation. (Even conventional AI can do this, as you know from watching the social media platforms help themselves to significant chunks of the economy in return for their ability to deliver personalized advertising.)
  • Regarding the second, it has a universe of human values built in, and the ability to engage in fluent dialogue about them.

Of course, I have no idea — nobody does — about how to turn these qualities of the new AI into a vehicle for human empowerment. But the raw potential appears to exist there, and I haven’t seen it anywhere else. We need to figure it out, because the alternative is our disastrous current trajectory.

But what about the, y’know, extinction thing?

Let’s look again at what I asked above:

In the case of the “optimistic doomers” mentioned in the 2023 Expert Survey (above), I believe the only explanation for their responses is that they believe the risks can be mitigated, and they’re urging us to make sure that happens.
Are they right about that? That the risks can be mitigated? How sure do you have to be when the price of being wrong might be human extinction?

When people contrast the benefits of AI with its risks, what they say can seem surreal. You tend to hear benefits like accelerated discovery of new drugs, automated tutoring for students and other learners, better management decision-making, and automated assistance for scientists and engineers. These are real, they’re exciting, and they’re only a few examples among many. But… but are you really putting them up against a risk of the literal destruction of the human race?

My answer — and my hunch is it’s shared by the optimistic doomers in general, whether they know how to articulate it or not — is that the risk from AI is only part of the much larger dynamic I discussed in the preceding section. It does no good to rein in AI if the rest of the horsemen continue to bear down on us. But if AI can help mortals assemble our power, we make progress on all fronts at once.

(For what it’s worth, I also think the p(doom) estimates expressed in the Expert Survey are way too high. I’m not sure what my own would be, but certainly less than 1%. It’s too much to defend this here and now; maybe that’s a future post!)

What should we be doing now?

I don’t have a grand plan for how we should use the new AI to empower us as mortals. Maybe there won’t be a grand plan; maybe it will be a host of efforts that put down one brick at a time. (For an example of one person trying to lay one brick, see the paragraph on Alice Hunsberger below, under If you want more to read…)

A few initial thoughts come to mind.

Support those working effectively for safety. I’ve said that I believe, and I think most experts believe, that AI’s risks can be mitigated. But that doesn’t mean they’ll mitigate themselves; we have to make it happen. I’m disappointed and a little shocked to realize that not only do I not have any suggestions for you here, I haven’t even been doing anything myself. I will fix both things. (I knew writing would make me a better person!)

If and when you have the opportunity to interact with political officials, members of the media, or activists, even in such a simple way as by answering a survey, make sure they know you prioritize AI safety.

Think and talk about how the new AI can work to empower mortals. Where do you see possibilities for the new AI to be involved in work you’re already involved in, especially going forward as it rapidly improves? Where do you see the two impediments holding mortals back? Does that suggest ways AI might help? What do people around you think? If you’re a newbie, who is using AI around you? What are they doing, and what ideas and needs can you share with them? If you’re a developer, how do you see the new AI empowering ordinary people? What can you build? The more discussion we have around this, involving — especially! — those of us who will never touch AI tools ourselves, the more good we can do. I sincerely hope a bit of this discussion can occur here.

The AI companies are incentivized to suppress output that gets anywhere near political opinion or other topics regarded as sensitive. This works against mortal empowerment. If and when you have the opportunity, make it known that you prioritize the LLM version of free thought and expression: wide-ranging and exploratory output even at some (not unlimited) risk of giving offense.

Demand open-source AI. This is the one immortal aim to rule them all. People have legitimate questions about open-source AI risks: security/privacy, misuse, bias/representation, governance, and intellectual property rights all get more complicated (though also more accessible) in the open-source arena. These are real issues and need to be addressed. Nonetheless, the bottom line is non-negotiable. No risk is so great that it should make mortals okay with the new AI being kept under lock and key by a handful of private (or even public) gatekeepers.

A bare beginning. But I look forward to developing these and other immortal aims — together with you. Onward!

If you want more to read…

The Center for AI Safety, organizers of the extinction risk statement referenced throughout this post, have published An Overview of Catastrophic AI Risks, which I would recommend to anyone, though not right at bedtime. It’s well-written, accessible, thorough, and realistic. If you read it, you can consider yourself very well informed on the subject of AI’s longer-term, existential risks. Note that it omits issues that are more localized or incremental in scope but are occurring today and are also critically important: bias and representation, equity, privacy, job market disruption, and carbon footprint, to name a few. (We’ll talk about all of these in future AI for Mortals posts.) Bear in mind also, lest you crawl under your bed never to emerge, that they are collecting all the worst-case scenarios in one place, with little honey to help the medicine go down. They’ve done an admirable job of it, but remember that similar catastrophic risk profiles could be assembled for many other activities we’ve engaged in for a long time, and lived to tell the tale. Substitute books or pharmaceuticals for AI in some of their scenarios and you’ll see what I mean. “Similarly, corporations could exploit books to manipulate consumers and influence politics.”

I’m personally a lot less on board with the Future of Life Institute’s pause letter, but here it is if you’d like to take a look: https://futureoflife.org/open-letter/pause-giant-ai-experiments/.

Alice Hunsberger is a veteran of the content moderation wars who is now writing a newsletter called Trust & Safety Insider. She’s written a post called Content policy is basically astrology? in two small parts. Here are part 1 and part 2. It’s a fascinating example of one person thinking about how to use the new AI for mortal empowerment in one area, in light of all messy reality and a variety of anticipated consequences — some welcome, some not.

Here’s Andrew Marantz, in The New Yorker, with Among the A.I. Doomsayers (metered paywall), which is fun and informative, but also displays what I consider an unfortunate and unnecessarily patronizing attitude toward some people who are a lot smarter about AI than he is, and a lot less silly than he paints them. It’s currently fashionable to dismiss doomer concerns either as distractions from more immediate safety issues, or, as Marantz puts it, getting “hung up on elaborate sci-fi-inflected hypotheticals”. As I’ve said, I have differences of my own with the hardest-core doomers, but the current eyerolls make me want to rush to their defense. These critiques never seem to come with any actual counterarguments. Those doing the shushing tend to be the same people who want us to “listen to the science” in relation to the perils of climate change. They’re right about that, and they’d be wise to adopt the same attitude here. In particular, “elaborate” and “sci-fi-inflected” are adjectives that perfectly describe LLMs’ actual behaviors. We should be hypothesizing about them just as hard as we possibly can.


This article originally appeared in AI for Mortals under a Creative Commons BY-ND license. Some rights reserved.