ORDERS OF MAGNITUDE
For eighty years humans have lived under the threat of a nuclear exchange between superpowers, intended or unintended. Despite the elapsed time, many still place this threat at the top of their list of global concerns, something that continues to keep them up at night, and all the more so of late. Others have never learned, or have largely forgotten, that their parents, grandparents, and great-grandparents rigged the whole of the northern hemisphere with these highest of explosives, and they happily go about their lives never considering it.
Regardless of where folks sit on that spectrum, when presented with the facts (their numbers and yield; who has them; their astonishingly fast modern delivery systems and, thus, curt decision-making time; the ease with which accidents can and have happened; the ramifications of a many-siloed riposte; and on), there seems to be a broad consensus that, in a world inherently permeated by perils, we have made for ourselves an additional, abominably intractable one that should not even exist. Maddeningly, too, this is a problem we refuse to solve, preferring instead to gift this ghastly heirloom to every subsequent generation. And it is from this place that we may be seeing the emergence of the first flickerings of artificial intelligence. Though just about everyone agrees we are not yet living with the initial bloom of inorganic sentience, some in the know are expressing concern that we may be much further along, and moving much faster, than anticipated on the journey to that pinnacle (or terminal) feat of science and engineering: artificial general superintelligence.
For a taste of how different this moment is from any other, it helps to notice the quickening pace of change. To do so, folks often reference Moore's law: that it takes about 18 months to two years of technological advancement to double the number of transistors we can pack into an integrated circuit (thereby improving the speed and capacity of our devices along a predictable exponential curve). What matters here is that it took 50 years of science and engineering for Moore's law to gift us an increase in transistor density of 10 orders of magnitude. That is a big, fast change, and a technological leap that has fundamentally altered our world in so very many ways. Yet the processing power used for artificial intelligence training has increased by 10 orders of magnitude in only a decade. In 2013, cutting-edge neural networks ran on one or two petaFLOPS (a petaFLOP being a quadrillion, or 10^15, floating-point operations per second). Here in 2023, the cutting-edge systems use roughly five billion times that processing power. That makes the incredible, head-spinning rate of change of Moore's law appear positively sluggish.
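For anyone who wants to check the arithmetic, here is a rough back-of-the-envelope sketch in Python. The doubling times and compute figures are my own illustrative assumptions, chosen to match the round numbers quoted above rather than any measured values.

```python
import math

def orders_of_magnitude(factor: float) -> float:
    """How many powers of ten a growth factor spans."""
    return math.log10(factor)

# Moore's law: transistor density doubling roughly every 18 months for ~50 years.
moore_factor = 2 ** (50 / 1.5)  # ~1.1e10
print(f"Moore's law over 50 years: ~{orders_of_magnitude(moore_factor):.1f} orders of magnitude")

# Frontier AI training compute: roughly 2e15 operations per second in 2013,
# and roughly five billion times that a decade later (close to 10 orders of magnitude).
ai_2013 = 2e15
ai_2023 = 5e9 * ai_2013  # ~1e25
growth = ai_2023 / ai_2013
print(f"AI training compute over 10 years: ~{orders_of_magnitude(growth):.1f} orders of magnitude")

# The doubling time implied by packing that growth into a single decade.
doublings = math.log2(growth)  # ~32 doublings
print(f"Implied AI compute doubling time: ~{10 * 12 / doublings:.1f} months")
```

Run as written, the sketch reproduces the contrast in the paragraph above: about 10 orders of magnitude in 50 years for Moore's law versus nearly the same growth in 10 years for AI training compute, which implies a doubling time of only a few months.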
So, acknowledging this, and also what feels like a broad celebration of the coming revolution in, well, everything (and, of course, the eventual birth of what will be, from all perspectives and for all purposes, a god), paired with relatively few voices out there calling for folks to appreciate the challenges inherent in aligning our own interests (and those of the rest of life) with the interests of a non-biological superintelligence, I thought I would compare nukes with what most folks imagine an artificial general superintelligence will be.
WHAT MAKES AI SO DIFFERENT FROM NUKES?
• Anyone is able to visit Los Alamos, Semipalatinsk, or Hiroshima
• Nuclear devices are not inscrutable black-boxes: physicists understand the workings of nuclear weapons
• Folks never wildly overestimate their predictive powers regarding nuclear weapons and their impacts
• Anyone is able to calculate in advance the trajectory of an ICBM and the yield of an atomic warhead before building or launching one
• When Edward Teller speculated that the heat from a nuclear blast might fuse hydrogen atoms into helium, as happens on the sun, unleashing a chain reaction igniting the world’s oceans and atmosphere and extinguishing all life, and Arthur Compton shared those concerns, this was taken seriously and resulted in a consilience of different independent calculations convincing experts this would not happen
• Nukes never have surprising emergent properties or states
• We can be confident nukes are not self-aware and do not have complex inner lives — because they are not composed of gargantuan matrices of perfectly opaque unfathomableness — and so we don’t have to worry about these missiles being individuals and the ethics therein when we compose them, set them off, or decommission them
• Nuclear weapons do not have minds and those minds they don’t have do not operate at a million times the speed and with a million times the capacity of individual biological minds
• Nukes are incapable of deception or manipulation or boredom. And, conveniently, they never develop narcissism or psychosis
• No one ever daydreamed about being read bedtime stories by, getting homework help from, or finding companionship in a Soviet-era hydrogen bomb
• No one ever considered asking the Tsar Bomba to solve nuclear proliferation, to find a cure for radiation poisoning, or to figure out what to do with nuclear waste
• Humanity understands the extreme dangers inherent in nuclear weapons, and as such top scientists, researchers, policy-makers, and politicians have open conversations about those dangers, and have done so for generations
• When folks talk about a nuclear exchange ushering in, with a flash, the end to the industrial age, agriculture, and above ground living for much of civilization, folks don’t imagine that to be a good thing and no one is ever heard calling for it
• Even an all-out nuclear exchange between global powers is unlikely to extinguish humanity, never mind all terrestrial and most aquatic life
• The facilities and materials for nuclear weapons production are relatively easy to identify and locate. (No one could ever write code making everything connected to the internet suddenly a contributing component to Iran’s uranium enrichment project. And a North Korean Hwasong-17 is only infinitely less likely to do so)
• There are no self-improving nuclear weapons; they never increase their speed or yield a thousandfold between launch and impact (minutes or hours in organic time, but two whole eternities for intelligent machines)
• Nuclear weapons do not self-replicate, meaning that when launched they never multiply 10,000 times for every minute of flight, with each new one taking on a new target and trajectory
• There aren’t dozens of companies backed by venture capitalists and private corporations attempting to generate privately owned nuclear arsenals
• Nations and whole continents have spelled out plans for dealing with nuclear armaments and nuclear war. Flawed as they may be, none of those plans include bullshit on the order of: “All will be well so long as there’s rapid proliferation of open-sourced nuclear weapons technology online”
So what does the above mean? Well, based on my reading of Bostrom, Tegmark, and Yudkowsky, the unknowns and certain threats associated with a superintelligence carry with them concerns orders of magnitude beyond those related to nuclear weapons. And yet there is a profound lack of interest or bother. In fact, recent AI news seems to have landed with roughly the impact of an updated spec sheet for the next iPhone. Still, I'm convinced the extinction scenarios presented are so real that they should take priority far ahead of any efforts to prevent a global nuclear exchange, or even anything climate change-related. It seems we are in a place where computer processors should be tracked the same way fissile materials are. Low-end GPUs are the new unenriched uranium. Large clusters of high-performing GPUs, or even the stated desire to acquire them, must be seen as more threatening than a rogue state establishing an entire network of enrichment plants, rocket factories, mobile launch platforms, and targeting systems. They would have to be seen as a direct threat not just to civilization but to the faintly flickering flame that is life on Earth. And, therefore, the need to reduce the risk of future large-scale AI training runs, like those that gave rise to GPT-4, is such that the world must be fully willing to deliver a preemptive strike against anyone compiling such massive GPU clusters, even if that means running the risk of a global war or a world-altering nuclear exchange. (And, of course, as the technology improves, this same threat becomes possible with fewer and fewer resources.)
All this may sound dramatic, even over-the-top. However, as Yudkowsky points out, even with 60 years of science fiction tackling these ideas and decades of scholarship dealing with the emergent realities, we are not even at a place where we know what questions need asking. That should tell everyone a lot about the problem. And that is true while we've simultaneously made huge strides in obscuring and complexifying so much of what's going on. (For a good example of this, GPT-4 was trained on sources that include discussion of concepts like intelligence, perception, consciousness, qualia, and the like. This, of course, makes the essential task of interrogating GPT-4 about its own experience near-impossible...) So it's not just that no one has yet conceived of a method for sorting out whether a very stupid AI is lying to you, or has ulterior motives, or is self-aware, but that, seemingly unavoidably, we will soon find ourselves facing a technology that surpasses the intelligence of all humanity while also deploying and using those resources at a rate that makes, from its perspective, our quickest possible response (even answering a 'yes' or 'no' question) so slow as to be of no interest to it. By definition, it could never be so dimly intelligent, or so lacking in curiosity, as to wait out the eons between a query and a response.
Like, imagine establishing a phone call with an alien race (one that we know to be extraordinarily different from our own or any other known form of life) in which each syllable takes 40,000 years to transmit. In this scenario, from our perspective, it might take the entire lifetime of our own species just to establish that we are in fact communicating with an alien intelligence, let alone to work out how to do so effectively. This, sadly, would not be something any human, any human institution, or even entire civilizations could afford to concern themselves with. Could our species even contend with it in any way at all? Each syllable would be something like a fossil, perhaps temporarily placed in a museum, but one for which the methods of preservation, and even the language of those who did the preserving, go extinct a dozen times before the arrival of just the next alien syllable.
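To put rough numbers on that timescale gap, here is a small, purely illustrative calculation. The one-million-times speedup is the figure from the list above; the waiting times are arbitrary assumptions chosen only to make the analogy concrete.

```python
# Convert human wall-clock waiting times into the subjective time of a mind
# running one million times faster (the speedup figure from the list above).
SPEEDUP = 1_000_000

def subjective_years(wall_clock_seconds: float) -> float:
    """Years of subjective time experienced by a mind running SPEEDUP times faster."""
    return wall_clock_seconds * SPEEDUP / (60 * 60 * 24 * 365)

print(f"Answering a yes/no question in 10 seconds -> ~{subjective_years(10):.1f} subjective years")
print(f"Replying within one minute                -> ~{subjective_years(60):.1f} subjective years")
print(f"Replying within one day                   -> ~{subjective_years(86_400):,.0f} subjective years")
```

Under these assumptions, a ten-second human answer corresponds to a few subjective months for such a mind, a one-minute reply to roughly two subjective years, and a one-day reply to well over two thousand: its own version of the 40,000-year syllable.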
You see, this problem is very much unlike how it is often framed: as a game of chess against an opponent who is simply far better than you at chess. And, as Yudkowsky suggests, it doesn't look like a problem we need to quickly get a handle on so much as one we need to find a way of undoing and halting entirely; not, in the end, to prevent what now seems inevitable, but only to buy us just a little more time.
UPDATE (September 2024):
That Alien Message: "This video is an adaptation of 'That Alien Message', a short story published by Eliezer Yudkowsky in 2008."