Subject: Elites and AI
------
From: Luke Muehlhauser <>
Date: Mon, Jul 8, 2013 at 6:40 PM
To: Jonah Sinick <>

Hi Jonah,

This is the email thread — with the subject line "Elites and AI" — that I hope to publish in full with the article this email thread will be used to produce.

As we've discussed, I want to further investigate the question I raised in Will the world's elites navigate the creation of AI just fine? and Elites and AI: Stated Opinions.

I'd like to continue the investigation by looking at somewhat-analogous historical cases. As we discussed in person, for now let's focus on historical cases that are analogous on several of the following dimensions:

  1. AI may become a major threat in a somewhat unpredictable time.
  2. AI may become a threat when the world has very limited experience with it.
  3. A good outcome with AI may require solving a difficult global coordination problem.
  4. Preparing for the AI threat adequately may require lots of careful work in advance.
  5. Elites have strong personal incentives to solve the AI problem.
  6. A bad outcome with AI would be a global disaster; a good outcome with AI would have global humanitarian benefit.

Here are some (relatively recent) historical cases to consider in the context of the reference classes suggested by the above list:

  1. 2008 financial crisis.
  2. Climate change.
  3. Iraq War.
  4. Leaders being deposed or assassinated.
  5. Eradication of smallpox.
  6. Nuclear proliferation.
  7. Recombinant DNA.
  8. Nanotech.
  9. Near-Earth objects.
  10. Chlorofluorocarbons.
  11. Risks to critical infrastructure from solar flares.
  12. Cyberterrorism.
  13. Swine flu.

Luke Muehlhauser

Executive Director

------
From: Jonah Sinick <>
Date: Tue, Jul 9, 2013 at 1:12 PM
To: Luke Muehlhauser <>
Hi Luke,

Regarding

I'd like to continue the investigation by looking at somewhat-analogous historical cases. As we discussed in person, for now let's focus on historical cases that are analogous on several of the following dimensions:

After the conversation, we decided that what we're trying to do is more subtle than this:

  • Some of the criteria (when taken in isolation) are necessary conditions for inclusion of historical cases where the world's elites did solve a problem.
  • Some of the criteria (when taken in isolation) are sufficient conditions for inclusion of historical cases where the world's elites did not solve a problem.
  • Some of the criteria (when taken in isolation) are necessary conditions for inclusion of historical cases where the world's elites did not solve a problem.
  • Some of the criteria (when taken in isolation) are sufficient conditions for inclusion of historical cases where the world's elites did solve a problem.

The criteria that are necessary are not necessarily sufficient, and vice versa.

Addressing the criteria in turn:

#1 AI may become a major threat in a somewhat unpredictable time.

We don't want to count instances where the world's elites did successfully address a major threat that did arise at a predictable time as evidence in favor of the world's elites successfully navigating the creation of AI, based on this criterion alone.

But we do want to consider instances where the world's elites did not successfully address a major threat despite it having arisen at a predictable time as evidence against the world's elites successfully navigating the creation of AI.

#2 AI may become a threat when the world has very limited experience with it.

We don't want to count instances where the world's elites did successfully address a threat that the world did have experience with as evidence in favor of the world's elites successfully navigating the creation of AI, based on this criterion alone.

But we do want to consider instances where the world's elites did not successfully address a threat despite the world having experience with it as evidence against the world's elites successfully navigating the creation of AI.

#3 A good outcome with AI may require solving a difficult global coordination problem.

We don't want to count instances where the world's elites did successfully address a threat that did not require solving a difficult global coordination problem to address as evidence in favor of the world's elites successfully navigating the creation of AI, based on this criterion alone.

But we do want to consider instances where the world's elites did not successfully address a threat despite it not requiring solving a difficult coordination problem to address as evidence against the world's elites successfully navigating the creation of AI.

#4 Preparing for the AI threat adequately may require lots of careful work in advance.

We don't want to count instances where the world's elites did successfully address a threat that did not require lots of careful work in advance to address as evidence in favor of the world's elites successfully navigating the creation of AI, based on this criterion alone.

But we do want to consider instances where the world's elites did not successfully address a threat despite doing so not requiring a lot of careful work in advance as evidence against the world's elites successfully navigating the creation of AI.

#5 Elites have strong personal incentives to solve the AI problem.

We don't want to count instances where the world's elites did not successfully solve a problem that the world's elites did not have strong personal incentives to solve as evidence against the world's elites successfully navigating the creation of AI, based on this criterion alone.

But we do want to consider instances where the world's elites did successfully solve a problem despite not having strong personal incentives to solve it as evidence in favor of the world's elites successfully navigating the creation of AI.

#6 A bad outcome with AI would be a global disaster; a good outcome with AI would have global humanitarian benefit.

We don't want to count instances where the world's elites did not successfully address a threat, when the threat would not be a global disaster, as evidence against the world's elites successfully navigating the creation of AI, based on this criterion alone.

But we do want to count instances where the world's elites did successfully address a threat, when the threat would not be a global disaster, as evidence in favor of the world's elites successfully navigating the creation of AI.

------
From: Jonah Sinick <>
Date: Wed, Jul 10, 2013 at 6:28 PM
To: Luke Muehlhauser <>

I spent 3 hours reading about smallpox eradication.

How smallpox eradication does or doesn't fit the criteria

  • #1 Smallpox didn't arrive at an unpredictable time. On the contrary, it had already arrived before the eradication campaign.
  • #2 The world didn't have experience eradicating a disease before smallpox was eradicated, but a number of nations had eliminated smallpox.
  • #3 Smallpox eradication required solving a difficult global coordination problem, but in a way disanalogous to AI safety (see below).
  • #4 Preparing for smallpox eradication required effort in advance in some sense, but the effort had mostly already been exerted before the campaign was announced.
  • #5 Nations without smallpox had an incentive to eradicate smallpox so that they didn't have to spend money on immunizing citizens to prevent the virus from being (re)introduced to their countries. For example, in 1968, the United States spent about $100 million on routine smallpox vaccinations.
  • #6 Smallpox can be thought of as a global disaster: as of 1966, about 2 million people were dying of smallpox each year.

Main takeaways

My impression is that the factors that enabled smallpox eradication are:

  • Eradication efforts seem to have been in basically everybody's interest.
    If I read the numbers right, the United States spent 1/3rd as much money per year on vaccinating US citizens to prevent the reintroduction of smallpox as the total cost of the entire eradication campaign.
    Smallpox caused death and suffering, constituted a burden on health care systems, and reduced human productivity.
  • A few technological innovations (the freeze-dried vaccine and the bifurcated needle) that had been developed without a specific view toward eradication.
  • The disease's transmission was easy to disrupt.
  • Actors had a lot of experience with elimination at the national level.

I don't think that the successful eradication of smallpox stands out as being especially relevant to the question of whether the world's elites will deal with AI well.

Some more detailed notes below.

Notes on smallpox eradication

These are mostly from a document published by the Center for Global Development:

  • The first vaccine was created in 1798, an improved vaccine was created in the 1920's, and vaccines that didn't require cold storage were developed in the 1950's.
  • Smallpox was unusually well suited to eradication:
    (i) It wasn't transmitted by insects or animals
    (ii) It was straightforward to diagnose
    (iii) There was a long time lag between getting infected and becoming infectious
    (iv) The disease was sufficiently debilitating so that infectious people had relatively little contact with others
    (v) The vaccine didn't need to be refrigerated
    (vi) Vaccination prevented infection for 10+ years.
  • As late as 1966, there were 10-15 million cases of smallpox a year, and 1.5-2 million people died per year.
  • The campaign started in 1959.
    The World Health Organization (WHO) didn't make it a high priority.
    There was initially insufficient financial support. The program had trouble establishing a proven track record to bolster support, because case reporting was so low that it was unclear where smallpox prevalence was dropping. The beginning of the campaign coincided with the failure of the malaria eradication campaign, and potential funders had an unfavorable impression of eradication campaigns.
  • In 1964, the WHO set up a Smallpox Eradication Unit, with its own staff and budget, and made smallpox eradication one of its major objectives. The United States began providing more support, and this was a key factor in the program's development.
    The WHO began supplying vaccine injectors, specimen collecting kits and training aids to countries that requested them.
    The eradication campaign began using the bifurcated needle, which had been developed in 1961. This needle was very cheap, reusable, and easy to use.
  • Between 1967 and 1973, progress was very rapid: the number of endemic countries dropped from 31 to 5.
  • There were some issues of countries sliding back into endemicity, but they were quickly resolved by strengthened efforts.
  • Toward the end of the campaign, the focus shifted from national vaccination efforts to actively seeking out cases and containing outbreaks. Tens of thousands of health workers worked in Ethiopia (which was in the midst of a civil war) to stop smallpox transmission.
  • The annual cost of smallpox damage and prevention was about $1.35 billion, and the total cost of the eradication effort was about $300 million.
  • A positive unanticipated consequence of the eradication campaign was the mainstreaming of routine vaccination in the developing world. Between the time the campaign started and 1990, routine vaccination in the developing world increased from 5% to 80%.

------
From: Luke Muehlhauser <>
Date: Wed, Jul 10, 2013 at 6:38 PM
To: Jonah Sinick <>
A few numbers stand out: How could the entire eradication effort cost $300 million, if the Ethiopian effort alone required tens of thousands of health workers, and if the US vaccinations by themselves sometimes cost about $100 million annually?

Luke

------
From: Jonah Sinick <>
Date: Wed, Jul 10, 2013 at 8:14 PM
To: Luke Muehlhauser <>
On the first point: the CDC document says vaccination cost $0.10 per person in endemic areas (in ~1960) and cost $6.50 per person in the United States (in 1968). I don't know why the differential in cost is so high. When I first read it, I assumed that it's because of the differential in cost of labor, but thinking it over again, I find it hard to imagine how that could give rise to such a big differential in cost. I can investigate further if you'd like.

On the second point: the figures are unadjusted for inflation, and $100 in 1973 was worth about $500 today. I believe that people in Ethiopia were living on an amount on the order of $100/year. This is an underestimate for the cost of a health worker, but using it as an input into an order of magnitude calculation gives

(10k people)*(annual wage) = 1 million dollars
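
A minimal sketch of that order-of-magnitude check (the worker count and the ~$100/year wage are the rough assumptions above, not sourced figures):

    # Order-of-magnitude check: local labor was cheap relative to the
    # ~$300 million total cost of the eradication campaign.
    workers = 10_000               # health workers in Ethiopia (order of magnitude)
    annual_wage_usd = 100          # assumed wage; likely an underestimate
    campaign_total_usd = 300_000_000
    labor_cost = workers * annual_wage_usd
    print(f"Estimated annual labor cost: ~${labor_cost:,}")              # ~$1,000,000
    print(f"Share of campaign total: {labor_cost / campaign_total_usd:.2%}")  # ~0.33%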

------
From: Luke Muehlhauser <>
Date: Wed, Jul 10, 2013 at 8:16 PM
To: Jonah Sinick <>
Okay thanks. I think this is probably deep enough into smallpox for our purposes.

Luke

------
From: Jonah Sinick <>
Date: Fri, Jul 19, 2013 at 7:22 PM
To: Luke Muehlhauser <>
Hi Luke,

Responding on the point of risks to critical infrastructure from solar flares:

I found a document on the OECD risk management website about geomagnetic storms.

I think that the negative expected value coming from the risk is sufficiently small that it shouldn't be thought of as a potential global disaster.

There are three historical reference points: a 410 nT/min geomagnetic storm from 2003, a 640 nT/min storm in 1989, and a 1760 nT/min storm from 1859 (pg. 9).

The theoretically derived frequencies of storms of these magnitudes are ~1/10 years, ~1/50 years, and ~1/100k years, respectively (pg. 19).

The estimated costs of storms of the latter two magnitudes are ~$11 billion and ~$3 trillion (pg. 13).

The expected losses coming from storms of the latter two magnitudes are ~$200 million/year and ~$30 million/year, respectively.

Even if one is suspicious of the 1/100k years frequency for the most severe storms and uses a ~1/1k year frequency instead, one only gets a negative expected value of $3 billion/year, which is large enough that it should be addressed, but still a small perturbation to the economy as a whole.
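
As a sanity check, here's a minimal sketch of the expected-loss arithmetic behind those numbers (the costs and frequencies are the report's figures quoted above; the code itself is only illustrative):

    # Expected annual loss = storm cost x annual frequency.
    scenarios = {
        "1989-scale storm (~$11B, ~1/50 yr)": (11e9, 1 / 50),
        "1859-scale storm (~$3T, ~1/100k yr)": (3e12, 1 / 100_000),
        "1859-scale storm, pessimistic ~1/1k yr": (3e12, 1 / 1_000),
    }
    for name, (cost, freq) in scenarios.items():
        print(f"{name}: ~${cost * freq / 1e6:,.0f} million/year expected loss")
    # -> ~$220, ~$30, and ~$3,000 million/year, respectively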

I think that the problem is too small to be in the same reference class as AI risk. As for how well it's being addressed, the report discusses measures that are in place and room for improvement on pages 40-47, and the picture is mixed.

Jonah

------
From: Jonah Sinick <>
Date: Mon, Jul 22, 2013 at 3:08 PM
To: Luke Muehlhauser <>
Responding on climate change:

The book The Discovery of Global Warming has an associated website here. It offers a summary of the history of climate change science here.

Some points:

  • People started to see climate change as a potential problem in the early 1970s. However, there was ambiguity as to whether human activity was systematically causing warming (because of carbon emissions) or cooling (because of smog particles).
  • Some scientists thought that there was systematic anthropogenic warming in the late 1970s, but they had relatively little visibility.
  • The first IPCC report was published in 1990, and stated that there's substantial anthropogenic global warming coming from greenhouse gases.
  • In the following years there was a lot of research, and by 2001 there was a very strong consensus.

One would have to take a deep dive into

(i) How strong the consensus was as a function of time during the 1990s.

(ii) The history of views on the size and sign of the humanitarian impact of climate change.

in order to develop a clear sense of how quick society has been to address expected damage from climate change.

For the purposes of the present project, I think that what would be most interesting is to investigate the history of the discovery of and response to the tail risk. My intuition is that rational historical estimates of the median-case negative humanitarian value are too low for median-case global warming risk to be in the same reference class as AI risk, but that the negative expected value coming from the tail might be large enough to put it in the same reference class.

So to start, I'm going to look at Posner's book (which has other relevant information). After doing so, I might return to the expected value other than that coming from the extreme tail.

------
From: Luke Muehlhauser <>
Date: Mon, Jul 22, 2013 at 3:29 PM
To: Jonah Sinick <>
Ok. Please also send your thoughts on the negative EV of geomagnetic storms to some of the people who think it's a really big deal, to see whether they have a rebuttal.

------
From: Jonah Sinick <>
Date: Tue, Jul 23, 2013 at 12:29 PM
To: Luke Muehlhauser <>

Responding on the point of cyberwarfare:

  • Last year, the Department of Defense published a report titled Resilient Military Systems and the Advanced Cyber Threat, which says:
    The Task Force believes that the integrated impact of a cyber attack has the potential of existential consequence. While the manifestation of a nuclear and cyber attack are very different, in the end, the existential impact to the United States is the same.
  • In 2011, Admiral Michael Mullen (who was the highest-ranking US military officer at the time) said:
    The single biggest existential threat that’s out there, I think, is cyber…Cyber actually, more than theoretically, can attack our infrastructure, our financial systems…There are countries who are very good at it. More than anything else that is the long-term threat that really keeps me awake.
  • In Chapter 4 of Cyber War: The Next Threat to National Security and What to Do About It, former National Coordinator for Security, Infrastructure Protection, and Counter-terrorism Richard Clarke wrote:
    Why had Clinton, Bush, and then Obama failed to deal successfully with the problem posed by America’s private-sector vulnerability to cyber war?
    apparently suggesting that the problem is neglected.
  • Some people have suggested that the military and its contractors are motivated to overstate the risks from cyberwarfare in order to justify large cybersecurity budgets, for example, here.
  • Jason Healey (who's the director of an initiative at a national security think tank) wrote an article in US News responding to the Department of Defense report and Mullen, saying that the risk has been overstated, and is far lower than that of nuclear war, but that he anticipates that America's electric grid will be integrated with the internet in the future, and that this could make the risk of cyber attacks much worse.
  • The Organisation for Economic Co-operation and Development (OECD) published a report saying:
    The authors have concluded that very few single cyber-related events have the capacity to cause a global shock
    Catastrophic single cyber-related events could include: successful attack on one of the underlying technical protocols upon which the Internet depends, such as the Border Gateway Protocol which determines routing between Internet Service Providers and a very large-scale solar flare which physically destroys key communications components such as satellites, cellular base stations and switches.
    For the remainder of likely breaches of cybersecurity such as malware, distributed denial of service, espionage, and the actions of criminals, recreational hackers and hacktivists, most events will be both relatively localised and short-term in impact.
  • The OECD report also briefly mentions electromagnetic pulses as a cybersecurity risk:
    An electro-magnetic pulse (EMP) is a burst of high-energy radiation sufficiently strong to create a powerful voltage surge that would destroy significant number of computer chips, rendering the machines dependent on them useless. It is one of the few forms of remote cyber attack that causes direct permanent damage. The best-known trigger for EMP is with a high-altitude nuclear explosion and was first noticed in detail in 1962 during the Starfish Prime nuclear tests in the Pacific. Studies have investigated the possible effects on the United States power grid. (Oak Ridge National Laboratory, 2010).
    I've come across the claim that a nuclear weapon detonated at high altitude over America could create an EMP.
    I'll investigate the issue of nuclear weapons causing EMPs separately.

I'm not sure how to proceed with the investigation of cyberwarfare: different people have very different accounts of how big a threat it is. Maybe I should try contacting people who have subject-matter knowledge?