standing to BINA48.

Now, as Judge Dutton and I have disagreed on the ultimate outcome, we have agreed that this issue should be certified to the appellate court for its ultimate decision.

DR. ROTHBLATT: Thank you very, very much, Judge Silverman. And thank you, Judge Dutton. And we will now have a 20-minute break upstairs. Thank you very much, BINA48.

DR. ROTHBLATT: All righty. We are moving into sort of the penultimate section of our program, and we really appreciate everybody's alertness in coming up and down the stairs. We are all getting our StairMaster practice in today. Next I'd like to introduce Peter Voss, the founder and chief executive of Adaptive AI, Inc. Peter is one of the world's greatest thinkers and conceptualizers of artificial general intelligence. And we are extremely fortunate to have you with us today, Peter. Thank you so much for coming here.

He will be speaking to us on the subject of the legal and moral complexities of artificial intelligence and of artificial general intelligence.

Peter.

PETER VOSS: Thank you, Martine. Thank you very much for inviting me and giving me the opportunity to talk about this.

As you'll see from my presentation, I actually believe that the issues are not only extremely important and weighty, but also, I will argue, that they are much more urgent and much more imminent than many people think.

I'm going to have to run through things very, very quickly while making a number of very strong, controversial statements that I will not have the time to support. Obviously, I'll give some references at the end, and I'll be happy to discuss and expand on them, but I do want to cover quite a lot of ground.

Roughly, I'm going to be talking about what AGI, artificial general intelligence, is, how it differs from traditional AI, or what people normally understand AI to be, and the key reasons for that difference. This is important. Then I want to address some of the key uncertainties and the key issues that we need to think about. I'd like to look at whether AGI should be seen as something that will save us from various threats that face humanity, or whether AGI is in fact a danger to us. I'll then talk about the moral implications of AGI, and briefly about legal issues.

First of all, what is AGI, artificial general intelligence? It's kind of a forgotten science, or forgotten technology. Originally it was really all about human-level intelligence: what the man in the street would think of when you talk about AI, say in the movie AI, basically a machine that has intelligence like a human. But AI research itself actually doesn't deal with that; only a very small subsection of it deals with that kind of real AI. And one of the differences is that what is really important in intelligence is the ability to acquire knowledge and skills. It's something dynamic, something ongoing, akin to children learning. It's not having knowledge per se; we have dictionaries that contain a lot of knowledge, but they aren't intelligent. So it's not a database of a lot of knowledge, but being able to acquire new knowledge. That's the key to intelligence, to real intelligence. So it's acquisition via learning, whereas in conventional AI the knowledge and skills are programmed in. In AGI they are acquired through learning rather than programming.

This entails using abstraction and context. “Abstraction” meaning the ability to generalize: we learn our lessons once or twice and then we generalize, and we can apply them in very different situations. We also learn that things are contextual. If somebody gives you a rule that you should never hurt another person, we know that that's within the context of not being attacked.

Conventional AI is very poor at generalizing; it's usually written for a specific domain. It's domain-specific, rule-based, and concrete. That's why traditional computer systems, the ones we are using now, also tend to be very stupid, very brittle. You know, you sort of think, oh, the program should know what to do here, or at least it shouldn't just fall over; but it simply doesn't know what it's doing. So that's the difference between general ability, being able to learn any kind of task potentially, again, like children can, and being programmed to do one very specific task and being very concrete-bound.

The other thing is ongoing, cumulative, adaptive, grounded, and self-directed learning. It's a mouthful, but, again, what it boils down to is the way children or even animals learn: through interaction with the environment, you learn your lessons, and as you go along you become smarter, you become better as time goes on. So you learn from experience. That's what AGI is. You can see it's very different from the kind of programs we are actually using, and very few people are actually working on AGI. There are a whole bunch of reasons for that, but one of them is that the field of AI was really overly ambitious 50 years ago. They thought they could crack this in five or 10 years, and the field made promises that it hasn't been able to live up to. And so now it's basically a swear word, and very few people will even touch the subject. But the implications of AGI are that you have human-level learning and understanding: machines that learn adaptively and contextually.

What also follows from this, and this is a controversial point, I'll just say it: they will be self-aware. They will have a self-concept. They'll improve. And at a certain point you reach what in developmental psychology and education is called “ready to learn”. This is a certain level of competence that they achieve, the ability to learn, and the background knowledge, that will allow them to hit the books and learn by themselves. So that's kind of a threshold. Once you've reached that threshold, the system will be able to improve itself. The stronger version of that is “Seed AI”.

What I mean here is that at some point the program will be smart enough to become a programmer, to become an AI psychologist, to understand its own workings and to be able to improve itself. The same way that we can: you know, as we grow up, we learn more about ourselves, we learn how to improve ourselves. Except we don't have the blueprints to our design. An AGI will have the blueprints to its design, and it will likely be a damned competent programmer, or will become a very good programmer very soon. So you reach that threshold, and then it will improve dramatically.

Now AGI, once we have that level of capability, will also be able to augment our own abilities, but it will be very difficult to actually integrate it with our wetware. That's a difficult problem to work on, never mind FDA approval and so on; you know, that's going to take a lot more effort and is going to come later. So, among the key questions that arise from human-level AI, AGI: how soon will this happen?

I'm telling you that the pieces of the puzzle are out there. No fundamental technology still needs to be invented -- a strong statement. And I'm convinced that this will happen in less than 10 years. In fact, our own company is working on it, and our own projections and plans are quite a bit less than that; they're more like a three-to-six-year time frame.

How powerful will it be? Are there hard limits to intelligence? There may be hard limits to intelligence at some level; we don't really know. But it will be very powerful. It will be substantially more capable than humans in pure cognitive tasks, reasoning tasks, and problem-solving tasks. So it will be very, very powerful.

Will there be a hard take-off? That's the scenario where, once you reach that basic threshold, that ready-to-learn stage, the Seed AI stage, the system self-improves so much that, some people speculate, within 24 hours, you know, the singularity will happen. That's, I guess, the one extreme of a hard take-off. Other people believe it will take 20, 30, 40, 50 years or something for AIs to develop and become smarter and smarter.

My own position is that it will be a fast take-off: we are talking months rather than years, certainly not tens of years. There will be certain practical limits on how fast, you know, the machine can be improved, hardware can be implemented and improved, and systems can be redesigned. But it will essentially be a very short period in terms of the chance for society to adapt to it and embrace it; certainly much faster, I believe, than the legal system can move, you know, or than society as a whole can adapt.

Well, can we put the genie back in the bottle? The answer is no, we can't. There is already too much knowledge out there; we know too much about intelligence, and AI, and related things. It's just a question of when it's going to happen. It's not something you could legislate away; you couldn't prevent it even if you wanted to. It's going to happen. There are too many people all over the world who have access to the essential information, and that information is going to grow.

Next question: will it have a mind or an agenda of its own? That's a bit more of a complicated question, because it depends exactly what you mean by that. Will it have a mind of its own? Yes, in some very important sense, I believe so. Will it have an agenda of doing something with its life? I believe the answer is essentially no, unless you specifically design it to do so. And there is not a lot of reason that we would want to design machines that have an agenda of their own. We want them to do things for us; we want them to create value for us. But it's a big question: is there some scenario, what are the scenarios, in which it could have a mind of its own? I'll talk a little bit more about that under the moral issues. And then there's the question I've already touched on of first integrating AGIs into humans, to sort of soften the blow and make us more comfortable. The answer is, no, we can't. It's much, much harder for us to upgrade our wetware, to improve humans, than it is to build a stand-alone AI, an AGI.

Now, the two perspectives on AGI: should we welcome it as our savior, or should we be afraid of it? Do we need AGI to save us from ourselves? Nick Bostrom wrote a good article, which I have in my references, analyzing the existential risks: biotechnology, runaway biotech, or biotech in the hands of common criminals or terrorists who could use it; that's scary stuff. Nanotechnology, gray goo; there are a whole lot of dangers. There are, of course, a whole bunch of social risks that we face, that we see every day. There are more and more ways in which single individuals or small groups can inflict a lot of damage on society, and that's scary. And AGI certainly could potentially help us in a number of ways. It could provide tools to prevent disaster; it could protect us directly in some way. It could help by uplifting mankind generally, so there are fewer people who have a grudge or a reason to be unhappy.

And the other thing, which I'll talk a little bit more about, is by making us more moral. Again, this is a controversial statement, but I really believe there is a lot of evidence and reason to believe that AGI will, in fact, improve human morality in a very individual way.

I'll quickly talk about how much of a danger it poses. Now, I don't want to say that this is a solved problem, or that it's a totally clear-cut case, that we really know how much of a danger it is; and we'll certainly keep talking about and expanding on the risks.

One question I would like to ask first, though, is what should we be more scared of: an AGI with a mind of its own, or one without? That's an interesting perspective that often isn't looked at, because if an AGI has a mind of its own, that mind may well be benign, rational, and moral. If it doesn't have a mind of its own and it's purely a tool in the hands of a human, then it is only as good or as moral as the human. So, in fact, I think not having a mind of its own is in a way much scarier.

I believe there is little evidence that AGI by itself can be detrimental to humans unless it's specifically designed to be. Now, the original application may have an impact here; I mean the difference between, say, our company, a2i2, building the first AI, or the military building it. Presumably there is some difference in the psychology of the AI depending on whether its whole purpose and design have been optimized to kill the enemy or to help humans in their day-to-day endeavors. But inherently, unlike what we see in the movies, I don't believe there is an inherent propensity for AGIs to be evil; I think that's just plain wrong. So, as I mentioned before, the power of AGI in the wrong hands, human hands, is a much bigger concern. And a mitigating factor is the positive moral influence it could have: it may just tell people why they maybe should not go ahead with some particular harebrained scheme, because it may not actually have the desired result.

I would now like to touch on the human interaction: how we treat AGIs and how they might treat us. First of all, how should we treat AIs from a moral point of view? I think that to answer that question we need to first find out if they actually even desire life, liberty, and the pursuit of happiness. You know, in the mock trial, of course, we assumed that that was the case. I think it's actually very unlikely that they will in fact want to live, that they will have any such desire. I think those desires come specifically from evolution. From an evolutionary point of view, an artificial intelligence that's designed to service customers over an (800) number or something might be very personable and appear to have human emotions, but I don't believe there is any reason to think that it will actually want to live or will have those human characteristics. But how will we treat AGIs? You know, how are we likely to? That's kind of an interesting question: will they be moral amplifiers, as I would like to call it? Basically, will they make bad people worse and good people better? Will they make us more of what we are, bring out our fears, or bring out the best in us?

This is not something I've explored to a great degree, but I do have a strong sense that inherently they will make us more rational and more moral, because they will help us reason through the choices, the decisions that we make. And often just doing that, just thinking through the implications of something we want to do and seeing what the actual effect is likely to be, will make us more moral and help us make better decisions. Because ultimately the outcome most of us want is for people to be happy and live good lives and so on. Yet many of the short-term decisions that people make, you know, starting wars or whatever, in fact have the opposite result.

How will they act towards us? As I said, rationally. They will understand the consequences of their actions better than humans do, because they will think them through better, and they lack the primitive evolutionary survival instincts that are often detrimental to moral behavior.

Now, I think the mind boggles as to the impact that AGI will have on society. It's very hard for us to know just what the impacts are going to be, but they're going to be enormous. It's going to change mankind's society in very, very profound ways. It will impact all areas of our lives: law, politics, social justice. I can highly recommend “The Truth Machine” by James Halperin, which is an excellent book, and I have it in the references as well. It explores a society where lying isn't normal any more, where people tell the truth because of technology, and he does an excellent job of explaining how that will change society. That's just one perspective.

But one can take other areas. For example, if a whole range of tasks that people are currently doing can be taken over by computers, by AGI, in a much better way, think of what that will lead to: less material poverty and desperation on the positive side; coping with change on the negative side, or the difficult side.

And as I said, I believe AGI will help us move up Maslow's hierarchy, where more people will actually be able to think about how to optimize life rather than just fighting for survival or reacting to their primitive instincts. And it's been well shown that as societies have, you know, grown more affluent and their basic needs are taken care of, people tend to be more benevolent.

Now, as we use AGI, the one thing that becomes clear is that we will rely more and more on the advice of an AGI. So if we have, like, a wise oracle, you know, our personal AGI that gives us advice, it helps us think things through, that