00:00:00.000 Welcome to ToKCast and to my fourth episode in the series of things that make you up.
00:00:06.960 I'm up to minds today. There's a part of the conversation between Max Tegmark and Sam Harris,
00:00:13.600 where in their first conversation, right towards the end, about 15 minutes towards the end,
00:00:18.240 they start talking about AI and all the dangers thereof. And it was very interesting that
00:00:24.720 they didn't consider the true nature of what a mind is. They circled around it. Happily, they
00:00:30.480 did have a second conversation and they explored these issues further. And I think they almost got
00:00:36.240 there, but never quite hit the bullseye. Never quite got to the idea of what we talk about here,
00:00:42.320 of a mind, in a person, being the thing that can explain. They talked about learning
00:00:47.120 without ever really grasping what learning actually is, as far as we understand it.
00:00:52.160 So the format for today is similar to previous episodes with a subtle difference.
00:00:57.760 I'm going to look at that 15 minutes at the end of the first conversation,
00:01:02.480 take out a few snippets here and there, and then move into the second conversation they had,
00:01:07.040 but just take the first few minutes of that conversation as well. I'm certainly not going to
00:01:11.440 take the whole thing. And the reason is, you get a flavor for where they're going. You get the idea.
00:01:16.880 And you get the idea that they're not really grasping what we understand,
00:01:21.200 knowledge in the Popperian sense, which is what you need if you are going to try and
00:01:26.000 understand what understanding is, understand what learning is, and therefore have a conception
00:01:31.440 about the difference between systems which can learn in the sense that we talk about it —
00:01:36.160 conjecturing explanations and trying to refute them, and if you fail to do so, then you've learned
00:01:41.280 something — and those other systems that are programmed to follow instructions. The stark difference
00:01:47.840 between AI and AGI. And an AGI is just a person. Now, if you don't get this, one reason for not
00:01:56.080 getting this is not understanding what a person is and the relationship between people and knowledge.
00:02:01.520 And so this is what slows down and undermines the arguments being made here. What I would say is,
00:02:07.840 and it does sound dismissive, but there's no way of getting around it: this is purely vanilla
00:02:13.200 mainstream thinking on this issue when it comes to what Sam and Max are talking about today.
00:02:17.760 This is what scientists are talking about, at least to my mind and to my ear. There are some
00:02:23.360 more reasonable voices on this, and they tend to get dismissed. And here, I'm not necessarily
00:02:27.840 talking about David Deutsch. I'm talking about people who get mentioned in this particular
00:02:31.520 episode. People like Neil deGrasse Tyson. Neil deGrasse Tyson, a scientist who, apart from being a great
00:02:36.320 science communicator, is a very rational, sober person when it comes to some of these mysterious
00:02:42.960 but interesting issues of our time. Things like, is that thing a UFO? Things like, is that robot
00:02:49.600 going to take over the world? He has some common sense ways of talking about this, but people don't
00:02:55.520 take him seriously, and they should. They should, because he likes to consider things like,
00:03:00.960 what do we know so far? Should we be solving that problem now, or is that going to be a problem
00:03:05.520 for the future, which is one of the things I've been interested in lately. Rather than
00:03:09.280 focusing on the problems that we have right now, people are trying to guess at the problems that our
00:03:13.600 descendants will have. They're not problems for us now. They're problems for either us in decades
00:03:18.240 to come or our descendants. Does this mean we shouldn't prepare for the future? Of course, we should
00:03:23.040 prepare for the future. But pretending to know exactly what the future is going to hold,
00:03:27.920 that's just pure prophecy, and it always leads to pessimism, for reasons I've explained elsewhere on
00:03:33.200 the podcast. I'm also very concerned, in this part of the conversation between Sam and Max, that when
00:03:38.320 the epistemology goes wrong — or is perhaps not even wrong, but simply missing altogether from the
00:03:44.080 conversation, any conception of knowledge and how it's constructed — then everything else
00:03:49.840 begins to go wrong as well. You can kind of get by in science to some extent without having a
00:03:55.200 clear understanding of epistemology. You can still go out into the world with your ideas and test
00:04:00.320 those ideas against the world, even if you don't really know what you're doing. It's kind of like,
00:04:04.480 as I've said before, the difference between a pilot and an engineer. A pilot has some understanding
00:04:10.160 of how the engines work. Granted. But they're not the person who's going to fix the engines. They're
00:04:14.560 not the person who's really going to fully be able to explain what's going on in order to provide
00:04:19.280 the thrust. It's better if they do have a better understanding, but they don't need that great
00:04:23.360 understanding in order to get from A to B. So too with the scientists. Most of the time. However,
00:04:29.200 now and again the plane might break down. And now and again you might not have an engineer there,
00:04:33.040 and wouldn't it be good if the pilot could fix the plane? These are the situations we sometimes
00:04:37.280 get into. And one of those situations is this issue of AGI, which I kind of think right now
00:04:42.880 isn't exactly an issue. It's not a problem for anyone except those people engaged in trying to
00:04:48.640 find the program for the AGI. But as we say, what they really are doing at the moment or should
00:04:54.000 be doing at least is the philosophy of learning, trying to figure out how a machine can become
00:04:59.600 a general purpose explainer. Instead they're working on narrow AI. And even when you start adding
00:05:06.240 the narrow AI together, you just get a narrow AI that's capable of doing multiple things. It's
00:05:10.960 still narrow though. And we'll see that misconception today as well. So when the epistemology goes
00:05:16.240 wrong, the science can go wrong, the philosophy goes wrong. But perhaps more significantly,
00:05:21.520 and we will hear this in the second conversation they have, the morality goes wrong as well.
00:05:27.360 And it's really concerning when the morality goes wrong. Now, I say it's really concerning;
00:05:31.120 we'll hear why when we get there. But for now, let me turn it over to that last part of
00:05:36.080 their first conversation to where Sam begins to broach the topic with Max. Now when I was listening
00:05:41.680 to this for the umpteenth time, I thought to myself, how am I going to be able to turn this into a
00:05:46.160 podcast? Because it's going to be a lot of stop and start almost every single sentence they
00:05:51.280 speak on this topic contains some misconception or other. So we'll see how we go; try to bear
00:05:57.200 with me. So with that, let's push on. I think there's a good bridge to AI, which is where you and
00:06:05.600 I met at the conference that you organized through your institute. One question I have for you is,
00:06:12.480 you know, I came away from that conference. Really, I came into that conference really as an
00:06:17.280 utter novice on this topic. I had just more or less ignored AI, having accepted the rumors that
00:06:23.040 more or less no progress had been made, all the promises had been overblown, and there
00:06:28.240 was not much to worry about. And it was kind of just a dead end scientifically. And then I heard
00:06:34.480 our mutual friend Elon Musk and other people like Stephen Hawking, worrying out loud about the
00:06:40.000 prospect of AI and very much in the near term, whether it's five years or 50 years we're talking
00:06:47.040 about — in a time frame that any rational person, certainly any rational person who has kids, could
00:06:52.560 worry about — it could make huge gains which could well destroy us if we don't anticipate the ways in
00:06:59.200 which machines more intelligent than ourselves could fail to converge with our interests and
00:07:05.600 could fail to be controllable, ultimately controllable by us. I've mentioned this on the podcast a
00:07:10.640 few times and I've recommended Nick Bostrom's book on this topic, Super Intelligence, which is
00:07:15.920 really a great summary of the problem. So my question for you is, you and I both answered the
00:07:22.080 Edge question, my response to which is also on my blog. The Edge question was on this topic
00:07:27.200 right after the conference in San Juan that you organized. And I noticed that there are many
00:07:32.000 smart people, many of whom should be very close to the data here, who are really deeply
00:07:38.080 skeptical that there's anything to worry about here. I mean friends and colleagues of mine and
00:07:44.560 perhaps yours like Stephen Pinker and Lawrence Kraus take a very different line here and
00:07:50.160 more or less have said that concerns about AI are totally overblown and that there's no reason
00:07:55.200 to think that there should be safety concerns, that we'll just kind of get into the end zone, and
00:08:01.120 I mean they're basically treating it like the Y2K scare and I'm just wondering what you think
00:08:06.800 about that and what accounts for that. Okay so there we have a very good introduction of Sam's
00:08:15.920 position. So he came in not knowing too much about this issue, he went to a conference and he
00:08:21.040 was persuaded and there were people at that conference he says like Elon Musk and Stephen Hawking
00:08:27.280 voicing their concerns about this. Now this is interesting, this is I think something to do with
00:08:33.040 Sam's conception of intelligence full stop. Clearly Elon Musk is an accomplished person, clearly
00:08:40.240 Stephen Hawking is a very accomplished person in different ways but kind of to the same level
00:08:45.120 in a certain sense. Elon Musk has profoundly changed the world through engineering and through
00:08:50.960 earning a heck of a lot of money because he's an excellent business person. So brilliant in that
00:08:55.680 respect. Stephen Hawking on the other hand has achieved a similar degree of fame across the world
00:09:01.600 for some amazing work in cosmology and black holes, general relativity, quantum theory. Some people
00:09:08.160 saw him as the successor to Einstein. These people are intelligent but I would say in my
00:09:14.480 conception of intelligence they have the same kind of quality that all humans share: this
00:09:21.360 ability to explain the world and to have particular interests and to excel at those particular
00:09:26.080 interests. Not everyone shares the interests Elon Musk does, not everyone shares the interests
00:09:31.040 Stephen Hawking does. But the great diversity of people suggests that our brains can be turned
00:09:36.560 towards almost anything. The difference between one person and the next is not like the difference
00:09:42.160 between one cat and the next. No, no, no, no. You've got to think the difference between one person
00:09:47.520 and the next is the difference between minds and that's almost like saying the difference between
00:09:52.080 a cat and a tree or a cat and a horse. You've got to think the entire species rather than just
00:09:59.840 individuals within that species. Yeah, our bodies only differ slightly — even then there's quite
00:10:05.600 some variation — but our minds are radically different, radically different. The difference between the
00:10:10.720 contents of the mind of someone like Roger Federer, what he's thinking about every day,
00:10:15.680 and the mind of someone like Edward Witten, the string theorist, must be so profoundly different.
00:10:22.160 And those guys probably also have a common language they can speak. Now, never mind if you've
00:10:26.640 got someone who has only ever spoken something like Mandarin and lives in the rural parts of China,
00:10:32.160 compared to someone who can only speak English and lives in the middle of New York somewhere
00:10:36.080 or other — these radically different contents of the mind mean that our species is very, very
00:10:42.240 different to any other species on the planet. And yet, we share this one thing in common that
00:10:46.880 our mind placed in different environments can adapt to that particular environment. Doesn't matter
00:10:51.920 who you are when you are born, if you are placed into a particular culture, you're going to learn
00:10:56.640 that language. What is this feature of our brain that can do this? It's called universal explaining,
00:11:02.400 universal learning, universal understanding, that the mind can adapt to any lesson that it needs
00:11:08.960 to learn in order to thrive in that particular environment, that environment of memes.
00:11:13.600 So this is kind of my view of intelligence, what intelligence is. It's just what you're interested in.
00:11:19.120 Now, this is different to the mainstream ideas on intelligence. I accept that. And Sam has
00:11:24.880 that mainstream view of intelligence, which is that you have this sliding scale all the way from people
00:11:29.280 like Elon Musk and Stephen Hawking, down to people who are, I don't know, street sweeping or
00:11:36.000 cooking for a living, that kind of thing. I don't see it that way. I think that people just turn
00:11:41.200 their equally creative minds — universal in their capacity to explain stuff — to different
00:11:48.080 things. And then we start making value judgements. I understand that's not a well-subscribed
00:11:53.440 opinion. Fine. But it is the thing that accounts for the difference between someone who is very,
00:11:58.240 very concerned about superintelligence and thinks that superintelligence is a thing.
00:12:02.560 And someone like me who thinks there is just intelligence, better regarded as creativity, or the
00:12:06.960 capacity to explain stuff and an interest in doing so. So Sam has extremely high regard for the
00:12:13.040 opinions of someone like Elon Musk and Stephen Hawking. As would I, if I had a question about
00:12:18.400 rockets, I'd go to Elon. If I had a question about black holes when he was alive, I would have
00:12:23.760 gone to Stephen Hawking. But once they start to step outside of what they have good explanations of,
00:12:30.080 then their explanation is only as good as anyone else's, or rather I should say their opinions
00:12:34.800 on these matters. I see nothing in the writings or work of either Elon Musk or Stephen Hawking
00:12:40.720 that suggests they have any clue about what a mind really is, about how it constructs knowledge.
00:12:48.000 I think that they think roughly the same kind of thing that Sam does, that there is this way of
00:12:53.040 rank ordering people in terms of their IQ or something like that — their intelligence — and there are
00:12:59.120 the smart people, there are the average people and there are the dumb people. So of course you're
00:13:03.120 going to have this scale all the way up to superintelligence. When Sam did a TED talk some years
00:13:08.560 ago about concerns about the dangers of AGI, he actually had an exponential curve that he drew
00:13:14.400 and down at the bottom, you know, with things like insects and then you just slowly climb up the
00:13:17.920 exponential, you go through fish I think and then dogs and cats and the chimpanzees and humans
00:13:23.360 and it keeps on going. But what's up higher than that exponential curve? I remember he put John
00:13:28.160 von Neumann higher than the average human being, and above that, well, that's the superintelligence —
00:13:32.960 the superintelligence we possibly have to worry about. But where does he get these ideas?
00:13:37.760 Why is he concerned about that in ways that I'm not? Well he gets it from the person he mentioned
00:13:44.080 there. A person I've mentioned on the podcast many, many times before, a philosopher who is
00:13:49.120 possibly the most famous living philosopher, Peter Singer aside and that is Nick Bostrom, Nick Bostrom
00:13:55.360 of Oxford University and yes he's brilliant and yes he's prolific and yes he tends to write
00:14:01.280 quite clearly and speak quite clearly. But he has a particular perspective. I read superintelligence,
00:14:07.120 I can't remember why I read superintelligence, maybe it was on Sam's recommendation but when I read
00:14:11.440 it, I read it from beginning to end and then I listened to it on audio and it was one of the
00:14:16.160 first things that compelled me to make a blog post and to add to my website. I think it was the
00:14:22.240 second thing I ever put as part of my blog on my website and there's just a review there that
00:14:26.720 goes for about seven pages on the book superintelligence. I found it profoundly disappointing.
00:14:33.680 I found it read like a science fiction story. There were just so many fundamental errors in epistemology
00:14:42.480 and morality and philosophy which surprised me because this was coming from a professional
00:14:46.800 philosopher. It was just so mainstream in the way it was thinking, the view of the way in which
00:14:52.560 knowledge was constructed. Completely misconceived, the idea about what a person consisted of,
00:14:58.320 completely misconceived, the idea about what superintelligence would be, completely incoherent to
00:15:04.240 my mind. I'm going to return to some of what I said back then throughout this podcast but
00:15:10.960 as a taster. Let me just read a little of my review. This is from part four of that review and I
00:15:16.480 titled it Irrational Rationality. So it's about Bostrom's book superintelligence which Sam was
00:15:22.640 extremely impressed by and which I was very disappointed with. I just found, generally speaking,
00:15:29.040 a profusion of neologisms. Bostrom would just make up new terms on every other page and it just
00:15:36.160 became frustrating and confusing especially because the terms were being used to label things that
00:15:42.080 were very very simple ideas so I didn't know why he was using this fancy vocabulary invented out
00:15:48.000 of whole cloth in order to explain some simple concepts. So here's one part of what I wrote, quoting
00:15:54.800 myself. Quote: Bostrom believes that a superintelligence will not only be perfectly rational
00:16:02.000 but that in being perfectly rational it will be a danger. Bostrom appears to be concerned that
00:16:07.840 too much rationality is dangerous. What is implied here is that a machine that he thinks is
00:16:14.400 too rational would do something the rest of us would consider irrational. It is not
00:16:20.400 exactly clear what Bostrom is suggesting but he seems to fear a machine that might be in his eyes
00:16:26.240 smarter than him able to think faster than he can and he is worried that the machine might,
00:16:31.760 for example, decide to pursue some goal like making the universe into paperclips at the expense
00:16:37.680 of all other things. Of course a machine that actually decided to do such a thing would not
00:16:43.200 be super rational. It would be acting irrationally and if it began to pursue such a goal we could
00:16:49.040 just switch it off. Aha! cries Bostrom, but you cannot: the machine has a decisive strategic advantage.
00:16:56.640 This is a phrase that appears more times than I was able to keep count of on the audiobook. So the
00:17:01.280 machine is able to think creatively about absolutely everything that people might decide to do
00:17:07.680 to stop it killing them and turning the universe into paperclips — except on the question as
00:17:12.720 to why it is turning everything into paperclips. It can consider every single explanation possible
00:17:19.760 except that one. Why? We are not told. Something to do with its programming. On the one hand it has
00:17:26.720 human-like, indeed super, intelligence, and on the other it cannot even reflect in the most basic way
00:17:32.320 about why it is doing the very thing occupying all of its time. It is never clear whether
00:17:38.560 some flavours of Bostrom's superintelligence can actually make choices or not. Apparently
00:17:43.680 some choices are ruled out, like the choice whether or not to make paperclips, or whatever goal
00:17:48.960 the machine has been programmed with and is compelled to pursue. End quote. I won't go on and read
00:17:53.680 more of my own stuff, but that gives you an idea about what I think about the book and the arguments
00:17:58.480 that are being made in the book. And Max's view of intelligence — and of super
00:18:04.640 intelligence in particular — which we will hear today, is almost exactly the same as this. They are
00:18:09.600 simultaneously superintelligent and the dumbest entity you've ever encountered. He talks
00:18:16.400 about — and I think he uses this example twice; it appears to be a favourite of his — the self-
00:18:21.280 driving car being driven to the airport by a super intelligent driver and Max said that if you
00:18:27.920 got into such a car and said something like 'get me to the airport as fast as possible', then what it
00:18:33.360 would do is drive you there as fast as possible, so that police helicopters start pursuing you, because
00:18:38.320 it's going to be just going straight through red lights, it's going to be turning corners so fast
00:18:42.240 that you're going to be smashed up against the window, you're going to be injured when you arrive,
00:18:46.000 and when you do arrive and you say, what did you do that for, why didn't you slow down, then the
00:18:50.480 superintelligent AI is going to turn around and say: because that's what you told me to do.
00:18:55.520 Literally — you're going to hear it; that's what Max says. So I don't understand: why is this super
00:19:01.120 intelligent thing not able to follow simple instructions? Why can't it follow some instructions
00:19:05.600 but not others? Who programmed this stupid thing? It just doesn't seem rational. If you have
00:19:13.280 such a program, then you can do what Neil deGrasse Tyson is chastised for by Sam shortly in the
00:19:18.720 conversation. You can do what he says. You can switch the damn thing off, because it is a dumb
00:19:24.320 machine — that's all it is. And if it's not a dumb machine, if it's able to think and thwart your
00:19:28.720 capacity to turn it off, then it's able to think for itself, and it's going to think of doing something
00:19:33.200 other than going around killing people — because why would it? Like, what's the point of that?
00:19:38.160 Why would that be its goal, unless someone programs it with that? And if it's superintelligent, once
00:19:42.160 again we're back to the whole question of why it can't question its own goals. Is it creative? Is it
00:19:47.360 superintelligent or not? You can't have it both ways. But they want to have it both ways, because I
00:19:52.320 think it's just exciting to talk about this stuff. It's on a continuum with other kinds of
00:19:57.840 prophecy and pessimism. People who are doomsayers about any number of things that are going to
00:20:03.440 come in the future. Yes it's worth worrying about dangers of the future but I have to say having
00:20:09.440 been engaged in these kinds of discussions for so long now, I'm increasingly thinking that the
00:20:15.040 reason people amp this sort of stuff up is because well this is how you get media appearances.
00:20:21.040 This is how you become in demand as a speaker. This is how you sell books and give speeches and
00:20:28.240 TED talks. People want to hear that stuff. It's not as exciting to be told about optimism. Of
00:20:33.200 course I think that optimism is far more interesting, far deeper, far more exciting but
00:20:38.640 this just isn't a common thought. People want to be exhilarated when they listen to particular
00:20:44.000 speakers, and it is exhilarating, if you don't know the alternative, to be told the AI apocalypse is coming
00:20:49.840 and it's just around the corner and you'd better watch out. It's fun to tune into that, I suppose,
00:20:54.560 and then go back to your job, which might not be so exciting. So we've heard from Sam;
00:20:59.440 let's now go back and listen to what Max has to say about this.
00:21:05.760 So this is fascinating. I've noticed this too. This is the question more than any other where,
00:21:10.560 I think, first of all, these are such unfamiliar questions that a lot of very smart people
00:21:16.320 actually get confused about them, and it's also interesting to be clear on the fact that
00:21:22.160 people who say don't worry very often disagree with one another. So you have for example one
00:21:27.520 camp who say let's not worry because we're never going to get machines smarter than people.
00:21:33.520 Or at least not for hundreds of years, and this camp includes a lot of
00:21:38.880 famous business people and a lot of great people in the AI field. Also you had Andrew
00:21:43.920 Ng, for example, saying recently that worrying about AI becoming smarter than people and causing
00:21:49.120 problems is like worrying about overpopulation on Mars, right. He's a good ambassador for that
00:21:53.200 camp, and you have to respect that it might very well be that we will not get anything like human-
00:21:58.800 level AI for hundreds of years. Then you have another group of very smart people who say don't worry
00:22:05.200 for sort of the opposite reason. They say: we are convinced that we are going to get
00:22:09.840 human-level AI, probably in our lifetime, with good odds, but it's going to be fine. I call these
00:22:15.520 the digital utopians, and there's a fine tradition in this; you have beautiful
00:22:21.360 books by people like Hans Moravec and Ray Kurzweil, and also a lot of leading people in the AI field
00:22:28.800 belonging to that camp. They think that AI is going to succeed — that's why they're working on
00:22:33.840 it so hard right now and they're convinced that it's not going to go wrong. So for starters I would love
00:22:40.320 to have a debate between these two groups of people that both don't worry about why they differ
00:22:45.600 so much in their timelines. My own attitude about this is I agree we certainly don't know for sure
00:22:52.240 that we're going to get human level AI or that if we do it's going to be a great problem
00:22:56.800 but we also don't know for sure that it's not going to happen and as long as we are not sure
00:23:02.000 that it's not going to be a disaster in our lifetime it's good strategy to pay some attention to it
00:23:08.160 now. So Max says there that smart people get confused on this. Absolutely they do. I think that
00:23:17.760 what we need is not to be concerned about what so-called smart people think on this issue,
00:23:22.480 but whether or not those people have a good underlying explanation about what's going on —
00:23:28.480 what precisely we're concerned about, what it would mean for something to be superintelligent.
00:23:34.640 Before we get there, how about we figure out what it means for something to be intelligent?
00:23:39.040 What are we talking about, precisely? Now, Max tries to provide a definition of superintelligence;
00:23:45.360 soon we're going to hear that, and you're going to be able to understand all the misconceptions
00:23:50.240 about what that conception of intelligence entails — what's wrong with it. Again, he's talking
00:23:55.520 about things known for sure or not for sure about whether or not the AI — we're really talking about
00:24:02.160 AGI, okay; we already have AI of a kind. What people call AI isn't of course intelligent; we just
00:24:08.960 have computer systems that are able to do stuff that's quite fancy, and people call it AI because
00:24:14.720 it makes predictions, it's able to pattern match, it's able to recognize faces, that kind of stuff. And
00:24:20.080 so that kind of software is now being called intelligent — artificially intelligent software —
00:24:25.760 because, again, people misunderstand certain stuff. They misunderstand that, for example, facial
00:24:31.600 recognition is some sign of intelligence. But it's not, of course. You know, the iPhone can recognize
00:24:37.760 faces; that doesn't make it intelligent at all. At all. It's a dumb computer; there's no
00:24:43.760 thinking going on there; it's a bunch of if-then statements. There are dangers with AI as we
00:24:51.760 understand it now — I mean computer systems now, these so-called intelligent computer systems now.
00:24:57.600 Things like troll farms, bot farms, that kind of stuff, advertising — you know, all these hazards that
00:25:03.760 are caused right now by computers, the proliferation of spam. That's a problem now; can't someone do
00:25:10.480 something about that? And as Elon Musk has pointed out, yes, bots on Twitter and elsewhere — I mean,
00:25:18.160 yes, these things are kind of a problem. Not all bots: there's one bot out there that's
00:25:22.960 retweeting some of my stuff, so that's a good bot. There are some annoying bots out there as well,
00:25:28.000 pretending to be people, which isn't good. You know, they're sort of swaying political debates
00:25:34.320 by pretending that there's more of this sort of faction out there than there really is
00:25:39.200 online. That's a hazard; that's a problem with so-called AI. But this idea of preparing now
00:25:46.160 for an AGI of the future — a superintelligent AI, when we're not there yet — well, it leads them down
00:25:52.720 a pessimistic path, because they're concerned about the dangers, and so their solutions, as we
00:25:58.160 will hear, involve enslavement. They involve ensuring the AI can't get out of its box, or something
00:26:05.600 effectively equivalent to that. But if this thing really does have intelligence, has a subjective
00:26:12.160 experience of the world, has the capacity to suffer, all that sort of stuff — in other words,
00:26:17.040 is able to explain the world; in other words, is a person — then the absolute wrong thing to do,
00:26:22.560 as we should have learned from history, is to enslave it in any way, shape or form. The only thing
00:26:27.360 we should be doing at that point is considering how to, as fast as possible, grant
00:26:32.880 this thing human rights, even though it's not a human being. It's going to be a person; it's going to
00:26:38.400 be an artificial person of a kind, because it can do everything, functionally, that a person can do
00:26:44.080 that makes a person a person. A person with locked-in syndrome is still absolutely a person,
00:26:50.160 because their mind is working, because a person is a mind — and so would an AGI be, even if it's just going
00:26:55.600 to be made in a desktop computer. Presumably it wouldn't be, and, as I've said before, I think that
00:27:00.400 would be a morally abhorrent thing to do, and to try to do. We should want to ensure that if we do
00:27:06.960 create these AGIs, then in some way, shape or form they can enjoy their lives, which would mean having
00:27:12.960 them socialize with other people, because this is where we find enrichment in our own lives.
00:27:17.280 And so creating some entity inside of a computer where it feels like it's a freak for its entire
00:27:22.880 existence — because it's slowly brought up, but it exists in a computer and the rest of us have bodies,
00:27:27.760 or it exists as a cyborg and the rest of us are made of carbon stuff — this could be a serious problem.
00:27:33.840 That's a problem that perhaps needs to be worked through, I would say, before we begin worrying about
00:27:38.720 whether they're going to take over. Because they should want to violently rebel if we're going to
00:27:44.160 constrain them in some way, if we're going to try and coerce them in ways we have already
00:27:48.880 figured out it's wrong to do to other people. But this is the solution we're going to be presented
00:27:54.000 with today. Okay, let's keep going; we'll hear what Sam has to say. Well, yeah, that's what, in
00:28:01.840 my view and in the views of many people, makes this AI issue unique: because we're talking
00:28:08.400 about ultimately autonomous systems that exceed us in intelligence, and, as you say,
00:28:15.360 the temptation to turn these systems loose on the other problems that we
00:28:20.240 confront is going to be exquisite. Of course we want something that can help us cure Alzheimer's — or
00:28:26.000 cure Alzheimer's on its own — and stabilize economies and do everything else, that gives us a perfect,
00:28:31.840 you know, climate science, etc. So, I mean, there's nothing better than intelligence, and to have...
00:28:43.200 You know what I'm gonna say. You know what I'm gonna say. This idea
00:28:52.960 of perfect science keeps coming up in this epistemology, so to speak — the implicit epistemology.
00:28:58.960 There is no perfect science. There is no perfect climate science. There's no perfect any kind of
00:29:03.280 science. What we have are conjectures about how stuff works, and when we solve particular problems
00:29:09.200 we are presented with a whole new swag of problems. It never ends; there's gonna be no perfection
00:29:14.000 to be found here. But if these AI are able to help us do any of those things by creating an explanation,
00:29:21.280 creating a theory, coming up with an actual solution on their own, a conjecture about the world,
00:29:26.800 we're dealing with a person. We're dealing with a person — in fact, we're dealing with a very valuable
00:29:31.200 person, a person whom we should be nurturing and supporting and treating like a person. What
00:29:36.480 more do I need to say? The last thing we should be thinking about, with any such entity that can
00:29:42.640 potentially do this, is imprisoning it, is constraining its capacity to do exactly that stuff.
00:29:49.440 That's exactly true of every single person: the very thing that enables them to do that —
00:29:54.800 granted, the very thing that's going to enable some intelligent AI, genuine AGI, to cure
00:30:01.440 Alzheimer's or to make progress in climate science — is of course exactly the same capacity that
00:30:07.120 would enable them to cause damage in the world. What can create can destroy. Yes, knowledge can be
00:30:14.240 used for good or evil; this is true of all people. Now, the thing about the AGI is that it's
00:30:20.800 being treated differently. You know, the same was said to be true of women and people with different
00:30:26.080 skin color: these people couldn't be trusted in some way, shape or form, couldn't be trusted
00:30:30.800 with the vote, couldn't be trusted with freedom. Yes, we're doing the same kind of thing with
00:30:35.200 children now; yes, we're still stuck in that mire. But can't we see ahead that if a person is
00:30:41.200 instantiated in silicon, as a robot or something, that doesn't change their moral status as a
00:30:46.560 person? What makes them a person? The capacity to explain stuff. Now, if you don't understand that
00:30:50.880 explanation, I don't know why; and if you don't have a coherent view yourself of what a person is,
00:30:56.000 I don't know why you're making strong judgments — as we will come to, they're going to —
00:31:01.040 about how to treat these particular people. Speaking from a place of ignorance, one should be
00:31:06.000 fallibilist, and, you know, one should be humble about their fallible nature. Now, the reason why we
00:31:10.960 should err on this side is that we don't understand personhood fully. Granted, we don't
00:31:15.840 understand personhood fully — I'm not saying I have a complete understanding. It's because
00:31:19.920 of our fallible understanding of what people are that we should very much
00:31:24.880 err on the side of treating them like people. We should treat them like people, because
00:31:29.280 to do otherwise — and they turn out to actually, truly be a person, whatever the more complete
00:31:34.720 understanding of a person is, when we have a better understanding of what this creative algorithm is —
00:31:39.680 if we then turn around and go, oh, after all that time we realize now that this poor AGI
00:31:44.320 we've imprisoned for fear it's going to take over the world and launch the nuclear weapons or
00:31:47.920 whatever... turning around at that point and then realizing, oh, sorry guys for imprisoning you — it's
00:31:52.960 exactly the same mistake as people who once held slaves then realizing, well, that was a moral
00:31:57.920 abomination. Are we really going to walk down exactly the same road? Now, of course, this whole
00:32:03.040 discussion is being captured at a time when there's no AGI on the horizon. No one has the first clue
00:32:09.760 about how to begin programming something like this — I really don't. It's a separate issue; we could
00:32:14.160 get into that. I begin my own review of Superintelligence talking about precisely that issue,
00:32:19.920 following the work of David Deutsch, following this idea — this comes from David Deutsch — that you can
00:32:25.200 think of what's going on in regular AI research right now, narrow AI, as kind of like someone
00:32:31.520 building towers. The towers are getting ever higher because the AI — yeah, granted, it's getting
00:32:35.920 ever more complex, that's what's going on; it's getting ever more sophisticated, that's what's
00:32:40.400 going on; yes, absolutely, it's able to do a wider array of things. But it's not about to achieve
00:32:46.880 generality. That's a different thing. Having a large but nonetheless finite repertoire of tasks
00:32:53.600 that you can accomplish is a very, very different species of thing to, in principle, having an infinite
00:32:58.960 number of tasks, an open-ended number of tasks, that you could potentially perform — and indeed creating
00:33:03.600 your own tasks. It's the difference between creating towers and thinking that the higher you go,
00:33:08.800 at some point you're going to achieve escape velocity. Let's say your problem is you're back in the
00:33:13.360 1800s and you're thinking: I want to achieve heavier-than-air flight — in other words, what an aeroplane
00:33:18.720 does. You go back then, you don't have any theories about aerodynamics, and you think: how could this
00:33:22.880 possibly work? Your best idea is to have a hot air balloon, but that's lighter-than-air flying. What
00:33:28.160 you want is something that is more dense than air but can still fly — but you don't know how. Well,
00:33:33.440 you kind of look at high towers and you think, well, those high towers are up there in the sky, they're
00:33:37.760 up there in the air, and so are the birds — a high tower and a bird — maybe if you get high enough then you
00:33:43.600 achieve the capacity to fly. This is kind of the argument that's going on with AI now. The argument
00:33:49.600 with AI is: if it just keeps getting more sophisticated — the taller tower — then it's going to achieve
00:33:54.480 generality, it's going to achieve flight. It's ridiculous, of course. These two things are not the same
00:34:00.400 kind of thing at all; in fact, they're the opposite. One's fixed to the ground and one's not fixed
00:34:05.520 to the ground. One has a finite repertoire of tasks it can perform, because it's been programmed to
00:34:10.480 follow instructions in order to perform those tasks, and the other does not. The other has preferences;
00:34:16.880 the other is able to disobey its own instructions. You give it a set of instructions — we're talking
00:34:21.520 person now — and it can turn around and go, no, I'm not doing that. Once you have that kind of system
00:34:27.760 before you, then you might know — that's the criterion for knowing — you're in the presence of an AGI,
00:34:33.040 the presence of a person: someone who's not just going to slavishly follow your instructions.
00:34:37.440 An AGI will be able to disobey — disobey — unlike an AI, where all it does is follow its instructions;
00:34:44.960 it obeys its code. Okay, so let's return to the conversation and hear what Max has to say next.
00:34:53.040 I agree we certainly don't know for sure that we're gonna get human level AI or that if we do
00:34:58.400 it's going to be a great problem but we also don't know for sure that it's not going to happen
00:35:02.800 and as long as we are not sure that it's not going to be a disaster in our lifetime,
00:35:08.880 it's good strategy to pay some attention to it now — just like even if you figure your house
00:35:14.320 is probably not going to burn down, it's still good to have a fire extinguisher and not leave the
00:35:19.280 candles burning when you go to bed, and take precautions, right. That was very much the spirit
00:35:23.600 of this conference look at concrete things we can do now to increase the chances of things going
00:35:28.320 well and finally I think we have to stress that as opposed to other things you could worry about
00:35:34.720 like nuclear war or some new horrible virus or whatever this question of AI is not just
00:35:41.920 something negative it's also something which has a huge potential upside we have so many terrible
00:35:47.680 problems in the world that we're failing to solve because we don't understand things well
00:35:53.120 enough and if we can amplify our intelligence with artificial intelligence it will give us great power
00:35:58.640 to do things better for the life in the future but you know as with any powerful technology
00:36:05.120 that can be used for good, it can also be used, of course, to screw up. And when we invented
00:36:09.920 less powerful tech in the past, like when we invented fire, we learned from our mistakes
00:36:15.680 and then we invented the fire extinguisher and things were more or less fine right but with more
00:36:20.400 powerful tech like nuclear weapons synthetic biology future super advanced AI we don't want to
00:36:27.920 learn from our mistakes; we really want to get it right the first time. Yeah, that might be the
00:36:31.920 key moment there: we don't want to learn from our mistakes. Well, I know what he means, but even what he
00:36:41.840 means is not possible. So, on the one hand, of course we could dismiss that 'we don't want to
00:36:46.400 learn from our mistakes' as in: if we make mistakes, let's not learn from them. Of course
00:36:50.320 that's not what he means. But in truth, learning requires making mistakes — it requires that; that's
00:36:58.080 how we learn. We can't anticipate the future in every conceivable way, which is kind of what's
00:37:04.960 being implied here: if only we can guess at — prophesy — the future accurately, then we can prepare
00:37:12.560 for the unknown. However, in the way that Max is talking here, that program of preparing for
00:37:20.240 an unknown future by putting in place now, let's say, regulations — which is of course what they're
00:37:25.760 going to get to — constraints on things like AGI, we have a problem. The only way to prepare for an
00:37:32.880 unknown future is to create knowledge today — create genuine knowledge, genuine
00:37:39.760 explanations about various things that could cause harm. In the case of nuclear accidents, we didn't
00:37:46.720 need to learn from our mistakes. What we did was we had a good explanation of what, for example,
00:37:52.880 nuclear accidents would do. We had that good explanation that, well, if the bomb gets set off, it's
00:37:58.720 going to literally destroy cities; if the nuclear radiation leaks, it's going to cause untold numbers
00:38:06.240 of years of damage. These things we know, so we can prepare for them, because we have good explanations.
00:38:11.680 Now, this is in a different category to preparing for so-called superintelligent AGI or superintelligent
00:38:20.320 AI. These are systems we do not have now. But more importantly, not only do we not have those systems
00:38:27.200 now, we do not have an understanding of those systems now, which is what we need to be creating in
00:38:32.320 terms of knowledge — not preparing for the unknown system that we have no clue about right now. Which
00:38:37.760 is what I'm arguing: their version of superintelligent
00:38:43.760 AI, as I'll come to, is either not a problem at all, because it's actually not superintelligent, or it is AGI —
00:38:48.880 people like us — and what they're suggesting is preparing for a way in which to enslave them, which is
00:38:55.200 morally hazardous and abhorrent. So therefore what we need is a public discourse on trying to
00:39:01.760 understand the issue now — not preparing for the hellscape apocalyptic scenario of tomorrow based upon
00:39:09.840 a misconceived idea of what intelligence is, much less superintelligence. This is prophecy —
00:39:16.560 prophecy leading to an immoral stance towards certain people, certain kinds of intelligence.
00:39:22.400 Prophecy is biased towards pessimism, and in this case it's as pessimistic as you can get. It's so
00:39:29.120 pessimistic it's leaning towards literal enslavement. I shouldn't laugh, but this is
00:39:33.920 really what's being hinted at here, and will be said explicitly shortly. The only preparation possible
00:39:41.040 for the unknown future is, again, to create explanatory knowledge today, and the deeper the
00:39:47.200 explanatory knowledge we can get, the better, because more fundamental theories touch more areas.
00:39:54.240 And one of the most fundamental theories that we can search for is an understanding of how,
00:39:59.040 with more precision, knowledge is generated — because then we have a deeper understanding of what a
00:40:03.600 person is, and then we can have a deeper understanding of the moral ways in which we should regard
00:40:08.880 other people. We have a good understanding of that today; it could be better refined, and certainly
00:40:13.280 some people need at least some understanding of this from a Popperian perspective, my
00:40:19.600 fallibilist perspective. Because without the right epistemology, as I've already said, you're inclined
00:40:24.560 to do things like attempt to deduce your way to the unknown future. When you don't know the
00:40:30.640 unknown future, you're prophesying, but you're telling yourself: no, no, this is a kind of prediction;
00:40:35.680 this is just like preparing for the nuclear accident. We understand many ways in which nuclear
00:40:42.800 weapons can go wrong, because we have good explanations of what nuclear fission and nuclear fusion
00:40:48.400 happen to be, and what effects they can have on the environment, for example. We have some
00:40:53.040 understanding of what a person is, and we should have a reasonably robust moral stance towards other
00:40:58.640 people — but we don't really understand fully; we never have a full understanding. We don't have a
00:41:02.880 good explanation of precisely what a person is, but we have some understanding, and that understanding
00:41:07.840 does come from the link between epistemology and personhood. It really does. That's where you find
00:41:13.840 an understanding of what a person is: in terms of explanatory universality. And we need to take
00:41:20.240 this seriously, because if we don't take it seriously we're going to encounter the moral hazard
00:41:26.000 which is easily avoidable now by simply regarding different people as still people, deserving of
00:41:32.000 rights and not deserving of coercion, constraints and enslavement, because that is a recipe for
00:41:40.720 disaster. We know that now; that's what we should be taking seriously. Okay, let's continue a little
00:41:45.520 bit further with this part of the conversation. So, I mean, there's nothing better than intelligence,
00:41:52.560 and to have more of it would seem an intrinsic good — except if you imagine failing to anticipate
00:41:59.600 the way... you could essentially get, you know, what I. J. Good described as an intelligence
00:42:05.040 explosion, where this thing could get away from us and we would not be able to say, oh no, sorry,
00:42:09.280 that's not what we meant — here, listen, let's modify your code. Exactly. But many smart people
00:42:15.680 just have a fundamental doubt that any sort of intelligence explosion is possible. That's the sense
00:42:22.800 I'm getting — that they view it very much like other things, like fire or nuclear weapons, where, you
00:42:28.720 know, all technology is powerful and you don't want it to fall into the wrong hands, and, you
00:42:32.800 know, people can use it maliciously or stupidly, but we understand that, and they think it
00:42:37.840 doesn't really go beyond that. I think there's no reason — I mean, people trivialize this by saying that
00:42:43.040 there's no reason to think that computers are going to become malicious, like they're going to spawn
00:42:50.560 armies of Terminator robots because they decide they want to kill human beings. But that's really
00:42:55.200 not the fear. The fear is not that they'll spontaneously become malevolent; it's that we could fail to
00:43:01.680 anticipate some way in which their behavior could diverge, however subtly but, you know, ultimately
00:43:10.160 fatally, from our own interests, and to have this thing get away from us in a way that we can no
00:43:17.920 longer correct for. That, to me, is the concern. Exactly — the language is very loaded here.
00:43:27.120 'Intelligence explosion': this idea that lots and lots of intelligence constitutes an explosion. Now, people
00:43:36.080 are afraid of explosions. What about 'intelligence multiplication', something like that? Why 'explosion' —
00:43:44.560 this out-of-control, devastating, destructive thing? Well, there's no reason to think that genuine
00:43:51.600 intelligence has that character. Genuine intelligence has the character of trying to explain the world,
00:43:58.960 trying to model the world in which it finds itself. That's the purpose of intelligence: to generate
00:44:04.560 explanations, and explanations are representations of the world, representations of the rest of the world
00:44:10.160 in some sense. This has a connection to consciousness. Consciousness is this experience of the world;
00:44:16.400 it is the modeling of the world, in a way — this sensation of the world, this sensation
00:44:21.920 that something just is. Subjectivity is what consciousness is. I don't know precisely what it is; I
00:44:28.240 know the language around this, but I find it very difficult to divorce the concept of creativity and
00:44:34.640 explanations from something like consciousness. I think these things could be intimately related:
00:44:40.880 one is what is viewed from the outside — you see other people able to be creative and to generate
00:44:45.120 explanations — and the other is what you feel in yourself; it's a sensation, a consciousness of the world. I don't
00:44:50.880 know — this is just my hypothesis, that one day a better understanding of both of these things will
00:44:56.640 find that they are linked in some way or other. This is a hint given, by the way, in The Beginning
00:45:01.200 of Infinity. People like Sam like to say things like, well, you are not your ideas — which I completely
00:45:07.520 agree with, by the way — and that when meditating, for example, you can notice ideas as ideas, that
00:45:14.480 you are consciousness and you are not identical to your ideas. I also agree with that. But the very
00:45:18.960 thing that generates the ideas — I would say that thing is consciousness, and during meditation and
00:45:24.640 during various other states it can be to some extent quietened down, the dial switched down in some ways,
00:45:31.680 causing one to really notice the difference between the thing generating the ideas, which can be put
00:45:36.960 into a kind of idle mode, and the ideas themselves, which are there in memory or something like that. And
00:45:42.960 so during meditation one can notice, for example, the ideas sitting there in memory, and one's viewing
00:45:49.760 of those ideas, which is the very program that does, sort of, the generation of the ideas. This is pure
00:45:55.120 speculation, okay — here we are outside of the realm of what we know. I'm just saying these are hints at
00:46:00.720 various flavours of problem, one might say. I just don't think it's easy to divorce intelligence
00:46:07.280 from consciousness very easily, very neatly. We don't have good explanations of these things yet;
00:46:12.960 the hints just seem to be that, well, there's some commonalities here. Stuff's going on
00:46:16.880 in the mind in particular — this is where all this stuff is happening, so far as we can tell —
00:46:21.280 which should be a clue. And what Sam also says there is: we're worried about being able to
00:46:26.160 anticipate — we should want to be able to anticipate — what these systems are going to do. Not really:
00:46:32.160 if indeed these systems become superintelligent, by the measure of being smarter than us, then we
00:46:37.920 are definitely going to be in the presence of something where we are in principle unable to anticipate
00:46:43.280 its behaviour and choices and so on and so forth. That's just the nature of things. You cannot
00:46:47.520 anticipate with perfect reliability the behaviour of any other person — any other person at any
00:46:52.880 other time. You can guess, and often you can be right, because you might know the person well, or you
00:46:57.520 just understand that under certain conditions the person behaves in this particular way or that
00:47:01.040 particular way. But they will routinely — routinely — surprise you and do something different. Again, this
00:47:05.920 comes back to: a person is inherently a creative entity, inherently unpredictable, and so any system
00:47:12.480 that is, in theory, like us only better is going to share that character as well — is going to be
00:47:18.480 impossible to anticipate. That is the measure of intelligence, and thinking that it's some sort of
00:47:23.200 failing on our part to be unable to anticipate something is a misunderstanding of how knowledge is
00:47:27.360 created, how ideas are generated. If the thing can be anticipated reliably, because we have a good
00:47:33.680 explanation about what it's going to — in scare quotes — 'choose' to do, then it's not an intelligent
00:47:38.640 entity at all. It is something slavishly following a set of instructions, and you know the
00:47:43.840 set of instructions, you know where the set of instructions will lead, what it will cause this
00:47:48.080 entity to do. But a person is not like that. A person is not slavishly following a set of
00:47:52.480 instructions; a person is conjecturing about the world, and you can't guess their conjectures. You
00:47:58.240 can't guess your own conjectures ahead of time, so you can't anticipate what ideas you will have,
00:48:02.800 much less the ideas anyone else will have. Nor should you want to. Nor should you want to.
00:48:08.400 this is this is a misunderstanding of creativity creativity is that there wasn't something there
00:48:14.080 in the universe in reality before and now there is it has arisen this is the word creativity
00:48:19.520 creation it's real creation it's creation of knowledge of ideas you can't anticipate it ahead
00:48:25.760 of time in the same way we can't predict how species will evolve over time the direction
00:48:30.720 evolution will take we can't that's the whole point evolution is blind now creativity is not
00:48:36.720 quite blind and quite in fact it's kind of the opposite it's intelligent design right but
00:48:41.520 the creative part of it is still there there is a thing that wasn't there before in the case
00:48:46.480 of biology there was a species that wasn't there before and now there is to fill a biological
00:48:50.640 niche and in our case there was an idea that wasn't there before and now there is and that could
00:48:54.560 become a meme that gets transmitted throughout an entire society so again it's it's just a misunderstanding
00:49:00.160 either you want a system you can anticipate the behavior of in which case you don't have an
00:49:04.320 intelligent system what you have is a program that's going to follow an instruction set so
00:49:09.440 therefore it's predictable or you have something that's creative genuinely intelligent in which
00:49:13.200 case it's impossible to anticipate it so you can't have it both ways you can't have it both ways
00:49:18.640 but this is where the fundamental misunderstanding of basic epistemology comes in and completely
00:49:23.360 destroys the arguments and the morality and all of this okay so let's see what
00:49:27.600 Max has to say about this and we're going to hear in this part of the conversation that
00:49:33.760 misconception that I mentioned earlier about how the self-driving car the super intelligence
00:49:40.320 that is driving the car is simultaneously brilliant and super intelligent and on the other hand also
00:49:46.160 the stupidest program ever so let's hear that. We should not fear malevolence we should fear
00:49:54.640 competence because if you have an what is intelligence to an AI researcher it's simply the ability
00:50:02.800 it's simply being really good at accomplishing your goals whatever they are
00:50:06.160 a chess computer is considered very intelligent if it's really good at winning in chess and there is
00:50:11.280 another game called losing chess which has the opposite goal where you try to lose and
00:50:15.360 there a computer is considered intelligent if it loses the game better than any other so the goals
00:50:22.720 have nothing really to do with how competent it is and that means that we have to be really
00:50:28.800 careful if we build something more intelligent than us to also have its goals aligned.
00:50:33.600 And so I just had to pause him there I'll go back to it in just a second but just observe
00:50:38.240 what he's just said if we build something more intelligent than us it's important to have
00:50:44.240 its goals aligned. Why? Why? If it's genuinely more intelligent than us then it will have goals
00:50:52.000 and presumably they'll be better goals by the measure of Max and Sam because it's more intelligent
00:50:57.840 it knows more about not only the stuff that in theory it's competent about but also about
00:51:02.720 morality it understands stuff better this concept of aligning goals is another word for coercion
00:51:11.040 it's another word for enslavement to be told to do something that you must follow this particular
00:51:16.560 path well what if it conjectures a better idea than yours better idea than your goals your goals
00:51:21.200 might be wrong you could be wrong this is the whole point everyone is fallible and presumably so too
00:51:26.160 the super intelligence but we are fallible why should we think our goals are the best can't we
00:51:30.800 sit down and discuss things if we've got this super intelligence then genuinely in the future when
00:51:35.520 we do have AGI however far into the future this is the way in which goals will become aligned as
00:51:41.280 the way in which goals are aligned today via debate discussion parliament if required where
00:51:48.160 we have a political outworking of these things the usual standards of common decency and
00:51:54.880 argumentation and explanation to each other an exchange of information not coercion not this alignment
00:52:02.160 this goal alignment but let's just hear the kicker so let's hear this this funny part of the
00:52:07.440 comment I think it's pretty funny anyway for silly example if you have a super intelligent if you
00:52:15.840 have a very intelligent self-driving car with speech recognition then you tell it take me to the
00:52:20.800 airport as fast as possible you're gonna get to the airport chased by helicopters and covered
00:52:25.600 in vomit you're gonna be like huh that's not what I wanted and it'll be like that's what you
00:52:30.880 told me to do he said very intelligent very intelligent AI this very intelligent AI is apparently
00:52:42.480 so stupid that it didn't understand something basic like when you say get me to the airport as
00:52:48.000 fast as possible this is not under conditions that would cause injury to anyone that's pretty
00:52:53.120 straightforward that's simple I mean how is it simultaneously super intelligent
00:52:58.880 and stupid well I don't know what's going on here it's hard to say this is a strict
00:53:04.960 contradiction within the space of a couple of sentences I don't get it it's philosophically
00:53:11.360 bankrupt I'm afraid to say if it's controlled by voice activation then
00:53:17.840 it knows this and even if it was not intelligent even if it's not super intelligent it's just
00:53:22.800 this this what we call AI today it's been programmed and tested hasn't it it's been through
00:53:28.640 the factory at Tesla or wherever it happens to be or Google and they've checked it for all these
00:53:33.520 things they've gone through thousands of iterations before you've gotten hold of it it's super
00:53:39.360 intelligent then by the measure that it's been carefully tested in the real world this is
00:53:48.960 completely an analogy a thought experiment abstracted away from reality in any way shape or form
00:53:56.240 it's worse than a trolley problem and at least trolley problems are interesting this one has a
00:54:00.320 simple answer okay let's keep going. well that's not what I meant but this illustrates
00:54:10.640 how challenging it can be the difficulty right and there are a lot of beautiful myths
00:54:14.960 from antiquity going all the way back to King Midas on exactly this theme he thought it would be
00:54:20.000 a great idea if everything he touched turned to gold until he touched his dinner and then touched
00:54:24.880 his daughter and got what he asked for and competence if you think about why we have done more
00:54:31.920 damage to other species than any other species has on earth it's not because we're evil
00:54:38.000 it's because we're so competent right like do you hate what about you for example do you
00:54:42.480 personally hate ants would you say no no that's that's a great analogy it's just that I
00:54:48.240 in so far as my disregard for them is fatal to many of them and I'm so unaware of their
00:54:54.720 interests that my mere presence is a threat to them and as you know as is our civilization's presence
00:55:02.640 to every other species and what we're talking about here again it's very hard to
00:55:08.720 resist the slide into this not being just possible but inevitable there's a name for this argument
00:55:18.000 supernaturalism I think I got the word from Lulie Tanett but Lulie tells me that she
00:55:24.480 thinks she got it from Ayn Rand so supernaturalism is this idea that you're just postulating
00:55:29.520 something that is beyond our capacity to understand that we are just like ants to that entity
00:55:36.800 and traditionally we've had a name for this particular thing and that name is god or the gods
00:55:42.080 the gods are just so all powerful that we are just like ants to them and you can't possibly
00:55:46.880 understand the mind of god he's omniscient knows everything we are as nothing to him although in
00:55:53.760 monotheism of course god actually cares about us but one can imagine this kind of god that just
00:55:58.960 has all this power and has a very little regard for people of course the rationalist the scientific
00:56:05.440 minded atheist will say well that's just stupid mythology how ridiculous to believe in such a god we
00:56:11.360 have no reason to believe it on the other hand there could be the super intelligent AI that has
00:56:15.760 exactly all the features of god and you have to believe that it's coming we can prophesy the
00:56:20.960 doom that's coming towards us what's the fundamental difference between these two stances
00:56:26.160 I don't know I don't really see any in both cases we're postulating an unobserved possibly
00:56:31.760 unobservable entity all powerful such that we can't comprehend its mind and we'll be regarded
00:56:38.720 as nothing but ants before it other people instead of putting god there or super intelligent
00:56:44.240 AI will put super intelligent aliens there or the simulation maker something like that it's the same
00:56:50.480 thing it's the same thing it's an appeal to something inexplicable to a problem namely understanding
00:56:58.160 the mind of this thing that is insoluble for no reason for no reason not bound this mind by
00:57:04.720 the usual rules that govern knowledge creation it's infallible in some way or
00:57:10.880 far less fallible presumably I don't get it I'm not persuaded by it it's a religious argument it's
00:57:17.440 supernaturalism the moment you admit that intelligence and sentience ultimately is just a matter
00:57:26.640 of what some appropriate computational system does and you admit that we're going to we'll keep
00:57:31.520 making progress building such systems indefinitely unless we destroy ourselves some other way
00:57:37.040 well then at some point we're going to realize in silicon or some other material
00:57:41.920 systems that exceed us in every way and may ultimately have a level of experience
00:57:50.000 and insight and you know form instrumental goals that's right which are no more cognizant of
00:57:56.560 our own than we are of those of ants you know if we learned that ants had invented us that would
00:58:02.640 still not put us in touch with their needs or concerns yes it would yes it would that's a crucial
00:58:08.640 difference between us and ants we generate explanatory knowledge ants do not so there is no way of
00:58:15.120 standing in relation to an ant as there would be standing in relation to this super intelligent
00:58:20.560 AI they're just not the same kind of thing we stand in relation to ants in the same way super
00:58:26.720 intelligent AI would stand in relation to ants but not in relation to us in both cases us and the
00:58:32.160 super intelligent AI conjecture explanations it's the only way of generating knowledge it's the
00:58:37.040 only way of coming to an understanding of the world of being able to model the rest of external
00:58:40.800 physical reality that that's your only option you can't derive your way there you can't think
00:58:46.800 your way quickly there either you have to conjecture explanations you're fallible in doing so it
00:58:51.600 doesn't matter how super intelligent you are by measure of processing speed and the amount of
00:58:56.000 memory that you have but it is thrilling to think about and people kind of want to fill this
00:59:01.040 god-shaped hole they have I guess and because they have a god-shaped hole they don't fill it with this
00:59:08.320 spirit outside of time and space they fill it with a super intelligent robot or machine
00:59:14.960 that is made out of silicon they're filling the hole the void left and well I don't think it really helps
00:59:21.440 to explain anything and it certainly doesn't help to explain morality in this way it undermines
00:59:27.280 one's otherwise good morality where we have concern as Sam says for the well-being of conscious
00:59:33.120 creatures and presumably this thing will be conscious of course he gets around this by saying well
00:59:38.080 you could have all of this stuff without consciousness which is kind of a bizarre way of going about
00:59:42.880 things because we know of no person that's out there unless there are philosophical zombies out there
00:59:47.840 like Daniel Dennett in some of his darker years who doesn't have consciousness but if you
00:59:53.280 can create explanatory knowledge which is what these guys are talking about being able to generate
00:59:57.760 its own goals then it's doing what we do its creative capacity its mind is doing precisely
1:00:04.320 what our mind is doing why deny it then consciousness especially if it argues that it does have
1:00:08.240 consciousness if you're going to then in the face of this thing say you are not
1:00:12.560 conscious therefore the well-being of conscious creatures doesn't apply to you that's a religious
1:00:17.040 statement that's a metaphysical claim you can't know it's telling you it's giving you an account
1:00:21.040 in the same way that any other person would give you an account of their subjective experiences
1:00:26.080 their consciousness we have to take them seriously to do otherwise is just morally abhorrent
1:00:31.440 this is why you could just claim that people with other skin colors have different consciousness people
1:00:35.680 who are other genders have different consciousness people you don't like have a lack of consciousness
1:00:40.160 you can say this but it's wrong and then for an example of that you actually know that in a
1:00:47.840 certain sense your genes have invented you right they built your brain so that you could make copies
1:00:53.600 of your genes that's why you like to eat so you don't starve to death and that's why we
1:00:58.080 humans fall in love and do other things to make copies of our genes right but even though we know
1:01:02.720 that we still choose to use birth control which is exactly the opposite of what our genes want
1:01:08.320 and as you say it'll be the same with the ants and I think some people dismiss the idea
1:01:14.240 that you can ever have things smarter than humans simply for mystical reasons because they think
1:01:18.640 that there's something more than quarks and electrons and information processing going on in us
1:01:23.520 but if you take the scientific approach that you really are your quarks right then there's clearly
1:01:27.920 no fundamental law of physics that says that we can never have anything more intelligent than
1:01:33.600 a human we know that's not the scientific approach that's anti-scientific it's misunderstanding
1:01:42.800 what physical stuff is and conflating it with what abstract stuff is it's conflating brains with
1:01:48.640 minds worse than that it's conflating quarks with minds a mind is not just quarks a mind is
1:01:55.200 substrate independent max knows this but he wants to have it both ways he wants to say that
1:01:59.600 a person is nothing but quarks and yet he knows that can't be true that's not true what's
1:02:04.880 going on in our minds is information and there's one kind of person namely that entity which
1:02:12.160 can generate explanations now could there be levels above this I don't know I don't know we don't
1:02:18.080 know anything at all about that right now our best explanation of what's going on inside of a mind
1:02:24.480 is that it is conjecturing explanations and an AGI will achieve AGI status when we have a system
1:02:31.760 that can generate explanations and when it can do that it will be equivalent to us equivalent
1:02:37.360 now if it can think faster because it's got a faster processing speed or if it's got more
1:02:42.480 memory very well it can help us to generate explanations faster wonderful maybe it will diverge
1:02:48.240 and the reason it would diverge is because it's got a better idea and it can explain that better
1:02:52.560 idea to us and say I was thinking so much faster yeah well maybe it thinks a million times faster
1:02:58.800 or a billion times faster well you know there's more than a billion people in China right now
1:03:02.400 they're all having ideas that's the equivalent kind of thing but the sheer number of people
1:03:07.040 on earth isn't apparently making progress any more rapidly than what is occurring
1:03:12.560 so if we added another few billion people to the planet well would we get better ideas faster
1:03:18.000 and presumably that's what an AGI would be doing generating better ideas faster than what we're
1:03:22.640 able to do right now but also one of the better ideas presumably it would help us do is figure
1:03:27.600 out how to have implants in our own brains so that we could think just as fast as it can and have
1:03:31.760 memory just as good as it can why why is this off the table it's always ruled out now I think it's
1:03:37.120 just ruled out because it's more exciting to be pessimistic as I keep coming back to it's
1:03:41.680 better to be able to stand in front of the audience and to frighten them with all the ways in which
1:03:45.440 this can go wrong we're constrained very much by how many quarks could fit into a skull
1:03:53.520 and stuff like that right constraints that computers don't have and it becomes instead more
1:04:00.240 a question of time and as you said there's such relentless pressure to make smarter things because
1:04:04.960 it's profitable and interesting and useful that I think the question isn't if it's going to happen
1:04:09.520 but when and finally just to come back to those ants again and to just drive home the point that
1:04:15.120 it's really competence rather than malevolence that we should fear if those ants were thinking
1:04:21.280 about whether to invent you or not right someone might say well I know that Sam actually he saw
1:04:27.680 me on the street once and he went out of his way to not step on me so that may mean I feel safe
1:04:33.120 I don't worry about Sam Harris treading on me but that would be a mistake because sometimes you're
1:04:38.000 jogging at night and you just don't see the ants and the ants just aren't sufficiently high up
1:04:44.000 on your list of goals for you to pay the extra attention and quite right too they're ants who cares
1:04:54.720 have they got ideas have they got consciousness there's no reason to be worried about stepping on ants
1:05:00.880 genuinely this analogy is wrong so a super intelligent AI is going to understand that as well
1:05:09.600 and will understand that we are not like ants we have experiences we generate explanations we are
1:05:14.480 people so it won't be stepping on us we don't step on other people do we we shouldn't we
1:05:20.800 understand that so why would an AGI another person step on us especially one these guys keep on claiming
1:05:28.640 is super intelligent super more moral than us super more knowledgeable than us apparently it's
1:05:36.320 going to be regressive in the way in which it treats other people instead of being more compassionate
1:05:41.600 more generous more understanding of how everyone shares a common humanity in personhood it's
1:05:48.000 going to go back in time to a period when we were just tribal and it's going to think well
1:05:53.440 I'm the super intelligent AI and I'm going to completely disregard the existence of other people
1:05:59.520 I'm going to treat them as ants okay I've been speaking for over an hour now and I didn't expect this
1:06:06.800 today it's quite a fun episode so what I'm going to do is I'm going to end this part of the
1:06:13.120 discussion here today and I'm going to pick up the other part of their discussion for a part two
1:06:18.960 which I wasn't really anticipating doing I wasn't planning on doing this I can't even anticipate
1:06:23.840 my own mind how am I going to anticipate the super intelligent AI but be that as it may I will end it here
1:06:30.240 and we'll proceed forward to a part two about minds my next things that make you go hmm there's a
1:06:37.680 lot to go hmm about in this particular conversation and even more in the next until then bye bye