00:00:00.000 We choose to go to the moon in this decade and do the other things,
00:00:04.500 not because they are easy, but because they are hard.
00:00:08.500 Because that goal will serve to organize and measure the best of our energies and skills.
00:00:16.000 Because that challenge is one that we're willing to accept,
00:00:20.000 one we are unwilling to postpone, and one we intend to win, and the others too.
00:00:28.000 That was at Cape Canaveral; this is the Kennedy Space Centre.
00:00:32.000 Perhaps no place on Earth has quite the level of optimism that this place has demonstrated over the years.
00:00:40.000 And although at times there have been missteps in the exploration of the cosmos,
00:00:44.000 people have lost their lives, and terrible errors have been made,
00:00:52.000 if we recognise that although problems are inevitable, they are always soluble,
00:00:56.000 then the next step won't merely be the moon or Mars,
00:01:20.000 Welcome to ToKCast and my latest in the series on The Beginning of Infinity.
00:01:24.000 This has to be very close to my favourite chapter in the book.
00:01:28.000 I've only recently arrived home from a visit to the United States.
00:01:34.000 It's been a long time since I've actually been there,
00:01:37.000 but it was positively buzzing just the way I remembered it.
00:01:41.000 It's really optimistic, it's got this aroma of optimism about it in a way
00:01:45.000 that I guess the locals have probably become accustomed to.
00:01:48.000 Things are tall and bright and industrious and diverse,
00:01:55.000 This optimism is etched into its history so deeply.
00:01:58.000 When I say that, I'm comparing it to places
00:02:01.000 that maybe I'm biased in having visited recently:
00:02:04.000 areas of Europe and Asia, and of course Australia.
00:02:11.000 Those surrounds aren't really setting the tone for the brightness
00:02:16.000 that is this chapter, and the optimism that is this chapter.
00:02:22.000 Now, I first started preparing to read through this chapter
00:02:25.000 and to make comments on it around the time that Avengers: Endgame was released,
00:02:31.000 and also around the time the final episodes of Game of Thrones were aired.
00:02:35.000 But actually, I've kind of been more excited about this episode
00:02:38.000 than either of those things because it's chapter 9.
00:02:46.000 but being chapter 9, this places optimism right in the centre of the book.
00:02:50.000 At the end of this chapter, we will have covered exactly one half,
00:02:54.000 in terms of the number of chapters, of The Beginning of Infinity.
00:03:00.000 Now, there have been physicists who have noticed before
00:03:02.000 that whatever is not prohibited by the laws of physics must be possible.
00:03:06.000 But David has developed this into a genuine worldview
00:03:09.000 that has infinite reach into every single domain of inquiry.
00:03:12.000 It doesn't matter the subject, problems are soluble.
00:03:22.000 Actually, in that movie, we hear about the Deutsch proposition.
00:03:25.000 Now, I don't know what the Deutsch proposition is,
00:03:29.000 I'd like to know what the script writers were thinking,
00:03:32.000 but for our age, actually, something more along the lines of Deutschian optimism
00:03:37.000 in chapter 9 could certainly serve as the Deutsch proposition.
00:03:41.000 Maybe something like the couplet: problems are soluble,
00:03:50.000 the other line being that all evils are due to a lack of knowledge.
00:04:00.000 and placing the problem at the heart of epistemology
00:04:10.000 solutions found are physically possible transformations
00:04:13.000 that allow us to overcome the obstacle that we've called a problem.
00:04:16.000 These transformations are possible because they begin as computations:
00:04:20.000 creations in the minds of universal explainers, people.
00:04:23.000 So here we have chapter 9 connected to the beginning of The Beginning of Infinity,
00:04:30.000 and how these explanations are about what is physically possible
00:04:36.000 and what is physically not possible, when it comes to physics, morality, and so forth.
00:04:42.000 and the conditions under which decisions are best made
00:04:44.000 to achieve outcomes that do not entrench error.
00:04:48.000 Optimism is about the connection between epistemology
00:04:51.000 or abstract knowledge creation and physical resources.
00:04:55.000 And we're going to see that in the chapter 'Unsustainable'.
00:05:03.000 And what that takes is solving our actual problems.
00:05:08.000 And to know that a resource even is a resource takes knowledge.
00:05:12.000 To extract the resources and then use them to create more knowledge takes wealth.
00:05:16.000 And so the cycle leading to progress continues.
00:05:23.000 The possibilities that lie in the future are infinite.
00:05:29.000 this includes not only the openness of the future,
00:05:32.000 but also the fact that all of us contribute to it by everything we do.
00:05:37.000 We are all responsible for what the future holds in store.
00:05:52.000 Martin Rees suspects that our civilization was lucky to survive the 20th century.
00:05:57.000 For throughout the Cold War, there was always a possibility of nuclear catastrophe.
00:06:08.000 But in Rees's book, Our Final Century, published in 2003, he estimated
00:06:14.000 that civilization now had a 50% chance of surviving the 21st century.
00:06:21.000 I'd suggest that this chapter could be read alongside the discussion
00:06:24.000 that Rees and Deutsch had at the Royal Society
00:06:26.000 for the Encouragement of Arts, Manufactures and Commerce, the RSA.
00:06:31.000 What was illuminating in that discussion were the questions
00:06:34.000 that were asked, as well as the discussion between Rees and Deutsch.
00:06:38.000 So I wouldn't say that pessimism is merely connected to
00:06:41.000 or even intimately tied up with bad ideas like authoritarianism,
00:06:49.000 It's not merely intimately connected with these things.
00:06:57.000 It's a more fundamental mistake than those others.
00:07:07.000 It is motivating these other bad ideas as well.
00:07:13.000 not just about humans, about knowledge, about technology.
00:07:23.000 This is really part of the cultural discussion at the moment
00:07:27.000 about how certain people in Silicon Valley have just so much money
00:07:31.000 and so much power that really we need to start thinking about doing something about it,
00:07:36.000 because otherwise these people are going to have too much power.
00:07:42.000 And so the concern there is that technology breeds inequality.
00:07:48.000 The people in these discussions don't debate whether inequality is bad.
00:07:51.000 That's just assumed; we don't go any deeper,
00:07:55.000 because the discussion rarely moves beyond the idea that inequality is bad.
00:08:02.000 That's an assumption, that's a premise we begin with.
00:08:04.000 And of course, if you begin with that assumption
00:08:06.000 then you're led to all sorts of weird conclusions.
00:08:09.000 Rather than thinking that inequality is actually a good thing
00:08:14.000 that it is a sign that people are pursuing their own interests
00:08:22.000 So when people think they've seen something like
00:08:25.000 inequality being magnified or amplified by something like technology,
00:08:30.000 then they call for things like redistribution, which is a call for force.
00:08:35.000 Martin Rees in that discussion explicitly calls for redistribution.
00:08:46.000 There was one question asked; well, it's actually more of a very long statement.
00:08:50.000 But I thought it contained a very important insight
00:08:53.000 and she observed that far from technology increasing inequality,
00:09:11.000 And yes, if you provide value to billions of people around the world,
00:09:15.000 it's not surprising that you become a billionaire.
00:09:23.000 The reason that people have smartphones in the third world now
00:09:28.000 is because they have sufficient wealth to purchase those smartphones,
00:09:34.000 and they value them more than what they would otherwise have done with that money.
00:09:38.000 This is a glorious thing that really we should be praising.
00:09:41.000 But this is another fracture point between optimists and pessimists.
00:10:01.000 What do you see when you look at our situation with technology?
00:10:04.000 When you stand back and you assess what's going on?
00:10:09.000 Some people are getting too powerful and too wealthy.
00:10:12.000 These technology companies, Apple among them, are getting too powerful.
00:10:24.000 Are you focused on the few people at the top of these companies who happen to have a lot of wealth?
00:10:29.000 Or are you focused on the great good that has been done
00:10:33.000 by these companies for so many people around the world?
00:10:36.000 And that if you look ahead, there's great openness.
00:10:43.000 Do you imagine these Machiavellian, Mr. Burns types, wringing their hands,
00:10:48.000 thinking about all the ways in which they can use their power?
00:10:53.000 Indeed, we've seen recently that there are all sorts of reasons
00:10:56.000 to disagree with the politics of the people in Silicon Valley.
00:10:59.000 But the way in which to combat bad ideas is with good ideas,
00:11:04.000 not with force. Optimism says that we can create knowledge to solve our problems.
00:11:11.000 And so those evils are simply bad political ideas,
00:11:14.000 which many of the people at the top of Silicon Valley subscribe to.
00:11:17.000 And do you think, when you see massive amounts of wealth created,
00:11:21.000 created mind you, not taken from somewhere else,
00:11:29.000 and then sold to billions of people around the world
00:11:31.000 because those billions of people see it as such a benefit,
00:11:39.000 that wealth concentrated in the hands of a few is dangerous?
00:11:43.000 Or do you think: what a wonderful thing that these companies
00:11:47.000 can now go ahead and make even better products in the future?
00:11:50.000 Because if they don't, they're going to go the way
00:11:53.000 of so many companies prior to them, which go out of business.
00:11:56.000 I think the RSA discussion on optimism between Deutsch and Rees
00:11:59.000 is a really entertaining and informative discussion;
00:12:05.000 it's a wonderful adjunct to add to your understanding of this chapter.
00:12:10.000 So David has just said of Rees that Rees calculated
00:12:14.000 there was only a 50% chance of surviving the 21st century,
00:12:22.000 because of the danger that newly created knowledge would have catastrophic consequences.
00:12:26.000 For example, he thought it likely that civilization-destroying weapons
00:12:32.000 would soon become so easy to make that terrorist organizations could acquire them.
00:12:42.000 Accidents, such as the escape of genetically modified microorganisms,
00:12:54.000 could in the long run be even more threatening, he wrote.
00:13:01.000 For instance, it has been suggested that elementary particle
00:13:04.000 accelerators that briefly create extreme conditions
00:13:10.000 might destabilize the very vacuum of space and destroy our entire universe.
00:13:13.000 Reese pointed out that for his conclusion to hold,
00:13:16.000 it's not necessary for any one of those catastrophes to be at all
00:13:19.000 probable, because we need to be unlucky only once,
00:13:23.000 and we incur the risk afresh every time progress is made.
00:13:28.000 He compared this with playing Russian roulette.
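A quick aside to spell out the arithmetic of that comparison (my own sketch, not Rees's; the fixed per-round risk p is an assumption built into the metaphor): if every episode of progress independently carried the same probability p of catastrophe, then the chance of survival would be

$P(\text{survive } n \text{ rounds}) = (1 - p)^n \to 0$ as $n \to \infty$, for any fixed $p > 0$.

As the passage David reads out shortly makes clear, the disanalogy is that for civilization there is no fixed p: unlike the roulette player, we can create knowledge that changes the odds.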
00:13:32.000 In this chapter, David concentrates on two great public intellectuals
00:13:50.000 whom many other public intellectuals today turn to.
00:13:54.000 But really, there is a smorgasbord of public intellectuals who are pessimists.
00:14:11.000 Perhaps my favourite of all of the great pessimists is Nick Bostrom.
00:14:21.000 Now, Nick Bostrom doesn't use Russian roulette as his metaphor.
00:14:30.000 Instead, he compares human creativity to pulling a ball out of an urn.
00:14:34.000 And so far, we've been pulling useful balls out of the urn.
00:14:38.000 And sometimes the useful ball that comes out of the urn
00:14:44.000 has harms mixed in with its benefits. But what he's very worried about is the black ball.
00:14:47.000 The black ball that might be pulled out of the urn is a technology that is altogether bad,
00:14:57.000 and it will destroy the civilization which pulls it out of the urn.
00:15:03.000 you can just Google him and go to his own website.
00:15:06.000 He's written many books that concentrate on uber-pessimism:
00:15:14.000 existential risk, the end of the future. He's appeared with Sam Harris,
00:15:22.000 which gives a very great insight into just how deep pessimism can go.
00:15:28.000 One thing, just to speak to what David has mentioned there,
00:15:32.000 all these different things that could go wrong:
00:15:38.000 Bostrom is worried about rather the same things
00:15:41.000 that Rees is worried about, which is what Harris is worried about.
00:15:47.000 For example, in the future, we might have something like a home biological 3D printer
00:15:59.000 in order to fix up your own genetic code when it starts to go wrong.
00:16:06.000 What Harris worried about, and Bostrom agreed, was that with such a technology,
00:16:09.000 where everyone at home can just 3D print their own biological organisms,
00:16:19.000 you only need one person out of the billions that are on Earth
00:16:25.000 to make something that could kill millions or perhaps billions of people.
00:16:28.000 Maybe they could genetically engineer, in their home 3D biological printer,
00:16:33.000 a virus that will be as easily spread as the common cold,
00:16:36.000 but more virulent than any virus we've ever encountered before,
00:16:40.000 a virus that will be more deadly than any virus we've encountered before.
00:16:44.000 And they both agreed that this was a terrible existential risk.
00:16:48.000 What confused me is: why is it that the same 3D printing technology
00:16:54.000 can't also be used to print the cure for that particular virus?
00:16:59.000 If we've advanced to the point that we're printing viruses,
00:17:02.000 then we're also printing the T cells that can attack the viruses.
00:17:08.000 But there is a crucial difference between the human condition and Russian roulette.
00:17:13.000 The probability of winning at Russian roulette
00:17:16.000 is unaffected by anything that the player may think or do.
00:17:22.000 In contrast, the future of civilization depends entirely on what we think and do.
00:17:29.000 If civilization falls, that will not be something that just happens to us.
00:17:34.000 It will be the outcome of choices that people make.
00:17:37.000 If civilization survives, that will be because people succeed in solving the problems of survival.
00:17:48.000 And I want to take a, I won't say a deep dive, but a shallow dive into Bostrom's ideas.
00:17:57.000 If there's anyone, as I think I've hinted at, who can compete with Martin Rees
00:18:02.000 for the title of rationally minded super-pessimist, it will be Nick Bostrom.
00:18:08.000 Bostrom has a number of different theses about how the world is going to end,
00:18:13.000 or civilization might end, or all of humanity will destroy itself.
00:18:20.000 One of his latest papers on this is called 'The Vulnerable World Hypothesis'.
00:18:24.000 Now, when I look at Nick Bostrom's page, as interesting as it is,
00:18:30.000 like his book Superintelligence, which I've written an extensive critique of,
00:18:35.000 I find it to be, and this isn't supposed to be just purely pejorative, a kind of science fiction.
00:18:41.000 It is genuinely the sense that I get in reading some of his stuff.
00:18:50.000 It seemed to me to just ignore reality in so many ways.
00:18:54.000 It certainly ignored the ways in which we know knowledge is created.
00:18:58.000 And so if you ignore certain things about how epistemology actually works,
00:19:03.000 how physics actually works, then yes, some of his conclusions follow.
00:19:09.000 They follow from false epistemology, from a physics that we don't actually operate within.
00:19:15.000 So, to be specific, he is a strong proponent of Bayesianism:
00:19:23.000 the idea that, given the past, you can predict the future.
00:19:30.000 But whatever has happened previously is not guaranteed to happen tomorrow.
00:19:35.000 If you'd like to know more about this, just Google my name, Brett Hall,
00:19:39.000 together with 'induction', and it will bring up an article of mine about induction.
00:19:43.000 Or, of course, go to The Fabric of Reality by David Deutsch.
00:19:50.000 Or go to Popper and any of his works about induction; Objective Knowledge is a good one.
00:19:56.000 Now, because Bostrom is someone who likes to prophesy the future,
00:20:03.000 and likes to use inductive-type reasoning in an attempt to do so,
00:20:12.000 it bears repeating: you cannot validly predict the future given the current state of events.
00:20:19.000 What you can do is to take a good explanation, something like Newton's laws,
00:20:25.000 and predict the evolution of a simple physical system over time.
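As a tiny worked instance of prediction from a good explanation (my illustration, with assumed initial values): Newton's laws tell us that a dropped object accelerates uniformly, so its position follows

$x(t) = x_0 + v_0 t + \tfrac{1}{2} g t^2$,

and assuming $x_0 = 0$, $v_0 = 0$ and $g \approx 9.8\ \mathrm{m/s^2}$, the prediction is that after $t = 2$ seconds the object has fallen $x \approx 19.6$ metres. The prediction flows from the explanatory law, not from a tally of past fallings.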
00:20:31.000 You can take a good explanation of the laws we understand chemistry operates under.
00:20:37.000 We might say that, given a good explanation of how acids and bases react, for example,
00:20:47.000 we can handle a special case: if I take hydrochloric acid and sodium hydroxide
00:20:54.000 and I mix these two things together, my prediction,
00:20:57.000 based on that good explanation about how these two chemicals, acid and base, react together,
00:21:01.000 is that those particular examples of acid and base will neutralize each other.
00:21:10.000 This happens because there is a fundamental principle that we're operating within:
00:21:16.000 a good scientific explanation about how certain chemical reactions work.
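Written out explicitly (again my worked illustration, not a quote from the chapter): the general explanation is that an acid and a base neutralize each other to give a salt and water, and the special case follows:

$\mathrm{HCl} + \mathrm{NaOH} \rightarrow \mathrm{NaCl} + \mathrm{H_2O}$

The prediction is licensed by the explanation of how acids and bases react; it is not induction from a record of past mixings, which matters for the distinction between prediction and prophecy that comes up below.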
00:21:22.000 This is not the kind of thing that Nick Bostrom ever has.
00:21:28.000 All that he, or any of the great pessimistic prophets of our time, have are wild guesses.
00:21:36.000 Wild guesses about the ways in which, if people choose to do nothing,
00:21:42.000 or choose to do the wrong thing, that calamities will ensue.
00:21:55.000 This is assuming the worst, that the worst will happen.
00:21:58.000 But why worry? Why worry about what the Martin Reeses and Nick Bostroms of the world are saying?
00:22:10.000 I guess there's a few sociological factors involved here, psychological factors.
00:22:15.000 My own pet theory is that people like watching disaster movies.
00:22:20.000 Really, we get excited by the great asteroid that's heading towards the city,
00:22:29.000 or the aliens that have come to wreak havoc upon humanity.
00:22:33.000 It's exciting to think, you know, what might happen during such a situation.
00:22:39.000 These are ways in which we entertain ourselves.
00:22:42.000 And just because you, I don't know, have a philosophy degree,
00:22:50.000 doesn't mean you're immune from that kind of entertainment.
00:22:53.000 You may not want to spend your time talking about the movie,
00:22:57.000 the War of the Worlds, you know, no one will take that seriously,
00:23:03.000 but it could be fun to talk to a philosopher who basically believes
00:23:07.000 in the reality of all of those kinds of things,
00:23:10.000 and how they're going to happen and has written papers on it
00:23:13.000 with equations and mathematics to try and convince people that these scenarios are likely.
00:23:37.000 Firstly, Superintelligence, which I've been very animated about.
00:23:41.000 His thesis here is that systems can be created
00:23:46.000 such that they're better than us at thinking in every single domain,
00:23:54.000 and they might, for example, turn the world into paperclips.
00:24:00.000 you can simply imagine a system, like we have today,
00:24:04.000 that is better than every single human at playing chess,
00:24:13.000 or better at shooting a gun than any human can shoot.
00:24:15.000 I say, OK, so iterate for every single capacity,
00:24:24.000 until you have something better at everything we know about than a person is.
00:24:42.000 which will allow it to turn the planet into paperclips,
00:24:49.000 being smarter than us, and faster than us, and stronger than us, at everything.
00:24:53.000 This is false, because the way that people think is creative:
00:25:09.000 people think things that no one has ever previously thought of before.
00:25:14.000 A system which is better than us at everything that we can program
00:25:24.000 still lacks creativity, and a system without creativity will always be a system that lacks it.
00:25:29.000 And of course, if you have a system that does have creativity,
00:25:34.000 then it's not going to turn the world into paperclips,
00:25:37.000 because it can be persuaded that turning the world into paperclips is a bad idea.
00:25:50.000 The only thing you need to do is to be creative yourself.
00:25:58.000 And if you're interested in this particular idea, it is worth reading Bostrom's account,
00:26:11.000 because then you understand what other people are getting at
00:26:14.000 when they argue that the robots are going to take over the world.
00:26:17.000 This is the kind of thinking that underpins their doomsday scenarios.
00:26:32.000 The claim is always that these terrible things have a certain chance of happening,
00:26:37.000 and so, therefore, there's a probability associated with them.
00:26:46.000 One thing about Bostrom's writing:
00:27:05.000 it can seem very technical, where really it's a very simple idea that's being talked about.
00:27:21.000 The world is vulnerable, Bostrom says, because of something called the semi-anarchic default condition.
00:27:27.000 What semi-anarchic means is that people are competing
00:27:38.000 without any overarching control, and the default condition is that the world gets destroyed
00:27:47.000 where there is a technology produced that is so powerful
00:27:53.000 that it's dangerous in the hands of the many people that have different ideas about how to use it.
00:27:58.000 There could be maybe billions of people that have access to this technology,
00:28:02.000 that can cause the destruction of the entire planet.
00:28:04.000 So, therefore, in a situation where you've got anarchy,
00:28:09.000 with all of these different people with this super powerful technology,
00:28:32.000 each of whom has the capacity to destroy the world,
00:28:40.000 the assumption is that at least one of them is going to be a crazy evil guy that wants to do that.
00:29:03.000 We're just told that there is such a technology,
00:29:06.000 and by the way, the technology is the black ball.
00:29:17.000 And so far, we've been pulling balls out of the urn,
00:29:28.000 where the urn is human creativity and the ball is some product of that creativity.
00:29:36.000 But all we need to do is to pull out one black ball,
00:29:38.000 which is a bit of technology that's altogether bad,
00:29:41.000 and it will destroy the civilization which pulls it out of the urn.
00:29:48.000 Well, because of the semi-anarchic default condition:
00:29:50.000 because even if the overwhelming majority of people in the society
00:29:55.000 don't want to use the technology that's discovered,
00:30:00.000 you only need one person to destroy the entire world.
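The 'only need one' arithmetic here is the mirror image of the Russian roulette sum above (again my illustration; treating people as independent chances with a fixed malice rate q is an assumption of Bostrom's picture, not a fact about people): the chance that at least one of N empowered people is the destroyer is

$P(\text{at least one}) = 1 - (1 - q)^N$,

which tends to 1 for any fixed $q > 0$ as $N$ reaches the billions. So the whole argument stands or falls on treating q as a fixed, knowledge-independent constant, which is exactly the pessimism about people being criticized here.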
00:30:30.000 So, of course, he begins with pessimism about people,
00:31:01.000 and with the idea that the only people that can be trusted are the great philosophers.
00:31:14.000 There's an assumption of non-fallibilism on the part of,
00:31:17.000 or a lesser form of fallibilism among, the philosophers
00:31:20.000 and the smart people and the strongmen at the top.
00:31:33.000 Again, it's an entertaining read as far as it goes.
00:31:39.000 He's got this thing called the type-1 vulnerability,
00:31:42.000 and he speaks about this actually in that podcast.
00:31:46.000 And the essential idea here is that it's possible
00:31:51.000 for individuals to become so empowered as to create harm.
00:32:08.000 So he doesn't want individuals to become so empowered
00:32:11.000 that they are able to create great harm.
00:32:29.000 Empowerment means you could do good or you could do evil.
00:32:52.000 Bostrom is concerned that in the distant future,
00:32:56.000 any one of us will have too much power to do harm.
00:33:07.000 He doesn't want individuals to have too much power.
00:33:17.000 This is another fracture point: between those who have an optimistic view of humanity,
00:33:20.000 that people are creators who want to do good,
00:33:25.000 and those who are very skeptical of the entire project,
00:33:31.000 and so therefore demand the kind of collectivism that would constrain individuals.
00:33:52.000 Of all the problems that we have, this one is the most terrifying to me.
00:34:06.000 It's not a denial of the idea that there are evil people.
00:34:21.000 But most people are good, and they will make progress faster if they're allowed to.
00:34:25.000 If they're constrained by a state or authorities
00:34:30.000 then yes, their progress is going to be slowed in some way,
00:34:36.000 compared perhaps to people who refuse to obey the rules.
00:34:44.000 At the moment, what we have is relative freedom,
00:34:47.000 such that individuals who are good can pursue solutions.
00:35:06.000 And this idea that we have some top-down imposition,
00:35:12.000 because we don't want people to become too empowered to create harm, is deeply pessimistic.
00:35:27.000 Okay, so that's the vulnerable world hypothesis.
00:35:32.000 Now there's something else called the Doomsday Argument.
00:35:38.000 This is something that Bostrom himself didn't invent, but has developed.
00:35:44.000 And so I'd like to come back and criticize that
00:35:47.000 after a little bit more from The Beginning of Infinity.
00:36:15.000 because the knowledge that is going to affect it has yet to be created.
00:36:38.000 Bayesian reasoning in order to predict,
00:36:42.000 I say predict, but really prophesy, the future of civilization.
00:37:00.000 The growth of knowledge cannot change that fact.
00:37:03.000 On the contrary, it contributes strongly to it.
00:37:08.000 the ability of scientific theories to predict the future depends upon the reach of their explanations,
00:37:21.000 the consequences of innovations made during the 20th century,
00:37:24.000 including whole new fields such as nuclear physics,
00:37:37.000 let alone the solutions and attempted solutions
00:37:48.000 Pause there; this is just my reflection on that short section.
00:38:01.000 No one is able to properly imagine what the future will be like.
00:38:12.000 There is now a profession that you can apply for, the futurist, hired by companies
00:38:18.000 and educational institutions to try and predict trends.
00:38:29.000 and then on the basis of the way things are now,
00:38:32.000 extrapolate, guess what the future is going to be like.
00:38:36.000 Of course, this is not just based on the knowledge we have now;
00:38:44.000 it's based on the knowledge that they have about the present state of technology.
00:38:54.000 But the future is not going to be what is wildly guessed by a futurist.
00:38:58.000 If it was, then the futurist wouldn't be a futurist.
00:39:05.000 What a futurist does is wildly guess about the future; what creates the future is conjecturing
00:39:15.000 and then testing and instantiating those guesses in technology.
00:39:28.000 You cannot predict the outcome, or the probability of an outcome, of a phenomenon
00:39:30.000 whose course is going to be significantly affected by the creation of new knowledge.
00:39:42.000 Following Popper, I shall use the term prediction
00:39:45.000 for conclusions about future events that follow from good explanations,
00:39:48.000 and prophecy for anything that purports to know what is not yet knowable.
00:39:58.000 Among other things, it creates a bias towards pessimism.
00:40:00.000 For example, in 1894, the physicist Albert Michelson,
00:40:09.000 so let's call him Albert Michelson for the moment,
00:40:12.000 made the following prophecy about the future of physics:
00:40:18.000 'The more important fundamental laws and facts of physical science have all been discovered,
00:40:22.000 and these are now so firmly established that the possibility of their ever being supplanted
00:40:24.000 in consequence of new discoveries is exceedingly remote.'
00:40:33.000 That was in an address at the opening of the Ryerson Physical Laboratory.
00:40:44.000 I probably should have put up the full address, which I do have somewhere or other.
00:41:00.000 it says some wonderful things about the progress of science
00:41:03.000 over time, but concludes we've just about discovered
00:41:06.000 everything that we can possibly hope to discover.
00:41:22.000 when he judged that there was only an exceedingly remote
00:41:25.000 chance that the foundations of physics as he knew them would ever be supplanted.
00:41:32.000 On the basis of the best knowledge available at the time, that may have seemed reasonable.
00:41:46.000 But that knowledge was poorly suited even to imagining the changes
00:41:49.000 that relativity and quantum theory would bring,
00:41:51.000 which is why the physicists who did imagine them made such celebrated discoveries.
00:41:54.000 Michelson would not have put the expansion of the universe
00:42:02.000 on any list of possible discoveries whose probability was worth assessing.
00:42:27.000 He helped to disprove the existence of the ether.
00:42:39.000 The question was: does relative motion matter when you're measuring the speed of light?
00:42:43.000 As it turns out, no, the null result is correct:
00:42:47.000 relative motion makes no difference when you're taking a measurement of the speed of light.
00:42:58.000 Yet he did not conceive of the discoveries in his own field of expertise
00:43:03.000 that were about to change the course of physics completely.
00:43:12.000 If you're watching the video version, you'll notice that my camera has disappeared.
00:43:15.000 This makes absolutely no difference whatsoever,
00:43:20.000 and it doesn't go on for the remainder of the entire video.
00:43:46.000 It's hard enough to guess what will happen in fields far removed from your field of expertise,
00:43:52.000 let alone in all fields that might affect the evolution of civilization.
00:43:57.000 Anyone who pretends to know what's going to happen in the future is prophesying,
00:44:02.000 including the pessimists who are predicting global catastrophes,
00:44:09.000 and in all cases they think that that knowledge is available to them now.
00:44:30.000 Lagrange remarked that Newton had not only been the greatest genius who had ever lived,
00:44:33.000 but also the luckiest, for the system of the world can be discovered only once.
00:44:37.000 Lagrange would never know that some of his own work,
00:44:40.000 which he had regarded as a mere translation of Newton's,
00:44:44.000 was a step towards the replacement of Newton's system of the world.
00:44:48.000 Michelson did live to see a series of discoveries
00:44:51.000 that spectacularly refuted the physics of 1894,
00:44:59.000 and his own work had already contributed unwittingly to the new system.
00:45:29.000 merely by correcting a minor parochial assumption.
00:45:41.000 We cannot help but see the world through our best explanations,
00:45:48.000 and treating those explanations as final inhibits us, among other things, from conceiving how different the future could be.
00:46:22.000 When an experiment contradicts a theory, you can't straightforwardly tell whether you've actually shown the theory to be false,
00:46:26.000 or whether there's something about your experimental apparatus,
00:46:33.000 or the way you've conducted the experiment, that is flawed.
00:46:49.000 Some conclude that falsification, and therefore progress in science, isn't possible.
00:46:53.000 That's wrong, because clearly we make progress in science.
00:46:56.000 So progress in science happens in spite of this.
00:47:02.000 Given good explanations, we can tell which of these two things is actually true:
00:47:09.000 whether the theory has now been refuted by the experimental result, or whether the experiment was flawed.
00:47:17.000 For a case where the theory has been falsified,
00:47:20.000 well, there are all the famous occasions, like, for example, Newton's theory of gravity:
00:47:28.000 we do Eddington's experiment of the bending of starlight around the sun,
00:47:35.000 and therefore we think that Einstein's theory of general relativity has superseded Newton's.
00:47:50.000 And for the other case: when we seem to see neutrinos traveling faster than light,
00:47:59.000 we suspect an error in the experimental apparatus, something like that.
00:48:02.000 And so we find that it's not special relativity that's been refuted,
00:48:07.000 but rather that the experiment itself has been poorly done.
00:48:17.000 This is all well-known to people somewhat versed in the philosophy of science.
00:48:24.000 But people keep on rediscovering the Duhem-Quine thesis.
00:48:34.000 So if there are no other reasons for doing philosophy, here is one:
00:48:42.000 it helps you to see when you have merely restated a very well-known objection.
00:49:15.000 Indeed, it's almost a culture in physics, remarkably.
00:49:18.000 So, according to one school of thought in physics,
00:49:21.000 though maybe this school of thought is no longer in the ascendancy,
00:49:27.000 we would reduce everything to fundamental particles,
00:49:32.000 unite the forces governing them, and we'd be done.
00:49:37.000 So what we should do in physics is aim to reduce everything to fundamental particles
00:49:46.000 and find the laws that are governing all of these fundamental particles.
00:50:03.000 as if gravity and the other forces are everything.
00:50:08.000 But they're not; they're not even everything within physics.
00:50:17.000 Even a theory that unites gravity and the other fundamental forces would not be everything.
00:50:24.000 And, you know, astrophysics, geophysics, biophysics, all these areas would remain.
00:50:48.000 I always think of this as kind of reminiscent of the idea
00:51:02.000 that we'd have a chemical theory of everything.
00:51:15.000 The periodic table is kind of the starting point for all of chemistry, but it hardly explains everything.
00:51:21.000 Now, just before I begin reading the next paragraph,
00:51:25.000 I just want to highlight how little progress I've made through this chapter so far.
00:51:35.000 When the determinants of future events are unknowable,
00:51:44.000 what is the right philosophy of the unknown future?
00:51:47.000 What is the rational approach to the unknowable?
00:51:52.000 So we're kind of still here in the introduction, I'm afraid.
00:52:03.000 The words optimism and pessimism are used in many ways, but they did not originally refer especially to the future.
00:52:18.000 Leibniz argued that God, being perfect, would have created nothing less than the best of all possible worlds.
00:52:29.000 He proposed that all apparent evils in the world are offset by greater goods elsewhere.
00:52:50.000 Think of some little kiddies starving on the street corner,
00:52:59.000 suffering in the way that they are suffering. On Leibniz's view, if they weren't suffering,
00:53:04.000 then something else would be affected elsewhere
00:53:07.000 such that the overall net effect would be worse.
00:53:11.000 So this is Leibniz's idea of the best of all possible worlds,
00:53:18.000 as against the pessimist, who believes this is the worst of all possible worlds.
00:53:55.000 a common saying is that an optimist calls a glass half full while a pessimist calls it half empty,
00:54:00.000 but those attitudes are not what I am referring to either.
00:54:22.000 Churchill's specific expectations as wartime leader were optimistic; whereas the economist Thomas Malthus,
00:54:29.000 a notorious prophet of doom, of whom more below,
00:54:32.000 is said to have been a serene and happy fellow,
00:54:52.000 Blind pessimism seeks to ward off disaster by avoiding everything not known to be safe.
00:54:56.000 No one seriously advocates either of these two as a universal policy,
00:54:59.000 but their assumptions and their arguments are common currency.
00:55:04.000 Skipping just another short paragraph, about the Titanic,
00:55:10.000 David makes the point that blind pessimism is a blindly optimistic doctrine.
00:55:14.000 It assumes that unforeseen disastrous consequences cannot follow from what is already familiar.
00:55:21.000 Not all shipwrecks happen to record-breaking ships.
00:55:31.000 Nor need unforeseen disasters be caused by physics experiments or new technology.
00:55:34.000 But one thing we do know is that protecting ourselves from any disaster requires knowledge.
00:55:54.000 There would be no existing ship designs to stick with
00:55:59.000 if no one had ever violated the precautionary principle.
00:56:09.000 Consider present concerns about climate change, for example.
00:56:19.000 People say we've only got 100 years left;
00:56:21.000 certain prominent politicians raise these sorts of numbers.
00:56:25.000 Now, they might not even actually be taken too seriously,
00:56:32.000 except by those who otherwise haven't heard these ideas before.
00:56:46.000 When someone says the world will end in 10 years due to climate change,
00:56:59.000 the prophecy is that nothing over the next 10 years is going to change the course of events:
00:57:06.000 we're not going to be able to create the knowledge needed to solve the problem.
00:57:15.000 Pessimism needs to counter an obvious argument:
00:57:22.000 in order to bring us to the point where we are right now,
00:57:25.000 progress has happened, so the precautionary principle cannot have been followed.
00:57:31.000 To counter that argument, in order to be at all persuasive,
00:57:35.000 a recurring theme in pessimistic theories throughout history
00:57:38.000 has been that an exceptionally dangerous moment is imminent.
00:57:45.000 Rees makes the case that the period since the mid-20th century is such a moment,
00:57:56.000 yet many civilizations in history were destroyed by the simple technologies of fire and the sword.
00:58:03.000 Some intentionally, some as the result of plague or natural disaster,
00:58:06.000 and virtually all of them could have avoided the catastrophes that destroyed them
00:58:10.000 if only they had possessed a little additional knowledge,
00:58:13.000 such as improved agricultural or military technology,
00:58:15.000 better hygiene, or better political or economic institutions.
00:58:18.000 Very few, if any, could have been saved by greater caution about innovation.
00:58:29.000 More generally, what they lacked was a certain combination
00:58:33.000 of abstract knowledge and knowledge embodied in technological artifacts.
00:58:50.000 One contemporary example of blind pessimism is that of trying to make our planet as unobtrusive as possible,
00:58:54.000 for fear of contact with extraterrestrial civilizations.
00:59:07.000 The fear is of a repeat of what happened when Christopher Columbus first landed in America,
00:59:10.000 which didn't turn out very well for the Native Americans:
00:59:18.000 that we would attract predatory or imperialist civilizations who would colonize us.
00:59:22.000 One science-fiction author has written some exciting novels based on the premise
00:59:30.000 that civilizations hide for this reason. This would solve the mystery of the Fermi problem,
00:59:33.000 but it is implausible as a serious explanation.
00:59:47.000 For one thing, the hiders would have to detect the dangerous civilizations in order to hide from them before being noticed,
00:59:50.000 which means before they have invented, say, radio.
1:00:01.000 And there are various chances of our existence becoming known anyway:
1:00:11.000 the aliens might come here, perhaps to mine what they consider an uninhabited system.
1:00:16.000 The idea also contains two further flaws, in addition to the classic flaw of blind pessimism.
1:00:19.000 One is the spaceship Earth idea on a larger scale.
1:00:22.000 The assumption is that progress in a hypothetical
1:00:24.000 rapacious civilization is limited by raw materials rather than by knowledge.
1:00:37.000 Since any civilization capable of transporting itself here,
1:00:40.000 or raw materials back, across galactic distances,
1:00:46.000 must have stupendous knowledge and technology, and hence does not care about the chemical composition of its raw materials.
1:00:49.000 So essentially, the only resource of use to it in our solar system
1:00:52.000 would be the sheer mass of matter in the sun.
1:00:57.000 Perhaps it is collecting entire stars wholesale
1:01:01.000 as part of some Titanic engineering project.
1:01:04.000 But in that case, it would cost virtually nothing
1:01:09.000 to spare the inhabited solar systems, which are presumably only a small minority,
1:01:11.000 otherwise it would be pointless for us to hide in any case.
1:01:14.000 So would it casually wipe out billions of people?
1:01:20.000 This can only seem plausible if one forgets that there can be only one kind of person: universal explainers.
1:01:29.000 The idea that there could be beings that are to us
1:01:33.000 as we are to animals is a belief in the supernatural.
1:01:38.000 So this is a crucially misunderstood point.
1:01:48.000 Most writers and thinkers on this topic believe in the intelligence continuum.
1:01:57.000 At the bottom you have, say, bacteria: they can't do much except smell their environment and respond to it.
1:02:10.000 And so on, as you keep moving up the hierarchy of genetic complexity.
1:02:17.000 And then you have people; you know, we're at the top.
1:02:22.000 On this view, there's merely a difference of degree between us and, let's say, a chimpanzee,
1:02:26.000 and a chimpanzee and a dog, and a dog and a cat, and a cat and a rat.
1:02:37.000 I think there's a qualitative difference between us,
1:02:42.000 who can create knowledge and form explanatory theories about the world,
1:02:45.000 which enable us to gain control of the world,
1:02:48.000 and every other animal that we know of that exists,
1:02:54.000 which is controlled by its environment rather than being the controller of the environment.
1:02:58.000 All other animals are on some kind of continuum, if you like,
1:03:07.000 They can't explain it and they can't control it,
1:03:17.000 An individual bacterium is simply going to die
1:03:27.000 if the temperature gets too hot or too cold.
1:03:32.000 Humans can, of course, change their environment
1:03:42.000 in order to remain in places that are otherwise hostile.
1:04:06.000 People imagine an alien species that has such a high degree of intelligence
1:04:12.000 that we would be to them as animals are to us. And there could be evil aliens out there as well.
1:04:20.000 It's an ancient cultural meme I would suggest.
1:04:24.000 Going back to the times when there were good and bad gods,
1:04:28.000 they needed appeasing, whether they were good or bad,
1:04:32.000 but the bad gods had all the powers that the good gods had,
1:04:41.000 More recently, over recent decades,
1:04:44.000 we've replaced the idea of gods with science fiction entities.
1:04:57.000 And so even actual scientists kind of don't move beyond these ideas,
1:05:10.000 this standard way of thinking that we've had for millennia:
1:05:15.000 that there are powerful beings out there, some of them good and some of them bad.
1:05:18.000 Once you understand the argument that David Deutsch is making,
1:05:22.000 about how progress that happens in one area,
1:05:25.000 physics, epistemology, cannot be completely disconnected
1:05:29.000 from progress that happens in another area, like morality,
1:05:32.000 it becomes pretty obvious why these common ways of thinking
1:05:36.000 that other people have are completely false.
1:05:39.000 One assumption, of course, that this all rests upon,
1:05:42.000 about these evil aliens, is that morality is subjective.
1:05:46.000 And that view is close to universally subscribed to.
1:05:49.000 And so it's amazing to me that even people who
1:05:52.000 ought to believe in objective morality, like Sam Harris,
1:05:56.000 reject the idea that we would converge on that objective morality.
1:06:03.000 So the aliens would discover a better morality than we have.
1:06:09.000 That would not entail causing us to suffer.
1:06:11.000 That would entail helping us to learn what they've learned.
1:06:19.000 Moreover, there is only one way of making progress: conjecture and criticism.
1:06:24.000 And the only moral values that permit sustained progress
1:06:26.000 are the objective values that the Enlightenment has begun to discover.
1:06:29.000 No doubt the extraterrestrials' morality is different from ours,
1:06:32.000 but that will not be because it resembles that of the conquistadors.
1:06:36.000 Nor would we be in serious danger of culture shock
1:06:38.000 from contact with an advanced civilization.
1:06:40.000 It will know how to educate its own children, or AIs,
1:06:46.000 so it will know how to educate us, and in particular to teach us how to use its computers.
1:06:49.000 So again, me talking here: why do people think knowledge is compartmentalized,
1:06:56.000 such that advancements in one area could mean regression in another?
1:07:03.000 Aliens who make scientific progress will make moral progress.
1:07:06.000 If you can travel to the other side of the galaxy,
1:07:08.000 then your morality is probably galaxies ahead as well.
1:07:14.000 And for artificial general intelligence, the same arguments hold.
1:07:20.000 If we have learned to want to reduce suffering, let's say, in the universe,
1:07:24.000 why would aliens or super-advanced artificial intelligence not want that too?
1:07:32.000 If they can understand so much more about physics
1:07:35.000 and computation and engineering, science broadly,
1:07:41.000 why should we imagine that when it comes to the topic
1:07:46.000 of morality, these super-intelligent beings,
1:07:49.000 be they aliens or artificial general intelligence,
1:07:52.000 why would we imagine they must be more primitive than we are?
1:07:55.000 That they won't care about other conscious creatures.
1:07:59.000 How could they have survived as long as they have
1:08:02.000 if they did think that,
1:08:05.000 that killing other beings was a way to make progress?
1:08:10.000 So this is my bookend for the end of part one.
1:08:12.000 As it turns out, this is going to be a three-part epic,
1:08:15.000 but I think if any chapter deserves a three-part epic, it's this one.
1:08:20.000 I do apologize for the long detour during this particular episode,
1:08:23.000 all about Nick Bostrom, who in fact, to be fair,
1:08:26.000 is barely even mentioned in The Beginning of Infinity.
1:08:30.000 There is one mention, and it's to do with his singularity argument.
1:08:35.000 I suppose this part one has really been an introduction
1:08:38.000 to set the scene for what's to come in parts two and three.