00:00:00.000  And you just computed the hypotenuses all day, every day.
 
      
      
      00:00:17.000  You and I have fallen into a deep, deep obsession
 
      
      00:00:23.000  And I am so looking forward to talking about this
 
      00:00:26.000  because I've been pretty much thinking of nothing else
 
      00:00:29.000  but this subject for the last five or six days.
 
      
      00:00:34.000  there will be some writing coming out from both of us,
 
      
      
      00:00:39.000  So I don't know, man, like, what are your thoughts?
 
      00:00:42.000  Yeah, I've lost many work hours thinking about this
 
      
      00:00:48.000  For me, it's been a fascinating space because I've been,
 
      00:00:53.000  I've considered myself a part of the EA community
 
      
      
      
      00:01:05.000  I would have said, I guess I'm sympathetic to long-termism
 
      
      00:01:10.000  And I think that came out in our second or third conversation
 
      00:01:13.000  or something where we started butting heads even then
 
      
      00:01:16.000  And then you turned me on to some other thinkers
 
      
      00:01:21.000  And in that time, I've pivoted basically 180 degrees
 
      00:01:25.000  to thinking this is actually an incredibly dangerous idea
 
      
      
      00:01:33.000  it inoculates itself for various reasons that we'll get into
 
      00:01:36.000  from criticism.
 
      
      
      00:01:46.000  And so I've actually become very worried about it
 
      
      00:01:51.000  Yeah, it's a weird place for me to stand, I think,
 
      
      
      
      
      00:02:06.000  and has generated some of the most important moral ideas
 
      
      00:02:13.000  So donating based on impact rather than just the emotional
 
      
      
      
      
      
      
      00:02:31.000  Like, we have finite resources, and we have to dedicate them
 
      
      
      00:02:38.000  So I think taking, like, a problem-solving mentality
 
      00:02:40.000  to altruism is also just, like, been a fantastic idea.
 
      
      
      00:02:50.000  I think, possibly one of the best ethical ideas
 
      
      
      00:02:58.000  of our moral circle to people living in different countries
 
      
      
      00:03:06.000  it's been, like, a huge force for good in my life
 
      
      00:03:11.000  has been, like, life more fulfilling and stuff.
 
      00:03:13.000  And so I'm coming at this conversation from a point of,
 
      
      
      00:03:21.000  And so, you know, I view the community as, like,
 
      
      
      
      00:03:33.000  But then there's, this is one idea that's popped up
 
      
      
      
      
      00:03:45.000  and to, like, deviate them so fully from their original goal,
 
      00:03:49.000  which was to be objective and data-driven in their approach.
 
      00:03:52.000  And so, yeah, I've become very, very concerned about what I see here.
 
      00:03:58.000  So I guess I'm cognizant of the fact that we have
 
      
      00:04:03.000  We have an audience of people who are in the community
 
      
      00:04:10.000  And who I think will have heard nothing but praise for the idea.
 
      00:04:14.000  That may be a bit strong because I would say I'm only really in the community
 
      
      
      00:04:22.000  And then there's another group of people who have no idea what the hell we're talking about.
 
      
      00:04:28.000  No idea why we both have been obsessing about it.
 
      00:04:30.000  No idea why we're both, like, very worried about it.
 
      00:04:33.000  Like, I think the last time I was as worried about an idea,
 
      
      
      
      00:04:46.000  And so before we dive into the nitty-gritty details,
 
      00:04:50.000  which will be coming in both audio and written and all sorts of different forms,
 
      00:04:57.000  we should address the people who aren't in the community
 
      00:05:01.000  and don't know what we're talking about, I think.
 
      00:05:03.000  Yeah, I think it's worth it to also probably just step back and recall what the EA community is.
 
      00:05:07.000  I feel like we have to spell out the acronyms a lot.
 
      00:05:10.000  So EA, in this context, stands for effective altruism.
 
      00:05:13.000  And this is a movement that's been around, I don't know,
 
      00:05:16.000  since the mid-2000s or something, it's got sort of various different origins,
 
      
      00:05:22.000  But the general idea is to use evidence and careful reasoning
 
      
      00:05:29.000  And so, like I was saying earlier, this is a problem to be solved.
 
      00:05:32.000  And the truth of how to do as much good as possible
 
      
      00:05:41.000  And sort of, so the community has just been built up of people
 
      00:05:46.000  who are sort of obsessed about this question, right?
 
      00:05:49.000  Like whether it's in the donation space to do with global poverty
 
      00:05:53.000  or the animal welfare space, the question is like,
 
      00:05:56.000  how do we allocate our time and resources to do as much good as possible?
 
      00:06:00.000  And so William MacAskill is the, is he the founder, or is he just the,
 
      
      00:06:09.000  He would at least be the co-founder, I would say.
 
      00:06:12.000  Okay, 'cause, so I heard about him a while ago and he, like,
 
      00:06:17.000  he was so impressive because he gave the play pump story,
 
      00:06:22.000  which I know will be familiar to half of the audience,
 
      
      00:06:27.000  And so I'll just recount it here where I'll probably get some of the details wrong,
 
      
      00:06:32.000  But there is this idea of putting a, like, a merry-go-round in impoverished villages
 
      
      
      00:06:42.000  And a merry-go-round would also be attached to a water pump.
 
      
      
      00:06:48.000  And the idea was that you could basically get two for one,
 
      00:06:51.000  where so the kids would have this, like, amazing experience
 
      00:06:54.000  spinning the merry-go-round around and enjoying themselves, and then the town would get

      00:06:59.000  sanitized, like, clean water, or perhaps it was, like, electricity.
 
      
      
      
      00:07:09.000  And so this garnered a huge amount of donations and everyone was, like,
 
      00:07:13.000  really excited about this idea because everyone thought they were doing a lot of
 
      
      00:07:20.000  But the problem with the play pump is that there was no data.
 
      00:07:25.000  They had stopped basically looking at the data.
 
      
      00:07:31.000  It was just this belief that what they were doing was good for the world.
 
      00:07:38.000  But it turns out that the play pump was absolutely terrible.
 
      00:07:41.000  It was terrible because the kids would get tired and wouldn't enjoy, like,
 
      00:07:45.000  spinning it around and so they'd have to get, like, elderly women to move the
 
      00:07:49.000  stupid thing and I guess it was just way less efficient than just bringing in,
 
      00:07:54.000  like, actual sanitation, like water treatment plants and stuff.
 
      00:07:59.000  And so the lesson was that we can't just assume that what we, like, think we're
 
      00:08:06.000  doing that is making the world better is actually making the world better.
 
      
      
      00:08:14.000  We need to constantly be, like, asking ourselves, like, can we be wrong?
 
      
      
      00:08:21.000  And so William MacAskill kind of brought this to people's attention.
 
      00:08:26.000  And then I believe he also started the, this amazing Giving What We Can pledge.
 
      
      
      00:08:34.000  So that was started by him and Toby Ord, actually, who, yeah, just thought that
 
      00:08:39.000  they could publicize the idea of giving 10% of your income to the most effective
 
      
      00:08:46.000  And that most people could do this without sacrificing anything major in their
 
      
      00:08:51.000  And therefore, this is like worthy of doing so.
 
      00:08:53.000  Yeah, they started this Giving What We Can organization, and a

      00:08:58.000  prominent feature of that is the Giving What We Can pledge.
 
      
      00:09:01.000  People were donating way more money than they would have.
 
      00:09:04.000  And not only were they donating it to charity organizations, but they were

      00:09:08.000  donating to the charity organizations that were committed to looking at the data
 
      00:09:12.000  and constantly checking to make sure that what they were doing was effective.
 
      
      00:09:18.000  With this new idea of long termism, the entire concept of looking at the data is
 
      
      
      
      
      00:09:29.000  I mean, it's like they keep all the math, but they lose all of the parts that attach
 
      
      00:09:35.000  And what you and I are basically noticing is that it's essentially just like
 
      00:09:40.000  mathematical, I'm going to say masturbation because it's, you find it.
 
      
      00:09:47.000  Like the primary thing that caused the effective altruism movement to be so
 
      
      00:09:55.000  With this long termist movement, they're no longer doing that.
 
      
      00:10:01.000  It's possible because they are obsessing about this thing called Bayesian
 
      00:10:05.000  epistemology, which is a term that works in some circles and doesn't in others
 
      
      00:10:11.000  But like even right now what I'm saying and what we're about to say is not
 
      00:10:15.000  going to be received positively by people within the community.
 
      00:10:21.000  But they're just playing games with probability and have lost touch with
 
      00:10:27.000  the thing which was so attractive about them in the first place.
 
      00:10:32.000  And so we have to go into long termism and how this actually manifests itself.
 
      00:10:36.000  But this is the thing I think both you and I are noticing.
 
      00:10:39.000  And it is quite I think worrying because of the amount of potential they have.
 
      00:10:47.000  And so we're hoping that this is a slight redirection of like 10 degrees
 
      
      00:10:56.000  But if we don't do this redirection then I think it could start to
 
      00:10:59.000  metastasize into something that's quite quite dangerous.
 
      00:11:02.000  And so we have to go into the details of course and we will.
 
      00:11:05.000  But I hope that kind of set up the conversation both to those who are in
 
      
      00:11:12.000  Yeah the final preliminary note I'll offer is that any scorn that we both
 
      00:11:16.000  have throughout this conversation and any expletives, I think it's safe to say
 
      00:11:20.000  they're directed towards the ideas not towards the people.
 
      00:11:23.000  So the community of people is a group that I just like admire tremendously.
 
      00:11:29.000  And all these people are dedicating many more hours of their life to try to do as
 
      
      
      00:11:37.000  But ideas are not something that deserve our respect.
 
      00:11:40.000  Ideas are something that deserve to be, you know, we should criticize the hell out of.
 
      
      00:11:46.000  We're going to bring the beatdown to long-termism.
 
      00:11:49.000  And so I just want to emphasize that any sort of scorn that you hear from us is
 
      
      00:11:55.000  And so yeah one thing I wanted to pick up on was the sort of this point you made
 
      00:12:00.000  about mathematical masturbation or like just appealing to mathematical authority.
 
      00:12:05.000  So one thing I realized as well as prepping for the episode is that the community in
 
      00:12:10.000  general has done a fantastic job of throwing out appeals to authority when it comes to
 
      
      00:12:19.000  So typically I would say altruism is dominated by sort of two forms of authority, one being
 
      
      00:12:25.000  So by emotional authority I mean people typically donate with their gut
 
      00:12:29.000  right whatever makes them feel good like oh you know I've seen this person suffering
 
      00:12:33.000  or this organization is close to my heart so I'm going to donate my dollars here.
 
      00:12:37.000  And then obviously religious communities have a lot to do with donation.
 
      00:12:41.000  I think EA has done a good job like stepping away both from those types of authority
 
      00:12:47.000  and trying to get to the bottom of the question like how do we actually do the most good.
 
      00:12:51.000  But in the process it seems to have picked up another dogma, another form of authority
 
      00:12:56.000  and that's mathematical authority and it just seems to me like it's just built its own
 
      
      00:13:02.000  Like it just the forum these days and like the rationality community which is like sort
 
      00:13:08.000  of associated with EA it's just every post is about some sort of like mathematical issue
 
      00:13:13.000  that arises when you're trying to apply expected value calculations to the far future and it's
 
      00:13:18.000  like people are unable to step away from this tool and they can't remember it's just
 
      00:13:25.000  a tool right it's just like expected value calculations and Bayes theorem are just tools
 
      00:13:31.000  that we use in some scenarios to help us reason through things they are not the foundation
 
      00:13:36.000  of our knowledge and they do not deserve to be applied in every scenario but it's just
 
      00:13:40.000  like people are so captivated by writing integrals and summations over possible future states
 
      00:13:46.000  that like we just confused ourselves into literally building our own prison brick by brick
 
      
      00:13:53.000  It's just like we consider that there's a Graham's number number of people that come
 
      00:13:57.000  in the future and it paralyzes every form of argument and it's just like you can't get
 
      00:14:03.000  around it so it's just like I'm trying to argue against this like mathematical Puritanism
 
      00:14:08.000  that has come to dominate the entire debate it's just like tremendously scary because
 
      00:14:14.000  I think the move being made in the paper is not like you know we are an elite who understands
 
      00:14:19.000  and you do not. Like, they lay out the mathematical claims in the paper in a way that, if you're,

      00:14:25.000  if you have a well-versed enough background, you can dive into. But the main type of authority
 
      00:14:31.000  I'm actually worried about here is the authority that symbols on paper command, so if you
 
      00:14:39.000  have enough of a background to understand math then you know then they can point at an equation
 
      00:14:44.000  and say like you know stop questioning the math itself look like the you know the expected
 
      00:14:52.000  number of people in the future is a bajillion quadrillion bajillion and how dare you question
 
      00:15:00.000  the probability calculus right so it's not just an appeal to people like making arguments
 
      00:15:08.000  that they claim you can't understand, it's like this pernicious claim that the symbols

      00:15:13.000  capture something about reality that we cannot capture with any other form of explanation
 
      00:15:20.000  and so it's this claim that, like, mathematics doesn't simply represent, it's

      00:15:26.000  not a tool that's used to represent reality sometimes when it's useful; it's that

      00:15:32.000  it represents something truer than you can even understand and you're not allowed to question it
 
      00:15:36.000  so okay, so which paper are we talking about, which paper are we talking about, and let's, let's

      00:15:40.000  dive into it. Yeah, so, yeah, yeah, we're talking about The Case for Strong Longtermism, this

      00:15:46.000  is on the Global Priorities Institute's website, it's by Hilary Greaves and William MacAskill, and

      00:15:53.000  it's a working paper, but I think the first draft was published in 2019, sometime in 2019
 
      00:15:58.000  or so and to just get into it so I think the following quote gives people an idea of what's
 
      00:16:08.000  at stake here so the fourth paragraph of the paper the end of it reads as follows the idea
 
      00:16:17.000  then is that for the purposes of evaluating actions we can in the first instance often
 
      00:16:23.000  simply ignore all the effects contained in the first 100 or even 1000 years focusing
 
      00:16:31.000  primarily on the further future effects short run effects act as little more than tie breakers
 
      00:16:37.000  so what they're saying here is if their argument is correct near term effects short term effects
 
      00:16:46.000  do not matter. There's so much moral value captured by all the people in the future that

      00:16:55.000  the suffering and problems now are little more than rounding errors when it comes to the

      00:17:02.000  total amount of well-being in the universe. And so, I say this, so, one, that quote was read
 
      00:17:11.000  directly from the paper and I don't want to try and score points by just reading things
 
      00:17:16.000  in derogatory tone so I don't take that quote to be any sort of argument but I do want
 
      00:17:21.000  people to understand that the stake of this argument is very high so if this view was widely
 
      00:17:27.000  adopted by just the general public we might just stop caring about for example helping people in poverty
 
      00:17:34.000  or abolishing factory farming or ending injustice that's rampant in the world right so those would just
 
      00:17:41.000  stop being our moral priorities and so it would be viewed as a rounding error
 
      00:17:46.000  negligible in the grand scheme of things utterly to be discarded as a triviality in the same way that
 
      00:17:53.000  you ignore the sixth decimal place of a calculation you ignore the suffering and starvation of people
 
      00:18:01.000  in Africa it doesn't matter and so I'm writing a piece on this and and it's currently undergoing
 
      00:18:10.000  many revisions, and in the first draft the tone was quite, let's, you might as well say, sharp.

      00:18:17.000  And yeah, and so what I said was that, let's say, the full implications of the view

      00:18:26.000  aren't either endorsed or really explored by the piece, and so the authors, at the end of the day,
 
      00:18:34.000  the decisions that they're going to advocate are mainly about choosing between different charities
 
      00:18:39.000  but again, like you said, if taken seriously, this would allow pedophiles and rapists and

      00:18:45.000  murderers to justify their actions by the belief that it doesn't matter how they treat people
 
      00:18:53.000  right now right here because they're donating to AI safety and as long as you as long as you
 
      00:19:00.000  prevent AI from taking over the world then you can pretty much do anything you want it does
 
      00:19:05.000  doesn't matter because in expected value calculations when you calculate the expectation of future
 
      00:19:13.000  well-being between now and one trillion years from now, I think, is the number they use,
 
      00:19:18.000  or one billion years then it all washes out the rape and torture of a child is just a rounding
 
      00:19:24.000  error it really doesn't matter again at no point do the authors ever say this but this is what
 
      00:19:31.000  a world would look like if these ideas are widely believed by everyone and so we have to I think
 
      00:19:38.000  keep that in mind when we go into these very academic illustrations of how they have arrived at this
 
      00:19:48.000  conclusion, because of course, like, Hilary Greaves and William MacAskill, they're considerate, kind
 
      00:19:53.000  people who are trying to make the world better and I think they've just been hypnotized by their
 
      00:19:58.000  own mathematics in such a way that they are not fully appreciating the consequences of what they
 
      00:20:05.000  are espousing and so I just want to echo in my own way that the stakes are really really high here
 
      00:20:12.000  yeah I want to emphasize one more time that we're not making an argument by moral indignation here
 
      00:20:17.000  so I can hear people who are sympathetic to the long-termist thesis screaming all you're doing is using
 
      00:20:23.000  incredibly harsh language to discredit the argument and so I'm not saying we have not made an
 
      00:20:30.000  argument against long-termism yet I need to be very clear about that all we're saying is that the
 
      00:20:33.000  stakes are very high so that's what's on the line right if you adopt long-termism then we're throwing
 
      00:20:39.000  out how much near-term consequences like really matter in the big scheme of things I mean I could
 
      00:20:44.000  just hear the rebuttal right like you know people a thousand years ago are sitting around saying
 
      00:20:48.000  this new utilitarian philosophy demands that we take women's rights seriously that's so absurd right
 
      00:20:55.000  so the argument to like or the argument by moral intuition is often a bad one and one I'm very
 
      00:21:00.000  skeptical of and one indeed that the EA community has been right to be skeptical of interestingly enough
 
      00:21:06.000  so you're right that we should be skeptical of moral intuitions but I have now seen this
 
      00:21:13.000  counter-argument that I'm calling the "that conflicts with moral intuitions" rebuttal, because I hear this
 
      00:21:19.000  so many fucking times that you'll say something crazy and then I'll say that sounds a little crazy
 
      00:21:25.000  and then you say well that just conflicts with your moral intuitions and really like don't you know that
 
      00:21:31.000  we used to think that violence was okay and that was a moral intuition and so the crazy thing that I just said
 
      00:21:36.000  that conflicts with your moral intuition too and so really you should just like take a step back and
 
      00:21:43.000  check your privilege to use jargon from a different bubble but all these bubbles have the same kind of tools
 
      00:21:50.000  so that conflicts with moral intuition thing has a counter concern which is that it can be used to
 
      00:21:55.000  squash legitimate feelings of unease that what you're being told makes absolutely no fucking sense
 
      00:22:00.000  That's what's so bad. Yeah, that's a good point. That's a good point.
 
      00:22:03.000  It's an appeal to authority in this way that is, like, very duplicitous, and so I just want to bring that up
 
      00:22:11.000  because I've seen this move made enough times on enough different forums and by enough different people
 
      00:22:16.000  conflicts with your moral intuitions bit is often a way to just get people to shut the hell up
 
      00:22:21.000  when the inside of their voice is saying the inner voice is saying this doesn't make any sense.
 
      00:22:26.000  Yeah, that's a good point. So it can't be used as an argument for, and it can't be used as an argument against,

      00:22:30.000  otherwise you're just committing the anti-naturalistic fallacy, like the converse of the naturalistic fallacy.

      00:22:37.000  So, some points for it. And I mean, indeed, you should take some conflict with your emotional intuition
 
 
      00:23:11.000  but he does talk about common sense a lot and he says that common sense is often the starting place for inquiry
 
      00:23:19.000  but it is itself common sense to know that common sense is often wrong and so it's a good place to start
 
      00:23:27.000  and if something conflicts with common sense that doesn't necessarily mean we should discard it
 
      00:23:32.000  but it doesn't mean that we should keep it either; it just means that we should interrogate it a bit further
 
      00:23:42.000  and the one other piece I want to add to that is that like we all have this little interior light that shines when something makes sense
 
      00:23:54.000  when I say something or when you say something and we receive one another statement it just like makes pieces click into place
 
      00:24:02.000  everyone can feel that it just works and that tiny little light can be extinguished so fucking fast when people are bludgeoned
 
      00:24:12.000  like endlessly that they should never trust their moral intuitions or common sense and they just get used to this like feeling of paralysis
 
      00:24:21.000  that like probably doesn't make sense and everything I've been told doesn't totally click in but everybody else around me is talking as if it does
 
      00:24:31.000  and it's got to be that I'm wrong, and this is just how it is. And so, just, if anyone feels this way about expected value calculus
 
      00:24:41.000  and like it's never really clicked into place but everyone else just talks about it so much that you feel like there's something wrong with you
 
      00:24:48.000  it's not you. And so this moral intuition, this common sense, this idea that, like, things make sense in the world, is very important to nurture
 
      00:25:00.000  and to trust in yourself as well, and so I just wanted to add that, because we're so used to being told endlessly by the EA folks that this conflicts with our moral intuitions and we can't trust them
 
      00:25:09.000  and then I think we just get inured to the feeling of things ever making sense at all
 
      00:25:16.000  I just wanted to talk about the little light inside you there
 
      00:25:20.000  this little light of mine touch your little light with my little light
 
      00:25:26.000  well yeah one thing I also want to say is like this happens in academia a lot is like someone kind of has an idea and doesn't formalize it and then another person comes along and makes a little bit like a little bit of a change and perhaps gives it a name and what not
 
      00:25:38.000  so I'm not sure, like, who exactly owns this idea, but we should credit at least Nick Beckstead, who had a thesis on this, and I think this is maybe the earliest formal work on long-termism
 
      00:25:52.000  I do admire people dedicating a PhD to problems they think are important and then I just think it's the role for the rest of us to criticize the hell out of it but people need to generate ideas and that's good
 
      00:26:02.000  so yeah shout out to his thesis but I realize we never said exactly what the main thesis of long-termism is and so we should probably do that
 
      00:26:12.000  so the general idea here is that humans have not been around very long. Like, if you look at the typical lifespan of a mammalian species, humans evolved relatively recently,

      00:26:28.000  and so we should expect that our species continues long into the future, even if you just use the underlying base rates for, like, mammals. And then you take into account that humans have this unique creativity

      00:26:42.000  and are able to adapt and construct technology that allows them to thrive in new environments, and so then maybe your expectation for how long humans will be around is maybe 200,000 more years, a million more years,

      00:27:00.000  whatever, depending on how creative we are. And so therefore, because we're still in our infancy as a species, most of the people who are yet to live actually come in the future.
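
A quick back-of-the-envelope sketch, in Python, of the scale being claimed here. Every number is illustrative and comes from neither the paper nor the hosts: a hypothetical one-million-year species lifespan, roughly today's global birth rate held constant, and the common ~100 billion estimate for people who have ever lived.

```python
# Purely illustrative numbers; none of these figures come from the paper.
past_people = 1.0e11           # ~100 billion humans estimated to have ever lived
years_remaining = 1_000_000    # hypothetical: we last about as long as a typical mammalian species
births_per_year = 1.4e8        # roughly today's global birth rate, held constant for simplicity

future_people = years_remaining * births_per_year
print(f"future people ~ {future_people:.1e}")                  # ~1.4e+14
print(f"future / past ~ {future_people / past_people:,.0f}x")  # ~1,400x
```

On numbers anything like these, the not-yet-born outnumber everyone who has ever lived by orders of magnitude, which is the intuition the argument leans on.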
 
      00:27:15.000  So, I think it's written somewhere, you know, I think Toby Ord uses this in his book The Precipice: like, if the human story were a book, we'd be on the very first page. And so there are so many millions and billions and trillions of people that come after us
 
      00:27:31.000  that if you value people equally, independent of the time in which they live, then most of our moral value actually just lies in the future as opposed to the present generation, and so if we are truly good, impartial, axiological utilitarians,

      00:27:54.000  then we should be mostly concerned with these people in the future, right? So if there is a choice between helping people now and helping the billions of people in the future, then we should help the people in the future. So that's why the consequences of helping people now

      00:28:10.000  wash out. And then, when Vaden starts talking about the expected value calculus, how that gets imported into this discussion is via the number of people we expect to live in the future. So you start taking the probability of us colonizing the stars and being able to live until the heat death
 
      00:28:30.000  of the universe, or maybe there's the probability of us just lasting as long as the typical mammalian species, so for each of these possibilities you generate some ad hoc number of humans, this is where my tone is going to get a little more skeptical, for how many people, you know, will live in each of these scenarios, and you sort of take an average over this, and you say, well, you know, in some of these scenarios there are going to be ten to the 50 more humans that come about, or sentient beings,

      00:28:58.000  not necessarily just humans, and some of them only a couple trillion, and so, you know, even on the most conservative of these estimates, our expected value calculus gives us that there will be so many people in the future that it's worth mostly paying attention to them
 
      00:29:15.000  as opposed to the people now and it's the expected value calculus applied to different actions you can take right so the idea is that if I drink a glass of water that is going to have like a ripple effect into the future and change the probabilities and the various alternatives
 
      00:29:36.000  if I instead of drinking a glass of water I donate money to charitable organizations that will have a different ripple effect into the future and we can somehow foresee this ripple effect into infinity well enough to summarize it with a single number
 
      00:29:54.000  and we're going to take this number and we're going to use that to make a decision, and say, okay, well, I shouldn't drink water, I should instead donate to charity, and so the expected value calculus is a way to adjudicate between different decisions
 
      00:30:08.000  that's all I want to say is that the probabilities and stuff are associated with one of many different choices and then the choice you choose is the one that's going to maximize the expected value
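
As a minimal sketch of the decision procedure being described, not a calculation from the paper, with completely made-up probabilities, payoffs, and action names:

```python
# Toy expected-value comparison between two actions; every number here is invented.
# Each action maps to a list of (probability, value_if_that_scenario_happens) pairs.
def expected_value(scenarios):
    return sum(p * v for p, v in scenarios)

actions = {
    "donate to far-future cause": [
        (1e-10, 1e15),       # tiny chance of influencing a vast far future
        (1.0 - 1e-10, 0.0),  # otherwise, assume no effect at all
    ],
    "drink a glass of water": [
        (1.0, 1e-6),         # negligible but essentially certain benefit
    ],
}

for name, scenarios in actions.items():
    print(f"{name}: EV = {expected_value(scenarios):.2e}")

best = max(actions, key=lambda name: expected_value(actions[name]))
print("chosen action:", best)  # the huge speculative payoff swamps the certain tiny one
```

The hosts' complaint is not with this arithmetic itself but with the inputs: the scenario list, the probabilities, and the head counts are invented, yet the machinery lends them an air of rigor.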
 
      00:30:21.000  maybe we should start getting into like what the premises are whether they hold water and then what are the alternatives are like you know we keep criticizing the hell out of this but like what else can we do because my sense is that a lot of people in the community say yeah
 
      00:30:37.000  these numbers are super they're pretty unreliable right like we definitely can't know for certain that in world A there's going to be 10 to the 15 plus 3 people and in world B there's going to be 10 to the 17 people
 
      00:30:53.000  but we have to try to solve this problem, because there could be so many people in the future that even if we're wrong, you know, the expected impact we'll have here...

      00:31:07.000  you can just keep recursively appealing to the expected value calculus: the expected impact of anything we can do on this front is so important we can't ignore it, right? This is the problem we must solve. I know it's uncertain, I know it's hard, but by God, those 10 to the 17 people need us,
 
      00:31:24.000  so we got to focus on that. It's like a... well, also, in one of our conversations I kept using the phrase, like, it's impossible but we have to do it anyways, it's so important, we have to do the impossible, and something's got to give,
 
      00:31:39.000  and what's not going to give is the impossible. But Popper talks about this too, because you notice that this was the same injunction that was used to recruit people into communism: so, the revolution is coming,
 
      00:31:56.000  listen I know it's hard I know your moral intuitions think it's not going to be coming but it's coming and I need you to join the revolution and come help to bring about the inevitable and you see a similar kind of paradox there right which is come help us do something which is going to be done whether or not you're helping
 
      00:32:15.000  and so you basically have no choice the choices made for you you're either joining the revolution you're helping to bring it about or you're getting in its way and making the world worse the exact same logic is applied here with different words
 
      00:32:30.000  so the consequence of not donating to AI safety, which is the continual gravitational mass that all of these arguments keep getting steered towards, everything always steers towards AI safety, so just know that there's this, like, planetary system, and all arguments kind of just get pulled over in that direction, always and forever, in this space, but we'll ignore that temporarily
 
      00:32:56.000  the point is, either you sit at your desk and do nothing and you're basically complicit with the death of an infinite number of people, in expectation obviously, because no one could know this, there's so much uncertainty, but in expectation
 
      00:33:09.000  you'd be complicit or or if you make the right action then you will no longer be complicit you'll help to prevent the inevitable and so it's the same move different language but it's this like it's so important that you've got to come help and you can't think for yourself
 
      00:33:25.000  stop thinking for yourself here let me give you the tool to think I will give you the way that you should be thinking about this problem either through the doctrine of Marxism or through this mathematical tool called expected value calculus
 
      00:33:37.000  you can totally think for yourself using the tool that I give you and that's how this thing works and notice how easy this justification is to change on the fly right so if someone says oh I'm really unsure about like work on AI safety
 
      00:33:52.000  I could either, like, go work at the AMF or I could go do AI safety work, and, you know, I think maybe AI safety work only has, like, a 0.00001 chance of influencing the far future
 
      00:34:07.000  well then someone can say well let me tell you how many people live in the future it's going to be at least 100 quadrillion times that so in expectation you are in fact saving millions of lives
 
      00:34:21.000  and they say no no but hold on but then that seems even less believable my credence for that has now gone from 10 to the negative 10 to 10 to the negative 11 you're like ah but it just so happened I have good news
 
      00:34:35.000  I have good news for you sir that doesn't matter because in expectation you're still you still have to come help me
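
To see why that exchange never bottoms out, here is a toy version of the regress, with invented numbers: each time the skeptic lowers their credence by a factor of ten, the claimed number of future beneficiaries is raised by a factor of a hundred, so the expected value only ever grows.

```python
# Toy version of the regress described above; all numbers are made up.
credence = 1e-10         # skeptic's probability that the intervention matters at all
claimed_people = 1e17    # advocate's claimed number of future people helped

for round_number in range(4):
    expected_lives = credence * claimed_people
    print(f"round {round_number}: credence={credence:.0e}, "
          f"claimed people={claimed_people:.0e}, expected lives={expected_lives:.0e}")
    credence /= 10         # "that seems even less believable"
    claimed_people *= 100  # "let me tell you how many people live in the future"
```

However small the credence gets, the claimed payoff can be inflated faster, which is why the hosts treat this as a Pascal's-mugging-style move rather than an argument.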
 
      00:34:42.000  I think, it's not funny, but it is funny, sometimes I still can't help laughing at it, I don't know if it's because I'm laughing at the horror of it or if I just honestly find it funny. But so I think, you know, people, especially those that are very sympathetic to this thesis, are, like, so close to being fed up with us right now, because we're just making fun of cherished beliefs and we're not offering any alternative, so let me offer some.
 
      00:35:02.000  So, so, the Pascal's mugging case, I'm gonna write a separate piece about it, it deserves, like, a slightly longer treatment, but I don't want to hide the explanation of, like, why it's not a paradox, and the reason is this: it's because, you know, the mugger is giving you a bad explanation. So, in the thought experiment,
 
      00:35:20.000  she either purports to have magical powers or to be able to torture an absurd number of people simultaneously, depending on which version of the thought experiment you're engaging with, and this is just a bad explanation, and we throw out bad explanations for phenomena all the time, and we don't assign tiny credences to them, we don't assign tiny credences to every plausible theory, every plausible world, we don't do this in science, because it's not the way to get to the truth
 
      00:35:44.000  and then once if there's a better explanation for a theory we abide by that explanation and then we work with that until we falsify that one and we have a better explanation but we don't just assign credences to every possible explanation
 
      00:35:55.000  you don't, you don't average out relativity, yeah, string theory and quantum mechanics in order to get the new theory? No. Yes, exactly. So, you know, we had Newtonian mechanics and we have Einsteinian relativity, and both of these at some point were competing explanations, and so we run experiments and we decide,

      00:36:10.000  oh, there's experimental evidence that Newtonian mechanics does not account for, and so we throw it out as the worse explanation, we don't still hold it with a credence of 0.25, and it's just not the way to get to knowledge. And then, when at some point there are two

      00:36:24.000  competing explanations, right, so right now, for the theory of everything, quote unquote, we have quantum mechanics and we have general relativity, sort of the two pillars of modern physics, but it makes no sense to assign these both probabilities that sum to one,
 
      00:36:38.000  right now what you do is try and criticize the hell out of both of them until one of them fall short and and or until you come up with a better explanation that can account for everything seen
 
      00:36:48.000  and so anyway, this deserves, like, some more treatment, but, you know, the mugger claiming that she can hurt billions of people at once or that she has access to magical dimensions is just a bad explanation, and so we don't abide by it and you don't give her your wallet. So that's, that's the answer.
 
      00:37:02.000  we talk about Einstein and Darwin but just it's important to remind people that this applies to every thing you do in your life if you have to go to the supermarket and someone gives you the address but the directions
 
      00:37:19.000  they gave don't really make a lot of sense, because you know that it's not where they say it is, then that's a bad explanation, and then you ask questions, and that is criticism, it's interrogating the idea,

      00:37:32.000  it's not an appeal to authority, it is a recognition that every human being is capable of understanding the world and making sense and producing ideas that can benefit it, and when something doesn't make sense to somebody, they deserve, and should ask for, and should not settle for, anything

      00:37:48.000  less than a good explanation. And the insistence that all explanations must come in one and only one form, and the form is expected value calculus, is so limited, I would say.
 
      00:38:03.000  It should appear troubling to one if one can't let go of something, I think. And the expected value calculus, I'm surprised to find that, it would be like someone not being able to let go of the Pythagorean theorem.
 
      00:38:22.000  ask yourself how ridiculous it would be if you made moral decisions based on the Pythagorean theorem if you just computed hypotenuses all day long and then used it to solve questions of like should you go to university should you go traveling
 
      00:38:38.000  and you just computed hypotenuses all day, every day. This is lunacy, this is absolutely, this is not helping you solve any questions of substance, and expected value calculus is the same fucking thing. You're just doing math, and that is great, and math is a tool, like you said, but it is not an oracle,
 
      00:38:57.000  and once you make the shift in perspective and you realize that this whole ocean of paradoxes that they're just swimming in can all go away instantaneously like you just have to laugh except that you cry because there's millions of dollars
 
      00:39:15.000  so excellent. And we should say, like, this is David Deutsch and Karl Popper and David Miller, and so there is a tradition of this, this is not widely known, and that's part of the reason why we are doing this podcast
 
      00:39:29.000  basically, yeah. I mean, but one thing I'd say is, like, yes, like, Deutsch and Popper, they put their finger on this, like, this is how we proceed, but people, as you say, people do this in day-to-day life, this is how we know how to reliably generate knowledge, right? What we don't do is assign
 
      00:39:42.000  the credence that someone was lying about the supermarket address and then also the supermarket possibly changed addresses in the last day or the last year or that it's been painted or something or that we're living in a simulation to
 
      00:39:53.000  or, yeah, again, the simulation argument falls out immediately, it's, it's just a bad explanation, it explains everything. Yeah, all of these paradoxes which Bostrom and

      00:40:02.000  Yudkowsky just churn out, like, minute by minute, most of them just immediately fall by the wayside, because they just make up fantasy worlds that don't explain anything; it explains everything, and that's why it's a bad explanation
 
      00:40:17.000  and so yeah this is what I mean by being hoodwinked by the math right like if you only abide by math yes these actually become real problems that you have to sit down and scratch your head over and write papers about but if you like step outside your mathematical toolkit and ask like
 
      00:40:30.000  wait how do we resolve how do we generate knowledge normally like what's our most reliable way to generate knowledge it doesn't have to be in mathematical form right we just like seek better and better reasons for believing things
 
      00:40:41.000  in the, just in the piece that, like, I'm putting together, I talk about David Foster Wallace at this point, because his brilliant Kenyon College commencement address is titled This is Water,
 
      00:40:56.000  and he describes a little parable of, like, I'm sure most people will be familiar, but just for those who aren't, two fish, like, swimming down the stream, and then the big older fish is swimming back at them,
 
      00:41:08.000  and the older fish says, morning boys, how's the water? The fish swim on a little longer, and one eventually turns and says, what the hell is water? And the water here is the framework, it's the network of assumptions and
 
      00:41:22.000  beliefs that you hold that you're constantly swimming in that is really hard to notice when it's everywhere all around you at once that's why the water example is a useful one
 
      00:41:33.000  but once someone points it out, then you can't not see it, it pops out like a 3D image, and it's so striking, once you recognize this, to see an entire community of people all just drowning when all they need to do is, like, move to the land.
 
      00:41:51.000  It is so perplexing, because all of the criticism which they anticipate is all still within the same framework, which is quite surprising, because Hilary Greaves has co-published with David Wallace, she has written extensively on David Deutsch, like, she knows these people,
 
      00:42:08.000  and yet, just complete refusal to try something new, to consider a different approach, to consider the Popperian-Deutschian worldview, or even, not even, you don't have to go that far, just try something that's not expected value, you can just try that,
 
      00:42:27.000  and just try it, to see how reasoning goes, because, as you said, it is so intuitive, it's innate within all of us. The thing that makes Popper and Deutsch, like, stand out is that they just describe what we do all the time. It isn't a weird theory that we have to conform to, it is a description of what we are already doing. And then, once it's recognized that human beings have this ability to produce new knowledge, and that this is just simply a description of that,
 
      00:42:56.000  you can identify when people are deviating away from that course or if they are hewing to it and undergoing all the struggles which come from trying to produce new knowledge and it's hard criticism is a pain in the ass it sucks it's not fun to be criticized
 
      00:43:12.000  you can write something, you're proud of it, and then people criticize it, it sucks. I get criticized on this show every fucking time and end up changing my mind, it's the worst, every week is like another fucking existential catastrophe for me, god damn it.
 
      00:43:25.000  This is the process, and then you seek it out and it becomes, like, who you are and what you require from your friends and your partner and from everyone: criticism and the desire to improve one another mutually, forever. Then you just see a bunch of people standing around in a circle shooting themselves in the foot forever, and you, you're trying to help, but they just start shooting at you.
 
      00:43:53.000  and so yeah so I don't want to leave you know I don't want to end part one without at least saying what at least my thoughts are on long termism and like what the other framework is and the other framework is put simply it's don't try and measure that which is
 
      00:44:10.000  not measurable. So right now there is this tendency, as you said, to say, but there could be so many future people, we have to try. No, you don't, you don't have to try, and indeed, we don't accept this kind of reasoning in any other aspect of life, in any other scientific endeavor, right, we don't try and predict things thousands of years out,
 
      00:44:34.000  because we know we don't have the knowledge to do that, and so the best thing we can do is work with the present and try and make the future better that way. So one thing I want to emphasize here is, if you throw out longtermism as a philosophy, you don't actually have to throw out the care for future generations.

      00:44:54.000  So there is a place to stand where you say, we are concerned about people in the future and we do give them moral weight. So if you know Bob is going to exist in the future, you don't have to discount his suffering. But having concern for them is different than prioritizing them,
 
      00:45:11.000  it's different than saying, we have to work on that because it's so important. Because the answer is, working on that cannot possibly be effective, because we don't know anything about the future, we don't know what problem situations people are going to be facing,
 
      00:45:26.000  and I don't know what society is going to look like how we generate knowledge is to incrementally work with the problems we have now solve those problems those problems generate new problems and we work to solve those trying to predict things out a thousand years
 
      00:45:40.000  and so what's so bizarre is they recognize this in the piece right so section 3.1 points out like life has gotten so much better and I think they even use the word incrementalism I think they even you know they say every generation we've improved a little bit we've improved a little bit
 
      00:46:03.000  why has every generation been able to do that because they solve the problems at the time because those are the only problems they know how to solve if we had stopped trying to make progress right after the enlightenment and said let's try and predict what the year 2500 is going to look like we would have just stood at a standstill
 
      00:46:19.000  and then to say, well, so one move that I see made a lot is, well, here's how we improve the long-term future: we should improve things like the scientific process, or political institutions, or try not to create war between great powers.
 
      00:46:36.000  this is a non answer you don't have to adopt long termism to think these are important problems this is what we're trying to do all the time collectively we're always trying to improve political institutions we're always trying to improve scientific process we're trying to speed up the rate at which we solve problems
 
      00:46:50.000  This doesn't require adopting longtermism as a framework. Can I pause you? I missed what the move was, can you say that again? Can you set up the situation and then what the move is? I know you said it, I just missed it. And so, yeah, so, I see, so there's this, so in general, right, there's, like, the philosophical claim
 
      00:47:07.000  that what matters most is future generations, but then you can start drilling people on, like, what does it mean to care about that, right? So what sort of actions should we take? And this is where the paper spends, like, you know, five or six pages sort of cataloguing this,
 
      00:47:21.000  so most prominent reason is like well we should work to reduce the threat of advanced artificial intelligence because this could plausibly take over the world we've spent like you know an episode or two talking about that so I don't want to dive into that
 
 
      00:47:55.000  and to adopt those sorts of solutions, to think those sorts of problems are worth solving, doesn't require adopting longtermism as a framework. So, you know, the thing I want to emphasize is, like, you actually, like, right now, as EA currently stands, the problems,

      00:48:09.000  and, like, the problems it prioritizes, they don't actually require longtermism as a framework. It's like we're in this weird uncanny valley with this problem, where, like, the actual, like, things that people are proposing to work on mostly don't require longtermism, except possibly AI safety,
 
      00:48:23.000  but, again, we should save that for another time. But the rest of it are actually very real concerns, like, you don't have to, you don't have to buy that we should only care about people a million years in the future to worry about nuclear threat,
 
      00:48:36.000  to worry about climate change, like, you know, you don't have to buy these. And so what, what I'm really criticizing here is the possibility of people developing new ideas based on this philosophy that throw out any of these future concerns or current concerns

      00:48:53.000  in favor of, like, stuff farther down the road. So we're in this, like, bizarre place where the philosophy is really weird, but the actual practical implications of it so far aren't insanely out to lunch. So that is exactly, okay, so that is a really important point, I think.
 
      00:49:08.000  The philosophy is making all these arguments that what we should care about are people literally a billion years from now, that's not a straw man, they say a billion years from now, and then from this they come up with the number ten to the fifteen, which, again, they just came up with, big numbers,
 
      00:49:33.000  but the conclusions that they draw from this is, you may want to reallocate some of your, like, charitable portfolio, charitable giving portfolio, to some causes which are slightly more long-term focused.
 
      00:49:55.000  I can get there without doing any of that Bayesian expected value calculus just by saying it makes sense to distribute your portfolio slightly so that you care about the future as well like the future you're going to be there
 
      00:50:08.000  and so you want to have a wide portfolio you don't need to do all this nonsense but what you do need to do this nonsense for is the apocalyptic AI doomsday scenarios and so I don't think we can actually divorce this subject as much because
 
      00:50:25.000  The only causes which actually require this nonsense are those which don't have good explanations that could be used instead. So I want to emphasize two points. One is that the explanations-are-primary perspective is a unifier of morality and science and progress in general, because explanations apply just as much to morality and moral claims as they do to scientific claims,
 
      00:50:52.000  so it is a unified view and the second point is that when good explanations aren't available you basically have two options you can pivot your focus because something's wrong or you can start generating terrible ones terrible explanations because your focus you're not going to pivot that at all
 
      00:51:17.000  and what this produces are arguments like long-termism, which aren't needed in, say, defending climate change research, defending anti-nuclear-terrorism efforts, defending all sorts of these ideas which we can come up with good explanations for instantaneously, we don't need any of this nonsense,
 
      00:51:39.000  but we do need it for these scenarios which they draw out in the piece and so that should put people on high alert I think that when people have to default to the long-termism claim it's because they don't have a better one that they could use
 
      00:52:00.000  they shouldn't need this for issues which are pressing an immediate or within a generation or two so it should put people on guard because it is the default move it's the one that you can always turn to when all other explanations fail
 
      00:52:20.000  you can pull this out of your pocket and talk about expected values till the fucking cows come home because you can always do this it doesn't have any requirements to be constrained by what we know about global poverty what we know about geo politics and the likelihood of governments forming a world
 
      00:52:39.000  government, that sort of thing. But yeah, like, so where we have real knowledge, where we have actual information, you tend to get much better arguments coming out, and so I just want to caution people who are hearing all this stuff for the first time that it should be seen as a signal that there's not something better underneath.
 
      00:52:58.000  That's a really good point, I think, because the explanation is the unifier of all this stuff, right? Like, it doesn't just apply in science, it applies in morality as well. Why we should care impartially about people in different countries is a good moral explanation, and that's why we take it seriously,
 
      00:53:12.000  it's not because we have some credence in it, it's not because we justify it with numbers, it's because we have a good explanation for why that's the case, and we seek good explanations too, and we should continuously seek them from other people. But yeah, so, like, where are we? We introduced the main thesis,
 
      00:53:30.000  we haven't really explained how they argue for it and the paper introduces the claim then it talks about what would change if people endorse the claim focusing mainly on charitable giving but then they do say that they expect it to influence like all sorts of decision making not just charitable giving which is where
 
      00:53:54.000  again it gets quite frightening. Yeah, they say, quote, we believe that axiological and deontic strong longtermism, and we're not gonna, I don't think we're gonna differentiate between those, I don't think that's worth it,

      00:54:05.000  are of the utmost importance: if society came to adopt these views, much of what we would prioritize in the world today would change, end quote. And so any idea that purports to, like, change our value system quite radically deserves to just, like, be scrutinized.
 
      00:54:20.000  so yeah and I feel like in the EA community right now there's just not enough criticism being generated of the idea or at least the criticism is only of a particular sort where people argue about the precise probabilities you can place on certain estimates or something
 
      00:54:37.000  There's not much coming externally to that framework. Yeah, so, so, like, I think you have a much better sense of the kinds of conversations that are happening within the community, and I would just love to go into that, if we can, like, what kinds of arguments are being generated,
 
      00:54:56.000  because it's just so apparent to me that this is a recipe for disaster. And it's not there yet, of course not, but telling people that they don't really have to care about how they treat this generation, the next generation, or generations up to about a thousand years from now, that it doesn't really matter,
 
      00:55:13.000  is, like, not strong long-termism, it's, like, sociopathic long-termism. That's very, very dangerous. So, I mean, yeah, the consequences certainly aren't spelled out like that; the framing is much different. So the framing is something like:
 
      00:55:28.000  there's extraordinary suffering in the world today, and we could work to try and alleviate this. You know, there's the suffering of animals, there's the suffering of humans, and there are a variety of means we could use to try and alleviate that suffering.
 
      00:55:43.000  But there's this other group of people who are completely ignored in most moral frameworks and in nearly all political discussions, and those are people who will live in the far future.
 
      00:55:58.000  And this group of people, from, you know, like a social justice standpoint even, is completely neglected, right? So if we think there are, like, some groups of disenfranchised people today, the people of the future have absolutely no say in how they will be treated by us, right? They aren't represented in our political processes,
 
      00:56:17.000  they aren't taken into account in most moral frameworks. And so the position really is one of compassion, I think, for most people in the community, right? So it's not that they want to ignore the suffering in the present, right, even though that happens to be a consequence of the view;
 
      00:56:35.000  it's that they're looking at the suffering and they're cringing and they're thinking, should I do something about this? And then they're weighing it against this other group of people, which they see as more vulnerable.
 
      00:56:48.000  Right, so they see a larger group of people as more vulnerable, and so they're saying, like, I know it feels wrong to ignore the people who I could be distributing bed nets to, but there's this argument that tells me that there are going to be even more people
 
      00:57:04.000  I could help in the future. And so all such ideologies—and I say the word ideology not as a term of disparagement but as a way to classify this idea amongst similar other ideas—so, ideologies tend to evolve certain defense mechanisms to make them impervious to criticism,
 
      00:57:26.000  and all of them tend to have compassion at the root: the compassion for the working class, compassion for the poor Aryans who are being overtaken by the Jews, compassion for the Muslim populations who are being constantly bombed by the United States.
 
      00:57:42.000  Compassion is always there, or at least there in heavy helpings. This is where Paul Bloom was so excellent when he talked about the problems with empathy, right? Because it can be so easily turned in slightly the wrong direction, and people will run off cliffs thinking they're making the world better.
 
      00:58:00.000  This is always going to be with us, and at least with this long-termism thing it feels like—so, like, if I had the opportunity of waving my wand and getting rid of postmodernism after the first publication, preventing this whole disaster in philosophy,
 
      00:58:14.000  I would love to do that. And I anticipate long-termism as being close to that, and so I feel like it is just on the cusp of doing the same to charitable giving as what postmodernism did to philosophy and the humanities in general.
 
      00:58:31.000  So anyway, the fact that compassion is at the root of it makes perfect sense. Of course it is, of course it is. Maybe the thing that differentiates it is, often it's compassion for people who are alive right now, and then you're promised utopia in the future,
 
      00:58:46.000  and in this one it's compassion for people who don't exist—it's compassion for people who are five billion years ahead of us. And so it's still channeling the compassion sensors in our brain; it's just a different flavor of it.
 
      00:59:00.000  And so, I think that's exactly it. And some pieces try and make it more real; they try and hijack this empathy module that we have. So Nick Bostrom wrote this essay called Letter from Utopia, where we're being written to by people in the future:
 
      00:59:16.000  don't mess things up, because we have reached a state of utopia in the future and things are perfect and we experience constant bliss all the time, but for us to actually reach this state, you are required to make the right choices in the present, today.
 
      00:59:31.000  And so it's this heavy moral weight, I think, that people feel in the community, where it is hard, of course, not to help those alive today, but they do feel like, by lowering extinction risk by 0.0-whatever percent or 1 percent, or wherever that number comes from, you are helping
 
      00:59:50.000  many more people than you would if you were to donate to the Against Malaria Foundation or something. So yeah, it's definitely coming from a good place, which makes the idea even harder to criticize,
 
      01:00:04.000  because there's no evil scapegoat that you can point to that's, like, obviously steering people wrong. There are just bad ideas that very good people have started to adopt, and they can steer them in wrong directions. And I guess, yeah, this is most ideologies:
 
      01:00:22.000  no one is—it's the myth of pure evil to think that there's just some person at the top who's, like, controlling everything just in order to laugh at everyone. Everyone always thinks they're in the right when they adopt extreme positions.
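To make the expected-value comparison described a moment ago concrete, here is an illustrative back-of-the-envelope version with made-up but representative numbers (my own, not figures quoted in the episode or the paper): suppose the far future contains $10^{16}$ people in expectation, and some intervention lowers extinction risk by one in a million. Then the expected number of people helped is

$$10^{-6} \times 10^{16} = 10^{10},$$

which swamps the handful of lives a comparable donation to a bednet charity could save. Whether that left-hand expectation is even well defined is exactly what the rest of the conversation takes up.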
 
      01:00:34.000  Okay, so there are two premises that I think get the thesis off the ground—what I think they would claim to be, like, necessary and sufficient conditions.
 
      01:00:46.000  The first one is that, quote, in expectation the future is vast in size—vast meaning the number of humans or sentient beings to come after it. The second being that all consequences matter equally,
 
      01:01:02.000  with equality here referring to time—so, temporally, all consequences matter equally. So if Alice suffers by 0.5 units right now and Bob suffers by 0.5 units in 150 years, those are indistinguishable from an ethical standpoint,
 
      01:01:21.000  and so we should be equally concerned with both of them. So, I think you have done the most thinking about the actual mathematical aspects of the claim that in expectation the future is vast in size—you want to take that one?
 
      01:01:33.000  So, what I should say is, this is what the majority of my little forthcoming piece is going to be on, and so anything that I say here should basically be heard again after you read that piece, because it's going to be hard to keep in mind without having some visual aids.
 
      01:01:55.000  So I think that, in expectation, the future is undefined, and that's basically the main thrust of my argument for that part. For the second claim, which says that we shouldn't prefer things to happen sooner—that's also totally wrong, so I'll do that one in a second. But:
 
      01:02:09.000  in expectation, the future is undefined. What do I mean by that? So, what I mean is, when you go to do an expected value calculation, what you're doing is you're multiplying a probability by a utility, and the utility is conditioned on the action—
 
      01:02:24.000  both, sorry, the probability is conditioned on the action as well. The thing, though, is that in this paper, and I think in all of this work, they don't actually write out all the symbols so you can see all the various interlocking pieces.
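For reference, the calculation being described is the standard textbook expected-value form; written out over a set of outcomes $S$ with a probability measure conditioned on the action $a$ (a generic sketch, not the paper's own notation):

$$\mathbb{E}[U \mid a] \;=\; \sum_{s \in S} P(s \mid a)\, U(s, a), \qquad \text{or} \qquad \mathbb{E}[U \mid a] \;=\; \int_{S} U(s, a)\, \mathrm{d}P(s \mid a)$$

for an uncountable $S$. Either form presupposes that $S$ carries a well-defined probability measure, which is exactly the issue raised next.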
 
      01:02:37.000  When you compute probabilities, you compute a probability over a set of things. So let's talk about infinite sets for a second—we're getting super technical, because we have to go down into the details to argue against this stuff.
 
      01:02:51.000  Yeah, before you go down into the details, I want to inoculate us against a form of criticism here, which is that I could see someone saying, okay, but, you know, you're taking this too technically, you're taking this too literally, like, we're just talking about what will probably happen.
 
      01:03:06.000  No, you are not allowed to make that move. You can't keep talking about expectations and expected futures, and expected futures on repeat, if you're not ready to rigorously define it.
 
      01:03:16.000  There's, you know—if you're going to start using terms like expectation, actually reasoning based on the expected value calculus, you have to go down the rabbit hole of, like, what are you measuring, what's your probability measure over the set of futures?
 
      01:03:29.000  You're using those tools, you started this debate, right? So I just want to make sure that there's no room to stand where you're saying they're getting too technical. You cannot—that's an illegitimate move.
 
      01:03:43.000  I like that. Another way to say that is, we didn't bring the math into this. Exactly. Like, again, we like math, math is great, we just know it's a tool that can only be used sometimes. But anyways, we have to get technical, because that's what they're asking for. So:
 
      01:03:57.000  I'm going to use the measure-theoretic view of probability, because that is the one that all mathematicians use. And let me just say for a second, like, what do you think mathematicians are doing all day? They're figuring out when you can and cannot apply probability.
 
      01:04:10.000  Like, it is not just this arbitrary thing you can fucking throw everywhere you like. There are very careful conditions for when you can apply probability and when you can't, and the whole reason we have measure theory, the reason we talk about sigma-algebras and stuff, is because you can't just apply
 
      01:04:26.000  probability all willy-nilly to infinite sets. If you try to do that, everything will break, and that's why we have sigma-algebras and that's why we have all this stuff. So—I mean, math is fucking hard.
 
      01:04:36.000  That's why math is hard. And if you don't know what I'm talking about when I say the word sigma-algebra, then you shouldn't be talking about expected values.
 
      01:04:42.000  Yeah, exactly. Okay, but so let's just consider an infinite set—for this we're going to have to put a fucking chapter marker in here, so that people who don't want this can skip ahead.
 
      01:04:53.000  Again, we didn't bring expected values into the conversation, so I'm just describing how it works. Okay, so if you want to apply the notion—the very notion—of probability to a set of things, the set of things that you apply it to has to abide by certain conditions.
 
      01:05:13.000  Okay, what does that mean? The technical jargon term would be that it's a measurable set, but if I just say that, that would be an appeal to authority, because I'm using a word which I know people don't understand. So I'm not going to say it's just a measurable set and then pretend like that answers the question, because it doesn't.
 
      01:05:30.000  Let's talk about a set that's not measurable, for example—that will make the point stick. So I want the listener to imagine an alternating sequence of red and black balls. This goes red, black, red, black, red, black,
 
      01:05:48.000  okay? Then I ask you, what's the probability of drawing a red ball? And the obvious thing that you would say is that it's 50%, because they're alternating red, black, red, black.
 
      01:05:59.000  Turns out that's not true—that's wrong. It's wrong because it's an infinite sequence. So what you can do is you can take black balls—like, let's say, take five black balls from the future—notice I'm using the word future—
 
      01:06:11.000  because you can draw from it an infinite amount. Take the five black balls, put them next to a red ball; take another five black balls, put them next to the next red ball; five more, put them next to the next red ball. And now what originally looked like an alternating sequence looks like
 
      
      01:06:28.320  Black, black, black, black, black, black, black.
 
      
      01:06:32.840  Um, so now what originally looked like a 50% probability of drawing red

      01:06:37.260  is one in six—it is, um, little chunks of six-ball sequences.
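As a concrete illustration of the rearrangement trick (a minimal sketch of my own, not code from the episode), the snippet below approximates the two orderings with long finite prefixes and prints the running relative frequency of red: the "same" balls give one half under the alternating order and one sixth under the rearranged order, which is why relative frequency over an infinite collection isn't well defined without extra structure.

```python
# Minimal sketch: the apparent "probability of red" in an infinite
# red/black collection depends entirely on how the balls are ordered.
# We approximate each infinite ordering by a long finite prefix.

def alternating(n):
    """red, black, red, black, ...  (True = red, False = black)"""
    return [i % 2 == 0 for i in range(n)]

def rearranged(n):
    """The same two colours reordered: one red followed by five blacks."""
    pattern = [True, False, False, False, False, False]
    return [pattern[i % 6] for i in range(n)]

def red_frequency(seq):
    """Relative frequency of red within the prefix."""
    return sum(seq) / len(seq)

n = 600_000  # divisible by both 2 and 6, so the ratios come out exact
print(red_frequency(alternating(n)))  # 0.5
print(red_frequency(rearranged(n)))   # 0.1666... (one in six)
```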
 
      01:06:42.320  Um, so the point of this, though, is that if a set isn't measurable,
 
      01:06:47.000  what that means is you can pretty much define probability,
 
      01:06:50.300  however the fuck you want, by rearranging stuff,
 
      
      
      01:06:56.160  but I'm just going to stay right here and keep it technical.
 
      01:06:58.840  Um, so the main point though is if you have an infinite set of stuff,
 
      01:07:03.040  let's just say two kinds of stuff, uh, it doesn't make sense at all
 
      01:07:06.400  to talk about relative frequencies, um, because you can make the relative frequencies whatever you like.
 
      
      01:07:11.280  You can just move, um, a bunch of white balls next to a black ball, uh,
 
      
      01:07:16.200  This is why, um, you can't, for example, talk about what the probability of an even natural number is.
 
      
      01:07:23.440  But that doesn't make any sense, uh, because you can rearrange the natural numbers and make that frequency come out to whatever you want.
 
      
      
      01:07:27.960  This is the thing which mathematicians are, um, very scrupulous to avoid.
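For the natural-numbers version of the same point, the relevant textbook fact (stated here as background, not a quote from the paper): there is no uniform, countably additive probability measure on $\mathbb{N}$, and the limiting relative frequency of the evens depends on how you enumerate the numbers. Under the usual order,

$$\lim_{n \to \infty} \frac{\lvert \{\, k \le n : k \text{ even} \,\} \rvert}{n} \;=\; \tfrac{1}{2},$$

but by re-enumerating $\mathbb{N}$ (say two evens, then one odd, repeated) the same limit comes out to $\tfrac{2}{3}$, and with other enumerations it can be made any value in $[0,1]$ or fail to exist.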
 
      01:07:35.440  The whole point of math is to say that when you can just make arbitrary changes,
 
      01:07:40.160  um, then, uh, this is no longer a mathematical concept.
 
      
      01:07:48.920  And this is, like, why measure theory and sigma-algebras were invented in the first place.
 
      
      01:07:54.440  They're invented because there's something called the Banach-Tarski paradox—
 
      01:07:58.800  I can't believe I'm actually saying that out loud—um, which basically says that you can take a set—
 
      01:08:02.560  I think it's, like, Euclidean space in three or greater dimensions—
 
      
      01:08:08.800  Um, and if you have a sphere in that space, you can decompose it and rearrange the pieces to get two spheres identical to the original.
 
      
      01:08:14.600  And so, uh, if you just allow arbitrariness into your system, then you can produce any result you like.
 
      
      01:08:20.720  And I think this is called, uh, the principle of explosion—or there's a word for this—
 
      01:08:26.160  that as soon as you allow one contradiction or one absurdity into your, um, uh,
 
      01:08:31.080  framework, then you can derive any absurdity you like whatsoever.
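The logical fact being gestured at here is usually called the principle of explosion: from a single contradiction, anything follows. A two-line derivation (standard logic, not from the paper):

$$P,\ \neg P \;\vdash\; P \vee Q \quad (\text{$\vee$-introduction from } P), \qquad P \vee Q,\ \neg P \;\vdash\; Q \quad (\text{disjunctive syllogism}),$$

so an arbitrary $Q$ follows from $P \wedge \neg P$.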
 
      
      
      
      01:08:38.160  We've learned that, um, we can't be all willy-nilly when we apply probability to infinite sets.
 
      
      
      
      
      
      01:08:45.640  Let's consider now the set of all possible futures, um, which is what these expectations would have to be taken over.
 
      
      01:08:51.640  Uh, and if they're not over that, then I don't know what people mean when they say "in expectation."
 
      
      01:08:55.080  Um, I've actually asked—uh, like, I confirmed this with you.
 
      
      01:09:00.680  But even if you're, like, not sure, then that is itself an indication that people
 
      01:09:04.920  aren't talking seriously when they use this word.
 
      
      
      01:09:08.800  I mean, they say "in expectation the future is vast"—
 
      01:09:11.880  so if your expectation is not referring to the set of futures, then I don't know what it is referring to.
 
      01:09:16.280  You're presumably averaging over, um, a bunch of different, uh, possible futures.
 
      
      01:09:23.080  Um, there may be a move of going to the multiverse here.
 
      01:09:27.040  Uh, I know that, um, Hilary Greaves, for example, has done a lot of work on, um,
 
      
      
      01:09:35.440  It doesn't apply to the future because of the argument from epistemology, um, that
 
      01:09:40.400  you can't predict the future when the future depends on the contents of future knowledge.
 
      
      
      01:09:49.120  The word is reach if you want to, uh, go into David Deutsch's, uh, book, but, um,
 
      01:09:53.000  even in the multiverse interpretation of quantum mechanics, there are limitations
 
      
      
      01:09:59.000  And so the recourse of, well, the multiverse will get us out of this problem—that doesn't work either.
 
      
      
      01:10:05.080  Um, but anyway, uh, so now let's consider the set of all futures.
 
      
      
      
      
      01:10:16.280  We don't know what—like, imagine trying to consider the set of all futures.
 
      
      
      
      
      
      
      
      
      
      
      
      
      01:10:37.200  Not like—you don't even have to go, like, one billion kajillion years into the future.
 
      01:10:41.040  Just go, like, ten years back into the past and see if you could have predicted the stuff that's happened since then.
 
      
      
      
      
      
      
      01:10:53.920  So the bottom line is that people are being so sloppy with their definitions here.
 
      
      01:11:00.200  Like, in the little piece, I made a figure, and it's obviously sarcastic, but it,

      01:11:06.640  I think, actually represents the arguments of long-termism.
 
      01:11:10.560  And, like, the listener should imagine a scatter plot where you draw a trend line, and on the x-axis is time into the future.
 
      
      01:11:20.800  Um, and on the y axis, it is cumulative expected value.
 
      
      01:11:27.640  Uh, and then you can just arbitrarily say the cumulative expected value of action A is

      01:11:32.680  taller, larger—the arrow is higher—than that of action B.
 
      01:11:35.880  Except this scatter plot doesn't have any data on it.
 
      
      01:11:40.000  You've just drawn lines on a fucking wall and you don't realize that when people
 
      01:11:43.920  actually use them to really solve problems, they are so scrupulous.
 
      01:11:47.760  They never just throw the word expectation in front of stuff and call it a day.
 
      01:11:52.360  They are very careful to be sure that it applies in this circumstance.
 
      01:11:58.480  Um, this is as ridiculous as talking about the set of all dreams or the set of
 
      01:12:02.360  all ghosts or the set of all sneezes—the set of, like... you can just say these things.
 
      
      01:12:08.280  I'm going to talk about the set of all dreams and then I can talk about the
 
      01:12:11.160  expected dream that we have where I'm going to take the probability of each
 
      01:12:15.120  possible dream that Ben has and I have, and I'm going to multiply the
 
      01:12:19.160  probability of that dream with the amount of, I don't know, sex that I have in that
 
      01:12:23.440  dream, and the expected amount of sex that I have is, like, infinite.
 
      
      01:12:30.680  Like we're not being at all careful with how these concepts are used.
 
      01:12:34.400  Um, and so the only sensible thing that somebody can say about the
 
      01:12:39.880  expected value of the future is that it's undefined.
 
      
      01:12:43.320  It is a poorly defined concept, and therefore it shouldn't be used to make decisions.
 
      
      01:12:51.360  Um, and we should just laugh at it because it is ridiculous and it deserves
 
      01:12:56.520  to just be seen as a good attempt and a mistake and we need to try something else.
 
      
      
      
      
      01:13:07.040  I've got to say, you have a very good—a very nice—ability to explain these things.
 
      
      01:13:12.600  I had never heard someone explain certain paradoxes the way you do, but I just
 
      01:13:17.160  want to emphasize that this is very standard math.
 
      01:13:19.080  Like, if you've ever sat in, like, an intro to set theory class and gone over the argument
 
      01:13:24.000  that the set of even numbers has the same cardinality as the set of natural numbers,
 
      
      
      
      
      01:13:37.360  This is, like, very basic—just, like, counting, you know, what size sets are, runs into
 
      01:13:43.200  difficulties when you start dealing with infinities.
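The standard argument being referenced, for anyone who hasn't seen it (background, not a quote from the episode): the map

$$f : \mathbb{N} \to E, \qquad f(n) = 2n$$

is a bijection between the natural numbers and the even numbers ($2m = 2n \Rightarrow m = n$, and every even number $2n$ is hit), so the two sets have the same cardinality even though the evens are a proper subset—exactly the kind of behaviour that makes naive size-and-frequency talk break down for infinite sets.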
 
      01:13:45.280  And the whole point, the whole reason that this mathematical edifice exists

      01:13:48.920  and there's so much work to be done, is because mathematicians are avoiding exactly these paradoxes.
 
      
      01:13:54.360  The whole point of this is that they recognize that if they introduce one false
 
      01:13:58.840  assumption—that if they slip up even once—then everything they've built comes crashing down.
 
      
      01:14:05.600  Um, if you introduce a paradox, you can just produce nonsense all day long.
 
      
      
      01:14:11.200  Um, and that's why the discipline of mathematics is very careful.
 
      
      
      01:14:18.440  And I haven't heard any good justification for this stuff.
 
      01:14:20.800  It's just this spiral down into this well of publications, uh, from which no one
 
      01:14:26.880  emerges, um, and you just get sucked into this vortex of terminology and labels.
 
      01:14:32.240  And you lose all bearing on what the fuck is going on.
 
      01:14:35.440  But yeah, if a listener has a reference for, like, where this is precisely defined, please send it our way.
 
      
      01:14:40.680  Um, I have not read Nick Beckstead's entire thesis.
 
      01:14:43.600  So possibly it's hidden in there, but, um, that's a fantastic point.
 
      01:14:47.000  Here's my falsifiable claim that it is a citation vortex.
 
      01:14:52.160  Um, and that everyone's citing everyone and there is actually no definition at the bottom of it.
 
      
      
      01:15:00.160  If there is a concept that allows us to see one billion years into the future, then
 
      
      01:15:08.520  This would change everything that we know about physics and science and epistemology.
 
      01:15:13.040  And it would be as revolutionary to claim as traveling faster than the speed of light.
 
      
      01:15:20.880  It's like when people, um, just cavalierly talk about homeopathy and they say, oh, yeah, it could work.
 
      
      01:15:26.840  And it's like, if it was true, if the more you dilute something, the more powerful it got, then
 
      01:15:32.080  everything we know about modern physics and chemistry would be completely wrong.
 
      
      01:15:37.680  And now, obviously, if it's true that we can see one billion years into the future, using
 
      01:15:41.840  expected value calculus, then that would change everything we think about time and space.
 
      
      01:15:48.640  I am decently willing to bet that it's just a mistake.
 
      
      
      
      
      
      
      
      
      
      
      
      01:16:09.000  So the second claim is all consequences matter equally.
 
      01:16:12.120  So what I understand them to be saying here can be sort of summarized as follows: like, picture the
 
      01:16:18.680  universe as a block universe, where you have, like, the three dimensions of, uh, space—three dimensions—
 
      
      01:16:26.040  Um, and then imagine these three dimensions expanded over time.
 
      01:16:30.000  So you're sort of like, you have this three dimensional axis and then you drag this out over time.
 
      01:16:35.600  And so you're given this substance that you can, like, point at any one of these
 
      01:16:40.600  times and, like, reduce all the suffering, or maybe increase the wellbeing of everything, or of every
 
      01:16:48.440  sentient being like existing at that point in time.
 
      01:16:50.600  And so I understand sort of the intuition that this paper is playing with of like, where should you
 
      01:16:56.920  point, where should you distribute that substance?
 
      
      01:17:00.400  And so in the future, um, by this sort of expected value calculus, there are going to be way more people, so you should distribute it then.
 
      
      
      01:17:09.920  You shouldn't distribute it now, even though you could cure ills now, you can increase wellbeing now,
 
      01:17:15.560  but you're going to help more people if you do it in the future.
 
      01:17:19.080  And so it may seem callous, but this is a simple utility maximization problem, right?
 
      
      01:17:27.320  We have the substance and we need to figure out how to best distribute it over time.
 
      01:17:31.320  Um, and so we need to—you know, we know there are going to be more people at a later time.
 
      01:17:35.640  We don't know exactly when we should distribute the substance, but we kind of know it shouldn't be now.
 
      
      01:17:39.840  Um, and then we'll leave the vagueness of when we should distribute it for later, or maybe in the
 
      01:17:44.240  meantime, we try and figure out how to create more of the substance or something like that.
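A rough formalization of the framing just described (my own sketch, not notation from the paper): if $N(t)$ is the number of sentient beings alive at time $t$ and $b$ is the per-being benefit of the "substance," the imagined problem is

$$t^{*} \;=\; \arg\max_{t} \; \mathbb{E}[N(t)] \cdot b,$$

and since the vastness premise says $\mathbb{E}[N(t)]$ dwarfs the present population for large $t$, the optimum gets pushed arbitrarily far into the future—which is exactly the move pushed back on next.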
 
      01:17:47.120  Um, this is a terrible way to think about consequences.
 
      
      01:17:53.200  So I'm going to reiterate that there is a place to stand here where you do care about future
 
      01:17:57.600  wellbeing, but the question is, how do we actually affect future wellbeing?
 
      01:18:02.320  Um, and the answer is not to wait and just withhold all our action thinking we know everything we
 
      01:18:10.720  need to know right now and then try to make the world a better place later.
 
      01:18:14.640  The answer is trying to solve the problems that exist right now, because those are the
 
      01:18:20.440  pressing problems and the only ones we actually have a hope of solving, because they're the only ones we even know about.
 
      
      01:18:25.480  We don't even know what the problems will be in the future.
 
      01:18:27.960  And so the way we've made progress so far and the only way we know how to reliably generate
 
      01:18:32.600  knowledge, um, is to solve the problems right now, by coming up with, like, the best
 
      01:18:39.080  theories, the best explanations we can. Uh, we apply those right now, and those generate new
 
      
      01:18:45.800  And incrementally, we figure out more and more about the world: we improve our
 
      01:18:49.480  moral systems, we improve our science, we improve our technology, and we just increase the
 
      
      01:18:54.600  It's sort of the difference between, like, a passive and an active role that people can play.
 
      
      01:19:02.280  So I get the sense that this paper sort of takes up the passive role.
 
      
      01:19:05.800  It's like we're just, you know, witnessing history.
 
      01:19:08.760  We have the ability to distribute some wellbeing at any point in time.
 
      01:19:12.440  And we have to figure out when is the best time to distribute this wellbeing?
 
      
      01:19:17.960  The question is, how do we reliably keep making progress?
 
      01:19:22.120  And the way we keep making progress is to keep solving the problems that we're presented
 
      01:19:26.600  with, because those are the only problems we can even have a hope of, of solving.
 
      
      01:19:30.800  So we just, we need to keep alive, um, a quote unquote tradition of criticism, if you
 
      01:19:36.480  want to use Popper's language, or we need to keep the method of error correction alive.
 
      
      01:19:42.760  Whosever language you want to use—the point is that there are problems right now.
 
      01:19:47.680  Those are the only problems we can solve, and solving those problems generates all
 
      01:19:52.200  kinds of knowledge, moral knowledge, technological knowledge, scientific knowledge, artistic knowledge.
 
      01:19:56.360  Um, and that's how we sort of reliably make the future better.
 
      01:20:00.880  The only thing we can do right now is act on the problems we know right now.
 
      01:20:04.320  The big glaring thing that I think this is missing, that they really have not
 
      01:20:09.200  addressed at all, is that the future and the present are completely different.
 
      01:20:13.000  The future never arrives, but we're always in the present.
 
      01:20:17.160  So we're always moving in the present, but there will always be a thousand-year horizon ahead of us.
 
      
      
      01:20:24.920  It's because they're saying you can completely and utterly disregard how you treat people over the next thousand years.
 
      
      
      01:20:36.000  At any point in time, there will be the next 1000 years.
 
      01:20:39.440  And so this is a straightforward justification to treat people cruelly forever.
 
      
      01:20:48.720  We always have the next 1000 years and they're telling people that you don't have
 
      01:20:53.320  to care at all about the consequences of your actions in the next 1000 years.
 
      01:20:59.320  And so this is just encouraging cruelty forever.
 
      01:21:04.800  And, and this is why we absolutely should have a preference for the present
 
      
      
      
      01:21:14.760  Um, and now is the only opportunity we have to make the world better.
 
      01:21:19.360  And this is exactly your point, which is that we know how to make the world
 
      01:21:24.120  better and that is by solving problems and gaining knowledge and working towards
 
      
      
      01:21:31.440  Um, the assumption that like is underlying this claim that we shouldn't
 
      01:21:39.920  prefer good things to occur now is that we're just like these beggars with our
 
      01:21:46.480  arms outstretched waiting for good things to be like poured into our bowl.
 
      01:21:50.800  We're just, like, mendicants waiting for the manna to be distributed
 
      01:21:56.840  to us, rather than being active participants, rather than being those who produce the good things.
 
      
      01:22:01.760  Like we are the ones who can produce the good things.
 
      01:22:04.160  Um, and the good things come only through human effort and human, uh, creativity.
 
      
      01:22:11.520  Um, and if you tell people that they shouldn't prefer now over later, then how
 
      01:22:17.760  defeatist is this? How much does this just strip people of the desire to try to make things better now?
 
      
      01:22:22.880  Um, and so the very act of telling people this is the thing, which is going to
 
      01:22:28.120  prevent the good things from coming in the first place.
 
      01:22:30.280  So of course, of course we should prefer the present because we are only ever in
 
      01:22:35.440  the present, um, and just try patiently to convince people to, uh, work on the problems in front of them.
 
      
      01:22:44.680  Um, and if every generation does this, then this will be a great world, um,
 
      01:22:49.280  where every generation tries to make the world a little bit better for the next one.
 
      
      01:22:54.680  This is the most moral and humanistic philosophy I could possibly, um, uh,
 
      01:23:00.920  espouse and long-termism is the negation of all of that.
 
      01:23:04.720  Fascinatingly, so Toby Ord in The Precipice, which is arguing for this,
 
      01:23:10.520  like long-termist philosophy, um, you know, he makes the argument that we've
 
      01:23:15.800  been handed down, um, progress and values from like past generations.
 
      01:23:22.320  And it's our job to improve on this and continue this tradition onwards.
 
      01:23:26.360  Um, but ironically, it's only because these past generations were focused on
 
      01:23:31.360  solving the problems of their own time, because they knew nothing of our problems, that anything got handed down to us at all.
 
      
      01:23:36.480  And so in fact, if we were to only try and focus on problems of which we know
 
      01:23:42.080  nothing about, this would be breaking the tradition, right?
 
      01:23:45.800  Um, all one generation can do is solve its problems so that it can hand the
 
      01:23:51.560  next generation an incrementally better world, a world that's marginally
 
      01:23:56.320  improved, um, that has slightly less drastic, slightly more interesting
 
      01:24:02.080  problems, um, for them to solve instead of putting a stop to this tradition of
 
      01:24:07.680  progress and instead asking only what can we do to influence the, the very far
 
      01:24:12.520  off. So, um, you know, like, again, it's not that I discount well-being in the far future.
 
      
      01:24:20.640  It's just that I recognize that the way to actually improve lives, both now and
 
      01:24:26.300  possible lives later is to solve the problems that we can now in order to
 
      01:24:31.400  continue this tradition of incrementally improving everything in each
 
      01:24:36.360  generation, our values, our science, our technology, our art.
 
      01:24:39.960  Um, and so it's just a plea for like how to actually make progress.
 
      01:24:45.200  It has nothing to do with saying fuck you to the next generations.
 
      
      01:24:50.280  I think only focusing on future generations' problems would be more of a fuck-you to them, actually.
 
      
      01:24:56.420  We did not solve any problems of our own.
 
      01:24:59.980  We were just guessing at what your problems were and trying to solve them
 
      01:25:03.100  as if we knew better than you did what they were going to be.
 
      
      01:25:07.140  Yeah. Like, like there is absolutely no way that someone in Jesus Christ's
 
      01:25:13.300  time would have the ability to solve problems in nuclear physics.
 
      01:25:16.580  And yet we are being told that there is the ability to solve problems in the far future.
 
      
      
      01:25:25.080  So just two points worth reiterating. I think one is that you don't
 
      01:25:28.200  have to throw out the worry about, like, existential risks from threats we face right now.
 
      
      
      01:25:33.000  Um, that it continues to be a problem, of course.
 
      01:25:35.880  Um, and it continues to be a problem because it is a problem right now.
 
      
      
      
      01:25:43.240  And of course we need to work to solve that problem.
 
      01:25:44.920  So we need to continue to work on nuclear disarmament, for example, and we need
 
      01:25:49.400  to work on containing viruses better in laboratories.
 
      
      01:25:53.940  Like those are real problems to solve right now.
 
      
      01:25:57.540  And, uh, the second point I wanted to make is just this:
 
      01:26:02.180  if you're in 200 AD, um, of course, you know, predicting what kind of
 
      01:26:06.740  problems you'd have in the year 2020 is impossible.
 
      01:26:09.500  You can't predict specific problems, but you could work on general things
 
      01:26:13.580  like trying to promote science or, um, reduce religious dogmatism or promote
 
      01:26:19.740  human rights. Um, those are incrementalist responses.
 
      01:26:22.860  So again, you don't need to adopt long-termism to think those are good things to do.
 
      
      01:26:27.020  Those are still the problems we're trying to solve.
 
      
      01:26:29.940  Again, we're, yeah, we're trying to promote human values.
 
      
      01:26:34.040  We're trying to improve on moral theories.
 
      01:26:36.340  Like, those, again, do not need this belief that most of our impact lies in the far future.
 
      
      01:26:44.680  Like, you know, you only have to throw out, like, this one piece of the philosophy.
 
      
      01:26:51.520  You don't need to throw out that much, unless it has a really bad
 
      01:26:55.520  explanation like you were saying, unless it's just the doomsday scenario
 
      
      01:27:00.040  That's like the only thing you have to get rid of right now.
 
      01:27:01.840  Um, and so it shouldn't actually contradict most of your cherished beliefs,
 
      01:27:05.800  but I'm worried about, you know, where this can go in the future.
 
      01:27:09.400  If this, if this advice is like truly, um, taken on board.
 
      
      01:27:13.920  It's, it's like, uh—you know how, uh, Unreal Tournament will swap out its underlying engine?
 
      
      01:27:20.520  And so the game kind of stays the same, but the whole core underneath it gets upgraded.
 
      01:27:25.400  If you get rid of Bayesian epistemology and you put in
 
      01:27:28.160  Popperian epistemology, you get to keep all of the things you care about.
 
      01:27:31.840  It's just way more powerful because the underlying philosophical core is, um,
 
      01:27:38.360  much more, uh, robust, uh, explains much more and it preserves everything you care
 
      01:27:44.640  about, everything that you like, you want to make the world better.
 
      01:27:47.640  You want to donate your time and money into improving the world.
 
      
      
      01:27:52.600  You just swap out this outdated philosophy and you put in a more robust one.
 
      01:27:57.240  It is such a refreshing perspective because all of this confusion that used to be like
 
      01:28:02.920  associated with thinking about morality and thinking about science and thinking about the future just goes away.
 
      
      01:28:10.480  Um, and that isn't to say that you just have this perfect, problem-free
 
      01:28:14.040  universe; it's just that you recognize problems and you see them and then you work on them.
 
      
      
      01:28:19.760  You just see—like, I see this current, uh, conversation about long-termism,
 
      01:28:25.240  I see this as a problem and like I just want to work on it.
 
      
      01:28:28.640  Um, and it just enables progress and happiness because epistemology and morality
 
      01:28:35.440  and a subjective sense of wellbeing are all unified.
 
      01:28:38.400  Um, because it's, it's fun to work to improve the world and make it a better place
 
      
      
      01:28:45.480  And I guess that's why we're both so excited about it too.
 
      
      
      01:28:48.920  Like it is, uh, like the world is a great fucking place and like we have made so much
 
      01:28:54.480  progress and we're going to continue to make progress and it's through
 
      01:28:56.520  conversations like this, and through amazing organizations like GiveWell and
 
      01:29:01.320  effective altruism, and William MacAskill himself is, like, a leader in this
 
      01:29:05.280  field and he should rightly be celebrated and he has an amazing legacy.
 
      
      01:29:10.600  And they just need to replace this old, outdated module with a much more robust one.
 
      
      01:29:17.600  And I think all of the things that they're working towards will be better.
 
      01:29:21.200  Well, you can even keep your, basically your entire Twitter handle.
 
      01:29:24.400  You just have to remove promoter of long termism.
 
      
      
      
      01:29:30.840  So I don't know what we're going to talk about next, because I feel like we
 
      01:29:33.600  covered a hell of a lot of territory, but hopefully it'll piss a good amount of
 
      01:29:37.960  people off because that means that criticism is working.