00:00:00.000  I'm just I'm just I'm fucking kidding myself. I'm too drunk for this.
 
      00:00:07.000  You published your piece on the EA forum, which generated a lot of good feedback really.
 
      00:00:23.680  Like I got to say, as much as we're taking long-termism to task here, I am so impressed
 
      00:00:28.840  by the response of the EA community to first your piece and then my piece.
 
      00:00:33.120  You know, you can tell people might get emotional about it at times, but all very thoughtful
 
      00:00:38.080  responses on the forum, with people like really taking the point seriously and really diving
 
      00:00:43.320  into it. Some stuff is obviously misunderstood, but I don't think it's intentional in any
 
      00:00:47.400  way. And so, you know, as I think I said to you offline, like if you're going to disagree
 
      00:00:52.560  vehemently with any community, this seems to be the community to do that with because
 
      00:00:58.160  it's just like full of lovely people who really are just trying to make the world better,
 
      00:01:02.080  who just really try and understand your points and yeah, and genuinely want to understand
 
      00:01:06.600  it's not just like a debate to like try to scare off intruders. It's like they genuinely
 
      00:01:12.280  want to understand kind of the other side and like are encouraging it. Like I think we
 
      00:01:17.480  were just put on the January 2021, like newsletter from the EA community and just like how commendable
 
      00:01:25.720  is this that they will not only engage politely and kindly with opposing views, but actually
 
      00:01:32.720  will like bump it again after all the conversation has quieted down, they'll bump it again. And
 
      00:01:38.560  so completely agree. It's been just so amazing to see the warm welcome.
 
      00:01:43.320  It's just it's literally a community full of Mauricios, where everyone is as sweet as Mauricio
 
      00:01:48.160  just wants to understand and make the world a better place.
 
      00:01:51.760  So what could be better? Yes. I guess what we thought we would do with this conversation
 
      00:01:55.560  is like super unstructured. I think the plan was to just take a bunch of the common questions
 
      00:02:02.560  and do like an audio FAQ, frequently asked questions, session. Coming up, we're going
 
      00:02:10.200  to do a crossover episode with Finn and Luca from the podcast.
 
      00:02:16.520  Hear This Idea. Hear This Idea. We're doing a crossover episode with Finn and Luca. And
 
      00:02:23.600  I think we're going to talk about long termism again. And so there'll be one more coming out
 
      00:02:27.040  on this. And then I think we're hopefully going to pivot some. You just wrote a really
 
      00:02:31.920  fascinating piece on cliodynamics, which I think we'll have to dive into at some point.
 
      00:02:37.320  You should tell the audience the feedback you got on the cliodynamics piece as a
 
      00:02:41.920  means of juxtaposing the EA feedback with the feedback from the cliodynamics community.
 
      00:02:48.920  Yeah. So I've slowly ventured into writing, mostly just over Christmas break. I just wrote
 
      00:02:55.080  some pieces about ideas that I just honestly thought were pretty bad ideas. And one of those
 
      00:03:00.160  obviously was the long termism post on the EA forum. And another one was a critique
 
      00:03:04.640  of this academic discipline called cliodynamics, which has received a decent amount of attention
 
      00:03:11.640  from sources like the Atlantic and New York Times and whatnot. And anyway, we'll go into
 
      00:03:16.640  depth about exactly what sort of claims it's making in maybe a later episode. But I got
 
      00:03:22.080  my first hate mail because of it. So someone emailed me like 24 hours after the piece came
 
      00:03:27.280  out and just said that I'm anti-science, that I don't understand what the discipline is trying
 
      00:03:32.960  to do, and that I'm evil for trying to hold back progress. And if I don't believe in science, I should
 
      00:03:37.720  immediately stop using my cell phone, my computer and everything. Obviously we can't do these
 
      00:03:43.080  podcasts anymore because I'm anti-science and don't believe in science. And he actually threatened
 
      00:03:48.760  to write a blog post smearing me with like dismantling my piece sort of line by line.
 
      00:03:56.080  And actually went so far as to post a little snippet of what this would look like. And
 
      00:04:02.040  these lines involved sort of calling me nasty names and whatnot. So one, yeah, it was very
 
      00:04:08.120  interesting contrast to what the EA feedback was like, which was all very like kind and
 
      00:04:12.680  reasonable and stuff. But was also just a lesson in like what it might look like to be a writer
 
      00:04:18.240  who sort of tries to deal with these kind of topics. Obviously in a big scheme of things,
 
      00:04:23.680  my cliodynamics piece is not even a fly on the wall. But it did give me some insight into
 
      00:04:29.480  what it must be like to be, you know, one of these figures who's like taking flak all
 
      00:04:34.040  the time? Like, I don't know, like if you're like a, like an Amy Chua type of person or
 
      00:04:39.240  like a Paul Bloom or like a Sam Harris, I can't even imagine that kind of flack and emails
 
      00:04:44.480  you must be getting all the time. Like, it made me think that they must have people to
 
      00:04:48.960  almost sort through their emails before they see it. Because I honestly think it would take
 
      00:04:53.120  such a big psychological toll seeing like negative content like that all the time. And
 
      00:04:58.120  I think humans have this tendency to like weight negative feedback as sort of much more
 
      00:05:03.480  important, or rather we focus on it much, much more than we do positive feedback.
 
      00:05:07.960  So, you know, after getting this email, I just like went down a negative thought
 
      00:05:12.320  spiral for like three hours sort of questioning myself like was I too harsh and stuff and possibly
 
      00:05:17.360  I was which would be good, you know, lessons, lessons to learn. But I guess, you know, you
 
      00:05:21.800  got to you make mistakes by learning and trying things. And so this is my attempt at a ruthless
 
      00:05:27.080  criticism. So possibly it was too harsh. But time will tell, I guess.
 
      00:05:31.160  I remember after we recorded the social media podcasts, I had the same realization about
 
      00:05:38.080  like, like what must Sam Harris's social media experience be? Oh my God. Sure. It's pretty
 
      00:05:43.040  fucking terrible. Sure. People are just shouting at him all of the time. And it gave me a
 
      00:05:47.840  bit of sympathy or empathy or understanding for his perspective why he has a bit of a
 
      00:05:53.160  stronger, I guess, dislike for social media than we do. So I thought what we might do
 
      00:05:59.160  is just take some of these common questions, which we received and try to challenge each
 
      00:06:05.520  other by asking each other kind of the hardest variants of each question that we can find.
 
      00:06:11.200  I'm writing a follow up piece, which is completely in tatters right now. But hopefully a lot of
 
      00:06:18.840  the stuff that we'll say will take written form and then be in a blog post at some point.
 
      00:06:25.440  But right now you, dear listener, will hear a lot of half-formed thoughts and broken ideas.
 
      00:06:30.920  But what's new? Yeah, but what's new? Okay. So which question should we start with? Let
 
      00:06:38.040  me hit you with one and see how you handle it. So, okay. So here's one. I'm going to read
 
      00:06:43.080  it as it's written in the Google Doc. But then I actually have a bit of a difficult
 
      00:06:47.400  time understanding it. So I'll read it out and then perhaps you can clarify. So one of
 
      00:06:51.480  the big, I guess, pieces of feedback we got was that long termists don't like literally
 
      00:06:55.520  think we should ignore the short term. It's more that by deciding that most of our moral
 
      00:07:00.600  value lies in the future, it changes our priorities. So instead of our philanthropic portfolio looking
 
      00:07:07.880  like 50% poverty, 30% animal welfare and 20% improving institutions, it may be 20% poverty
 
      00:07:15.240  alleviation, 15% animal welfare and 65% improving institutions. So I guess the criticism is that
 
      00:07:23.080  we both misunderstood what long termism is and also that the long termist community,
 
      00:07:31.960  long termist philosophy does value the short term. Yeah, so I'm just going to give them
 
      00:07:37.280  the benefit of the doubt that that's exactly what's happening. So say long termists are
 
      00:07:41.560  abiding by that sort of reasoning. When you shuffled your priorities from, say, a 25/75 split
 
      00:07:49.320  to a 50/50 split, there was a reason you were doing that. And to whatever
 
      00:07:53.920  extent your reasoning was based on an absolutely arbitrary number of people in the future and
 
      00:08:00.920  the expected value thereof, that's what I'm criticizing, right? So in this situation,
 
      00:08:06.080  I'd be criticizing the reasoning process that can't be argued with. So I'm fine if you tell me,
 
      00:08:13.200  actually, I'm going to shift my philanthropic portfolio around for such and such reasons. But
 
      00:08:18.800  to the extent that you literally, you can't argue with those reasons, then that's why I have a
 
      00:08:24.240  problem with it. It strikes me as being very analogous to religious debates where you would
 
      00:08:31.040  point to some very literal sentence in, say, the Bible or the Quran saying you should
 
      00:08:36.720  stone adulterers, choose your favorite biblical verse. And then people say, that's not what people
 
      00:08:44.000  do in reality. That's taking it literally, we're not actually taking it literally in practice,
 
      00:08:49.520  what stoning homosexuality means is more just like be mindful of giving into temptation too often.
 
      00:08:57.360  It's like, okay, well, then they should have fucking written that. And so I have a similar
 
      00:09:02.320  irritation with this line of response, which is like, if the authors didn't mean that you should
 
      00:09:08.800  literally ignore the near term, then they shouldn't have written it. We shouldn't have to have these
 
      00:09:14.880  tea leaves ceremonies where we try to figure out what the true interpretation of the text is,
 
      00:09:18.960  when what is being said is quite clear and understandable. And it was quite bizarre,
 
      00:09:25.520  because on the EA Forum, one commenter said like, you know, at the end of the day, I think really
 
      00:09:34.160  what long termism is going to look like in practice is just like encourage each generation to do the
 
      00:09:39.760  best it can to make the world a little bit better for the next generation. It's like, okay, if that's
 
      00:09:45.040  what it's going to look like in practice, then what you're saying is you acknowledge that is a
 
      00:09:48.800  superior moral principle, then just base your morality around that principle, start with that
 
      00:09:54.480  principle, recognize that that's the principle you want to get to, rather than doing this,
 
      00:09:59.360  this is a very difficult way to get there. And it is, there's a lot of religious parallels, I must
 
      00:10:05.280  say, all my like atheist training of arguing against believers of different forms is all
 
      00:10:12.240  coming into practice here, where there's all sorts of like squirming that's being done to
 
      00:10:18.160  try to see between the lines and in the spaces between lines to interpret something very different
 
      00:10:24.240  than what's actually being said. Yeah, that's a great point. I think it's worth
 
      00:10:28.720  restating that our original critique of long termism was specifically geared towards strong
 
      00:10:34.880  long termism. I think we were both pretty explicit about that. And like the critique was based on
 
      00:10:39.440  the paper that was arguing in favor of strong long termism. And so of course, there's a bit of a
 
      00:10:44.960  language game because I didn't always use the phrase strong long termism in my piece, just because
 
      00:10:51.200  it's too long to type every time and I would revert to long termism. But we were arguing
 
      00:10:56.720  against strong long termism, and they make very explicit claims about ignoring the effects of
 
      00:11:00.320  the present. So that's fine if you disavow those claims. But then you're already like,
 
      00:11:07.200  you're already sort of ceding the territory, you're ceding the argument a little bit, right? Like
 
      00:11:11.360  if we can all agree that strong long termism was a bad idea, and that we shouldn't ignore the
 
      00:11:16.320  effects of the present, then great, I'm happy. I feel like that battle is won. But the other point
 
      00:11:23.840  there is that if long termism doesn't mean focusing strictly on the long term future, then
 
      00:11:32.640  I'm not sure what it means, because it doesn't seem to add anything as a philosophy.
 
      00:11:38.320  So let's say long termism means to mostly focus on the present, then whatever
 
      00:11:44.880  work the mostly is doing in that sentence can just be done by basically any other
 
      00:11:50.000  morality. It says like we should be good people now and try and solve problems now. And then whatever's
 
      00:11:55.760  left over from the mostly, the one-minus-the-mostly bit, if you will, says that we should just focus
 
      00:12:01.600  on the long term future, then that's what I'm criticizing. So I'm criticizing whatever percentage,
 
      00:12:06.800  if you will, of the morality that says we should ignore the present and try and solve unknowable
 
      00:12:12.960  problems in the future. And if that percentage is zero, then I don't know what long termism is doing
 
      00:12:19.520  as a philosophy. It doesn't seem to be adding anything. So this is kind of what I mean. It's like,
 
      00:12:23.440  there are ways to get out of it, which dampen the disagreement I have with it.
 
      00:12:28.320  But it seems like at the end of the day, if you are going to focus on present problems,
 
      00:12:32.720  then I don't know what long termism is doing for you as a philosophy. Like it's fine to say,
 
      00:12:36.640  well, I really value future generations. But if that's not action guiding in any sense, then I
 
      00:12:43.600  I don't really know what it means to say that. Do you want to take a second to say exactly what
 
      00:12:48.240  you mean by Bayesian epistemology? Because I feel like we both been throwing that term around a lot.
 
      00:12:52.640  Totally. So there's two words. There's Bayesian and then there's epistemology. So it's like the
 
      00:12:56.720  second word first. So epistemology is the study of knowledge and how knowledge is produced.
 
      00:13:01.680  And this is Popper's domain. And this is the domain of science. So knowledge, it's the thing
 
      00:13:09.440  which allows the listener to hear our voices right now. The thing that gives us dominion over our
 
      00:13:16.240  environment. Epistemology is a very, very real subject. It's not this like bizarre, arcane subject
 
      00:13:26.800  that is only for philosophy seminars. It's about understanding what is it that allows us to
 
      00:13:31.280  communicate remotely, for example. Bayesian is a particular interpretation of the probability
 
      00:13:37.920  calculus. The probability calculus is a set of symbols, a small set of operations,
 
      00:13:44.320  which are very useful in describing a bunch of things. We've talked about this a bunch before,
 
      00:13:48.640  so I won't repeat myself there. But when I say Bayesian epistemology, what I mean is that this is a
 
      00:13:54.880  philosophy that says knowledge of things like computers and remote communication.
 
      00:14:00.960  springs forth out of the probability calculus. Like you just write the symbols in the right way
 
      00:14:07.200  and out will come the periodic table. Out will come statements of cosmology. Obviously there's
 
      00:14:12.960  more to it than that. This then takes us into the realm of induction. So there's
 
      00:14:18.800  not just the empty statement that probability equals knowledge. There's a set of assumptions that
 
      00:14:25.600  lead you there. But the reason why this is relevant in the subject of long-termism is because
 
      00:14:31.520  when people talk about expected value, they're saying that we can reach one billion years into
 
      00:14:38.000  the future and grab some knowledge about the consequence of our action. So we will know
 
      00:14:45.200  through the expected value what action we should take now. We will gain knowledge about our course
 
      00:14:52.720  of action. And this is the thing which I claim is, let's say, problematic. Nice. Yeah, I think
 
      00:14:59.360  about it as a set of rules by which your beliefs must be governed and your actions must be taken.
 
      00:15:08.400  So it's a set of rules that just happens to be mathematical in nature. So it tells you, given
 
      00:15:15.040  a set of ideas that you've come across or given a set of evidence, how you must interpret them,
 
      00:15:22.960  what you must believe if you are a perfectly rational agent. So we're trying to bound
 
      00:15:28.320  rationality with the tools of mathematics in a specific way. And the evolution of your ideas
 
      00:15:38.800  and your beliefs over time must be governed by strict mathematical laws if they are to be correct,
 
      00:15:45.440  where it defines correctness in a very precise mathematical way. And it sort of
 
      00:15:50.480  smuggles in, as I think Vaden will go into in your piece. It sort of smuggles in these certain
 
      00:15:59.120  assumptions that your belief must be characterized by a specific number between zero and one.
 
      00:16:05.120  And we have to, as rational agents, characterize our belief in this sort of very precise mathematical
 
      00:16:12.400  way. And then as you go into it, there's all sorts of more problems because it's based on
 
      00:16:18.960  induction as a philosophy of science and whatnot. But at a base level, it sort of guides what you
 
      00:16:26.240  must believe at any moment faced with the evidence. And sort of in this way ignores the role that
 
      00:16:31.120  creativity and bold conjecture play in idea creation, which is where I think its more
 
      00:16:37.120  pernicious aspects enter the picture. It's very interesting. So something I've been
 
      00:16:42.800  thinking about a little bit recently, like in the pursuit of say AI or AGI, it makes sense to
 
      00:16:50.800  want to formalize a mathematical framework to say codified beliefs that totally makes sense to me.
 
      00:17:00.560  If you're a scientist and you want to make data, like data from Star Trek, just to make something
 
      00:17:06.080  visual, you want to make a smart robot, you need to come up with a system of rules that govern
 
      00:17:11.760  belief, because that's how algorithms work. It's the system of rules. Fine, I get that.
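To make the "system of rules" idea concrete, here is a minimal sketch in Python of the single belief update that Bayes' rule prescribes, framed as a robot deciding whether a wall is ahead after a noisy sensor ping. All the numbers are invented for illustration; nothing here comes from the episode beyond the general robot framing.

```python
# Minimal sketch of the belief update Bayes' rule prescribes, with
# invented numbers: a robot holds a prior that a wall is ahead, gets a
# noisy sensor ping, and the rule fixes the one posterior it may hold.

def bayes_update(prior: float, p_ping_wall: float, p_ping_no_wall: float) -> float:
    """Return P(wall | ping) given the prior and the two likelihoods."""
    evidence = p_ping_wall * prior + p_ping_no_wall * (1 - prior)
    return p_ping_wall * prior / evidence

prior = 0.30            # initial degree of belief that a wall is ahead
p_ping_wall = 0.90      # P(ping | wall): sensor usually fires near a wall
p_ping_no_wall = 0.10   # P(ping | no wall): occasional false alarm

posterior = bayes_update(prior, p_ping_wall, p_ping_no_wall)
print(f"Belief in 'wall ahead' after the ping: {posterior:.3f}")  # ~0.794
```

For a Roomba this is unobjectionable, which is where the conversation goes next; the worry raised later is about promoting the same rule from an engineering tool into a law that human thought must obey.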
 
      00:17:16.160  E. T. Jaynes in his textbook, from which a lot of this comes, motivates it by talking about a robot.
 
      00:17:23.760  So how would a robot process evidence? And I think Bayesian epistemology is quite useful when you're
 
      00:17:29.600  say programming a Roomba, or when you're developing little applications where robots need to navigate
 
      00:17:37.120  through difficult terrain, totally. But what I've seen on the forum, which is really interesting,
 
      00:17:42.640  is that these rules, they kind of reflect back onto the person who's actually writing them.
 
      00:17:49.120  And then people think that because they've developed these rules, they too must think in
 
      00:17:54.160  accordance with them. And that's not only thought policing, it's like thought laws. It's like you
 
      00:18:01.600  know the phrase "thought police" from Orwell and how it's terrible to govern one another's
 
      00:18:07.200  creativity through policing speech and policing thinking. But it gets even worse when you say,
 
      00:18:14.080  "No, your thoughts have to be confined into this axiomatic formalism that if you deviate from it,
 
      00:18:23.600  then you're irrational." And this is not true. This is not true. This is based on a circular
 
      00:18:27.600  definition of what rationality is. Because rationality is defined via the same theorem. So
 
      00:18:34.560  Cox's theorem in particular, which says that a rational agent is one who follows Cox's theorem.
 
      00:18:40.720  And so I just want to emphasize this distinction to, let's say, those who may be very
 
      00:18:48.640  sympathetic to this way of thinking. And that is that it makes sense if we want to program
 
      00:18:55.680  unintelligent robots. But there is no reason we need to superimpose this set of rules onto
 
      00:19:02.320  ourselves. And this is something which I'm beginning to be more and more worried about,
 
      00:19:06.640  because it is essentially authoritarian in nature. It is a desire to have some thing tell you what to
 
      00:19:18.320  think. And I came up with this little thought experiment, which I am horrified to find that
 
      00:19:26.320  people agree with. And so many people said that, like, listen, I get your criticisms of the
 
      00:19:36.000  expected value calculus. But there's so much uncertainty that, like, what else can we do?
 
      00:19:42.000  We're stuck here. Like, we have to make decisions. And this is the best tool we have. And yeah,
 
      00:19:47.440  I get that it's imperfect. I get that you can't compute it in practice. But like, we need to
 
      00:19:52.960  make decisions. And if we don't have this framework, then like, we're stranded. This is the, I think,
 
      00:19:58.640  common sentiment expressed by people on the forum. And so I claim that this is, in essence,
 
      00:20:05.680  an authoritarian request. And my thought experiment is to say, okay, to those who think this.
 
      00:20:14.960  Let's imagine you get like everything you could ever want. Let's imagine that tomorrow,
 
      00:20:20.560  somebody solves all of the paradoxes and all of the problems, which plague the expected value
 
      00:20:27.840  calculus and that there's a perfect decision rule. And then let's go a bit further and let's say
 
      00:20:32.640  that the perfect decision rule is like computable in practice. And it's like physically instantiated
 
      00:20:38.320  in some like device. And the device is going to be like connected to you, you're going to be
 
      00:20:45.920  connected to the device. And it's going to measure the amount of evidence you receive through your
 
      00:20:49.920  senses at every given moment. And it's going to measure your brain state to perfectly calibrate
 
      00:20:55.520  the exact amount of uncertainty that you have. So that at every point in time, it will give you a
 
      00:21:00.640  perfect decision. So now, like you have everything you've ever asked for. And then you can imagine
 
      00:21:05.680  like connecting this thing to your ear and having this device, which would probably have to be
 
      00:21:11.920  instantiated like a super intelligence for it to process all the data, just tell you at five
 
      00:21:16.720  second intervals what to do. You've now made yourself a slave. You've made yourself a slave
 
      00:21:22.640  because the one thing that you won't have to do is think critically about anything that you hear
 
      00:21:28.000  because the device is perfect. Right. You know that no matter what it says, no matter how much it,
 
      00:21:33.920  say would disagree with your moral intuitions. The one thing you don't have to do is think
 
      00:21:38.560  critically about anything you hear. You just have to do it. So the desire for perfection, the desire
 
      00:21:45.360  for some oracle to tell you how to live, to tell you what to do is the desire to live like a slave.
 
      00:21:53.040  I really do believe this. And so I've proposed this experiment to two people now, who are sympathetic
 
      00:22:00.080  to this view. And both of them said that they would get the device and they would just do everything
 
      00:22:02.880  that they were told. Really fascinating. Yeah, two people. Yeah, yeah, both of them said
 
      00:22:07.520  yes. And this is so troubling because like entailed in the Popperian fallibilism claim that all
 
      00:22:18.080  human beings are fallible is that at no point can we ever stop thinking about what we hear because
 
      00:22:23.920  it is only up to us to determine if what we're told makes sense. There is no source of truth or
 
      00:22:31.600  source of knowledge. And any offer for truth, any promise that says if you just do this one thing,
 
      00:22:40.320  you will get closer to truth is an authoritarian promise. It's the offer of the warm embrace of
 
      00:22:48.320  Big Brother. And like why does Orwell call his totalitarian government Big Brother? It's
 
      00:22:53.680  because what could be more comforting than totalitarianism than an authority figure. We all
 
      00:22:59.920  have this latent inside of us. The other thing is that with Popperian epistemology, epistemology and
 
      00:23:06.800  morality are deeply tied. You get a moral theory baked into the epistemology, and the moral theory
 
      00:23:12.480  is the following. When you realize that every person is equal in their infinite
 
      00:23:19.040  ignorance, when you realize that ideas can come from anywhere, this encourages you to treat each
 
      00:23:25.040  human being with equality and dignity because you don't know where the good ideas are going to
 
      00:23:29.520  come from. And so the injunction to treat one another with kindness and with compassion
 
      00:23:36.000  follows purely from the realization that each human being is both fallible but also capable of
 
      00:23:43.200  making sense. And anything that promises perfection, I think will just lead to utopianism, authoritarianism,
 
      00:23:53.280  and calamity. And so a long-winded way of saying that I think that the answer to the question,
 
      00:24:02.640  what do we do without this decision rule? What do we do when you criticize expected values and
 
      00:24:10.560  what other choice do we have? The choice is to think for yourself and to give up the search for
 
      00:24:16.000  some perfect oracle which will tell you how to live and you have to take on the responsibility
 
      00:24:20.960  yourself. Maybe, yeah, can you clarify something for me with a thought experiment? This might take
 
      00:24:25.600  us too far afield but we can cut it if not. So say at any given moment you stop, you've, this device
 
      00:24:33.920  has measured everything, all sensory input, all available evidence. You have a set of beliefs in
 
      00:24:41.440  front of you. And now you have to choose whether you make a cup of coffee or you make a cup of tea.
 
      00:24:49.440  Okay, so in reality, I think what's happening is you run both those simulations in your head,
 
      00:24:57.920  simulations being a loose word here, and you criticize them, you say, well, coffee is going to
 
      00:25:03.840  keep me up late and so maybe I won't have that and I could have decaf tea or you know, I don't really
 
      00:25:07.840  like the taste of peppermint or whatever, and you just think, I don't want them. What is going on in the
 
      00:25:15.280  when you have this perfect device? You know, like, I still don't understand this about
 
      00:25:20.400  Bayesianism. How does it? So I mean, okay, I guess what it says is, based on the available
 
      00:25:28.960  evidence, we calculate the expected value of having coffee and tea based on prior data. And
 
      00:25:37.200  then we have whichever one leads to the biggest expected value. But in so doing,
 
      00:25:44.560  presumably you would never gain more information about the option that has the
 
      00:25:49.280  the lesser amount of expected value. Like, say it decides that you're going to have
 
      00:25:54.160  coffee, then you would never have tea again. Well, I guess the idea would be that in that particular
 
      00:25:59.520  instance, the device, so there's a component missing to my thought experiment, you're
 
      00:26:03.520  highlighting, which is the device would not only have to measure your evidence you've received
 
      00:26:09.280  and your subjective belief state, but it would also have to forecast into the future
 
      00:26:14.240  till the end of time, the ripple effect of your consequences. And so it would be the consequence
 
      00:26:20.400  of you drinking coffee on January 4th at 8:30 AM in the year 2021. That's going to have some
 
      00:26:31.040  consequence. And presumably it's going to be negligible as with the drinking tea. But there'll
 
      00:26:35.280  be some infinitesimal reason why the utility with drinking coffee would be better. And then you
 
      00:26:41.840  would be told you wouldn't think for yourself, you'd be told to do that. And so the component,
 
      00:26:48.480  which was missing from my omniscient device thought experiment is just that it's about the
 
      00:26:55.120  ripple effect into the future forever. And that's what the device would be somehow measuring. But
 
      00:27:00.560  just imagine you have a perfect device, imagine you have a personal God sitting on your shoulder.
 
      00:27:04.640  And the God just tells you what to do at every single instance. This sounds like abject indentured
 
      00:27:11.840  servitude. And so I pose this because I've spent a lot of time arguing that this is the futile
 
      00:27:22.160  goal when it comes to discussing human beings, like for robots, that makes sense. I understand
 
      00:27:28.480  for robots, but for human beings. But not only is it futile, but why would you want this? I don't
 
      00:27:33.520  understand why people would want this either. And the fact that it's imperfect, I think hides
 
      00:27:39.200  just how horrifying perfection would be if they got all of their dreams met. And they could just
 
      00:27:46.480  perfectly compute all this. I don't know why they would want it.
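For listeners who want the coffee-versus-tea calculation spelled out, here is a toy sketch in Python of "take whichever option has the biggest expected value". The utilities and probabilities are invented for illustration; the device in the thought experiment would additionally have to extend these numbers over the ripple effects of each choice into the indefinite future, which is exactly where the calculation stops being computable.

```python
# Toy sketch of a naive expected-utility choice between coffee and tea.
# Every probability and utility below is invented for illustration.

options = {
    "coffee": [(0.7, 10.0),   # 70% chance: enjoy it, utility 10
               (0.3, -5.0)],  # 30% chance: up all night, utility -5
    "tea":    [(0.9, 6.0),    # 90% chance: pleasant, utility 6
               (0.1, -1.0)],  # 10% chance: dislike the flavour, utility -1
}

def expected_value(outcomes):
    """Sum of probability-weighted utilities for one option."""
    return sum(p * u for p, u in outcomes)

evs = {name: expected_value(outcomes) for name, outcomes in options.items()}
print(evs)                    # {'coffee': 5.5, 'tea': 5.3}
print(max(evs, key=evs.get))  # 'coffee' -- the rule says: drink coffee
```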
 
      00:27:48.960  Yeah, fascinating. I just want to read. So there's a post on LessWrong called "What is Bayesianism?"
 
      00:27:56.560  And it's just explaining what Bayesianism is. And core tenet three, and this is just going
 
      00:28:00.720  to highlight your point, reads as follows. We can use the concept of probability to measure our
 
      00:28:05.680  subjective belief in something. Furthermore, we can apply the mathematical laws regarding
 
      00:28:10.560  probability to choosing between these different beliefs. If we want our beliefs to be correct,
 
      00:28:16.480  we must do so. So it's not even hiding the authoritarian language here, right? It's telling
 
      00:28:22.240  you exactly what you have to do to be considered a rational agent. Otherwise, good luck to you.
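As one concrete reading of "apply the mathematical laws regarding probability to choosing between these different beliefs", here is a small Python sketch, again with invented numbers, that compares two rival hypotheses by their posterior odds after one piece of evidence. On the tenet quoted above, ending up with any other ratio of beliefs counts as irrational.

```python
# Sketch of "choosing between beliefs" under the quoted tenet: compare
# two hypotheses by posterior odds after evidence E. Numbers are invented.

prior_h1, prior_h2 = 0.5, 0.5   # equal prior credence in H1 and H2
p_e_given_h1 = 0.8              # how strongly H1 predicts the evidence
p_e_given_h2 = 0.2              # how strongly H2 predicts the evidence

posterior_odds = (p_e_given_h1 * prior_h1) / (p_e_given_h2 * prior_h2)
print(f"Posterior odds H1:H2 = {posterior_odds:.1f}")  # 4.0

# The claim under discussion: a "correct" believer must end up holding
# degrees of belief in exactly this ratio, whatever else they might think.
```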
 
      00:28:30.080  You cast aside rationality and you will enter the den of the wolves. And so it's yeah, it's just
 
      00:28:39.280  it's fascinating to me. It also seems like sort of self-defeating because I can't, you know,
 
      00:28:45.680  given whatever available evidence you have, there seems to be no way to come up with actual new ideas.
 
      00:28:53.680  Right. So whatever evidence you have available at a certain point will point towards something
 
      00:28:59.840  that you've already seen or something that is implied by the available data. It will never
 
      00:29:05.040  point towards something that you've just conjured up out of thin air that's creative. Like, you know,
 
      00:29:10.720  up until 1904, no amount of evidence could have pointed towards special relativity that Einstein
 
      00:29:17.520  developed in 1905. So anyway, I just I still don't really understand how this epistemology actually
 
      00:29:23.600  works in practice. But I think it's partially that it doesn't work in practice. And people
 
      00:29:29.600  spend all of their time arguing about how they could make it work in practice. And like there's
 
      00:29:35.680  just such an irony given that like you can paralyze an entire message board by just asking them to
 
      00:29:42.240  discuss what's the perfect decision rule. And now no one can decide to do anything else in their
 
      00:29:47.360  day besides argue about probability for the entire day. Like, do you think that this
 
      00:29:53.920  is making the EA community make more and better decisions? I think it's not. I think it's just
 
      00:30:02.160  making them all argue about the probability calculus all the time, rather than actually deciding how to
 
      00:30:08.160  improve the vaccine transportation, which is something that would be very
 
      00:30:13.840  very useful. Very useful. Yeah. Let me ask you a question now. So I think there's a couple
 
      00:30:20.400  responses along the lines of, you know, what are the alternatives here? So I get that this is
 
      00:30:27.520  imperfect, but we have to make decisions, and you're criticizing expected utility as this,
 
      00:30:36.000  this system that purports to be perfect and whatnot. And fine, we recognize that it's not perfect.
 
      00:30:42.320  It's got lots of paradoxes, it's got lots of holes. But what the hell are we supposed to do? Right.
 
      00:30:47.120  So what would Popper, what would Popper do in this situation? He would say the following,
 
      00:30:51.600  he would say, listen, it's not possible to get a perfect decision rule. In the same way that it's
 
      00:31:04.320  not possible to have some perfect oracle that will tell you what's always true. So in the same way
 
      00:31:04.320  that we stop thinking about what's always true, and we start thinking about how do we detect error,
 
      00:31:09.040  we should do the same with decisions. So we should stop asking, how do you make a perfect decision?
 
      00:31:14.160  We should start asking, how do you design systems that are resilient to bad decisions?
 
      00:31:18.880  The entire focus of the EA community, I think should be inverted. If you care about decision
 
      00:31:23.040  making, you shouldn't be seeking a perfect decision maker, either via mathematics or via
 
      00:31:30.960  some human being who you determined to be infallible, who will make all your decisions for you,
 
      00:31:36.000  because these are authoritarian in nature. But instead, you should come up with systems that
 
      00:31:40.720  are resilient to bad decisions. And this is Popper's philosophy applied to politics. So I just took
 
      00:31:45.840  his solution to the problem of democracy and just removed the word policy and put in the word
 
      00:31:52.640  decisions. So in democracy, he says, it's not about coming up with the perfect leader, because that
 
      00:31:59.200  is again a recipe for authoritarianism and totalitarianism. Even if that perfect leader is
 
      00:32:06.080  something vague and nebulous like the people or the working class, or it doesn't matter who you
 
      00:32:13.280  slot in there, as soon as you expect perfection and infallibility, you will get authoritarianism
 
      00:32:21.040  and despotism. So instead of asking that question, ask, how can you be resilient to bad leaders?
 
      00:32:27.600  And the only other component you need with democracy is that you want to be able to
 
      00:32:33.120  make mass decisions at scale via voting mechanism. But in the EA space,
 
      00:32:38.640  they should be thinking about, they should be thinking, he says authoritarian.
 
      00:32:45.680  Let me tell you how to think. Yeah, let me rephrase that. I believe it would be a more fruitful
 
      00:32:52.560  approach to try to come up with mechanisms to be resilient to bad decisions. And what would
 
      00:32:58.560  that look like? That would be ad hoc. So there wouldn't be a perfect mechanism in every scenario,
 
      00:33:04.400  but there would be certain solutions to certain domains. So in the domain of say, vaccine delivery,
 
      00:33:11.440  perhaps it would be about how do you transport vaccines in such a way that if the refrigerator
 
      00:33:19.520  gets unplugged, it won't spoil. That would be one way to think about it. Or perhaps in the case of
 
      00:33:25.200  malaria bed nets, how do you distribute bed nets, knowing that there's going to be some hubs,
 
      00:33:32.080  some distribution hubs, which fail on you. So focusing on the failure points, focusing on the
 
      00:33:37.680  fact that people are going to try to decide to do all sorts of things. And a lot of these
 
      00:33:41.760  decisions are going to be bad. And how do we best improve the floor of human well-being,
 
      00:33:47.760  given that we know bad decisions are going to continuously be made? If you care about decision
 
      00:33:53.840  making, if you care about uncertainty and you care about helping people, I think this would be a
 
      00:33:57.280  much more fruitful approach than trying to find some perfection, some ideal, which will never
 
      00:34:04.560  lead you astray. Not just applied to EA, but applied to the COVID space right now, I think this
 
      00:34:10.080  is super relevant because in the states, what we have is certain vaccines actually expiring
 
      00:34:16.560  before states can decide who the fuck they go to. So there's all these conversations about
 
      00:34:23.600  equity and exactly who should be prioritized. And while we're having these discussions trying to make
 
      00:34:29.600  the perfect decision rule, what's happening is we have vaccines in the back room, not being
 
      00:34:33.680  distributed to anybody. And instead, just expiring, going bad. It's wild. And it's like,
 
      00:34:39.840  this is the problem, right? So we should be there should be ad hoc solutions. We should recognize
 
      00:34:45.120  that, of course, no distribution scheme is going to be perfectly equitable, whatever that means.
 
      00:34:51.200  It's not going to be perfect. What we need to do is prioritize getting as many vaccines as
 
      00:34:55.200  possible into the hands of as many people as possible. Simple schemes are going to work best here. Let's move,
 
      00:35:00.400  let's go. Instead, we have politicians trying to save face. We have activists yelling at those
 
      00:35:05.440  politicians about exactly what sort of desiderata they need the distribution scheme to meet. And
 
      00:35:10.560  this is paralyzing public conversations completely. And interestingly, you know, to their credit,
 
      00:35:16.640  like EA has been pretty vocal about this, or at least certain members of EA have been
 
      00:35:21.280  very vocal about this. Like, obviously, this is a very bad move. We just need to get these
 
      00:35:26.800  vaccines into people's hands. But I just think it's such a perfect example of people looking for
 
      00:35:31.360  perfection. And we really just need to be focused on minimizing errors, but acting
 
      00:35:40.560  with what information we have, right? And recognizing that that's going to be imperfect,
 
      00:35:44.640  recognizing that there's going to be problems, and just saying, yes, there will be problems, and we'll
 
      00:35:44.640  solve those as they arise. So anyway, what I'm saying is that this philosophy has massive practical
 
      00:35:50.640  consequences. And this is killing people right now. So not recognizing that there's going to be
 
      00:35:56.000  problems, whatever decision rule you adopt, that no decision rule is going to be perfect.
 
      00:36:02.000  Which in this case, typically means something around equality or equitability or whatever,
 
      00:36:06.800  is literally killing people. That is such a... It didn't occur to me that it's like, it's the search
 
      00:36:13.360  for perfection, which is the thing that continuously hurts us as a society, and as human beings, because
 
      00:36:19.760  perfection is impossible. But the search for perfection will lead to gridlock and stalemate,
 
      00:36:27.760  because now it's Ben's perfection versus Vaden's perfection. And we're arguing about which perfection
 
      00:36:33.360  is better. And same with democracy. So we talked about democracy only in pieces. We haven't done
 
      00:36:39.840  a full episode on it. But this is the thing that paralyzed arguments about proportional representation
 
      00:36:45.200  for ages, because now it's like, what's the perfect way to have equal representation of voters?
 
      00:36:51.360  And then you just get 5000 different variations of voting mechanisms, none of which will be any
 
      00:36:58.240  better than any other ones. And you just get this fucking stalemate, especially in that case,
 
      00:37:02.240  because it's impossible to have a perfect voting system. Yeah, exactly. So that doesn't help things.
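Since the impossibility of a perfect voting rule feeds directly into the Arrow's theorem point that follows, here is a tiny Python sketch of the classic Condorcet cycle with three hypothetical voters: the majority prefers A to B, B to C, and C to A, so there is no consistent collective ranking for any rule to recover.

```python
# Classic Condorcet cycle: with these three hypothetical ballots, pairwise
# majority vote prefers A to B, B to C, and C to A -- a cycle, so no
# voting rule can read off "the" collective preference.
from itertools import combinations

ballots = [
    ["A", "B", "C"],   # voter 1's ranking, best to worst
    ["B", "C", "A"],   # voter 2
    ["C", "A", "B"],   # voter 3
]

def majority_prefers(x, y):
    """True if a strict majority of ballots rank x above y."""
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) / 2

for x, y in combinations("ABC", 2):
    winner = x if majority_prefers(x, y) else y
    print(f"{x} vs {y}: majority prefers {winner}")
# Output: A beats B, C beats A, B beats C -- no option beats all the others.
```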
 
      00:37:06.560  So an interesting thing is that Arrow's theorem, which is what you're referring to, that you can't
 
      00:37:12.560  have a perfect representative democracy, actually serves as a disproof that you can have a perfect
 
      00:37:17.680  decision making apparatus as well, because, as I was just reading before we came on, in Deutsch's
 
      00:37:24.480  Beginning of Infinity, the chapter on choices, he just does, I forget the proof technique, where he
 
      00:37:29.680  just does like a one-to-one correspondence between one problem and another problem. But if you take
 
      00:37:36.160  the mind of a single rational agent, and you view that as say a democracy, then you take all the
 
      00:37:44.560  people in the democracy voting for a decision, and you map that on to individual pieces of evidence,
 
      00:37:50.160  as the Bayesians like to say, then Arrow's theorem applies here too: you can't have a rational and
 
      00:37:57.840  consistent decision maker, for the same reason that you can't have a consistent
 
      00:38:05.760  set of voters. Yeah, otherwise you'd have a perfect voting rule. I can't believe we haven't talked
 
      00:38:08.960  about that before, actually, that's such a fucking good little refutation right there. Yeah,
 
      00:38:13.600  yeah, that's great. This is chapter eight, by the way, in Deutsch's Beginning of Infinity, if anyone wants
 
      00:38:18.720  to check it out. But the search for perfection isn't attainable, but nor is it desirable,
 
      00:38:23.600  is the point here. Yeah, even if we got it, we wouldn't want it. And so inverting the question,
 
      00:38:30.880  and to those who say, like, what do we do given all this uncertainty, given that we have to make
 
      00:38:36.480  decisions, my answer is we've talked about the decision thing, but we should talk about the
 
      00:38:40.480  uncertainty thing. Like, certainty, it's the same as perfection. Like, why do you want certainty?
 
      00:38:46.240  What is so good about certainty? This is the other thing which I don't understand, like people are
 
      00:38:51.680  so worried about how they deal with uncertainty, and it's like, well, let's imagine you get everything
 
      00:38:55.520  that you ever want. Certainty isn't valuable, because any raving lunatic chanting on the side of the
 
      00:39:03.680  street is very certain about what they're saying, because certainty isn't the same thing as truth.
 
      00:39:08.480  Right. Yeah, so I mean, the search for perfection and certainty, while you're pointing out is futile,
 
      00:39:15.280  is also quite dangerous, because I think as Popper points out in Conjectures and Refutations
 
      00:39:20.320  somewhere, it also, as soon as you adopt this sort of frame of mind that you've found the
 
      00:39:25.440  perfection or you found utopia, this justifies violence against other people who do not think
 
      00:39:32.480  the same way, right? Because you've now found the perfect end. And if you were utterly convinced
 
      00:39:39.120  that your ends reach a state of utopia, or are perfect, then of course, you can justify
 
      00:39:49.600  any sort of means you need to get other people to agree with you, right? You can justify violence.
 
      00:39:55.680  And it also completely destroys the means of actually correcting your mistakes,
 
      00:40:00.800  because now you know what the end goal is, right? So all problems you encounter along the way
 
      00:40:06.240  have to be corrected in a way that's consistent with your end goals. So there's no more room for
 
      00:40:12.640  actually realizing you're wrong and correcting the mistakes that you encounter along the way,
 
      00:40:15.840  they must be solved in one particular way, and that particular way has to be consistent with your
 
      00:40:20.800  end game. So anyway, I just wanted to add that caveat that it's not even necessarily that it's
 
      00:40:27.600  that these things are just futile, it's like actively dangerous to believe you've reached this
 
      00:40:32.640  state of perfection. I think that is why we are both so worried about long-termism, because this is not
 
      00:40:38.240  just some trivial philosophy, it is governing millions and millions of dollars. But
 
      00:40:44.880  the second half of the question I think is, okay, let's say people get that we have to make systems
 
      00:40:52.160  that are resilient to bad decisions rather than trying to find a way to make perfect decisions,
 
      00:40:57.360  but there's still this question of uncertainty. How do we deal with all the uncertainty? Because
 
      00:41:01.920  there will still be uncertainty. And to people asking that question, I ask why is uncertainty your
 
      00:41:09.120  goal? I think that this obsession with uncertainty quantification is actually just an artifact of
 
      00:41:16.080  the fact that people had probabilistic tools, and it's easy to talk about probabilistic tools
 
      00:41:22.080  using the language of uncertainty. But who wants certainty is my first question. The answer,
 
      00:41:26.400  they would say, is no, no, we don't actually care about certainty. What we care about is truth,
 
      00:41:31.600  and certainty is a proxy for that. So the assumption is that as we get more and more certain,
 
      00:41:39.040  we will start to collect statements which are more and more true, to which I say,
 
      00:41:45.600  if truth is your goal, then use the word knowledge, because truth is my goal too. And the epistemology
 
      00:41:52.800  and knowledge is about the pursuit of truth. You can be very certain about false propositions,
 
      00:42:01.120  and you can be very uncertain about true ones. So one of Popper's phrases is that a moment
 
      00:42:07.040  of uncertainty clings to every statement, true or false. Certainty does not equal truth. And if
 
      00:42:14.160  you care about truth, as you and I do, then you are firmly in the domain of Popper and epistemology,
 
      00:42:20.240  because we have mechanisms to get closer to truth. And if that's what you care about,
 
      00:42:25.280  good, so do I, so do we. And this is why we talk about knowledge so much, because knowledge is
 
      00:42:30.560  about steering towards truth via correcting mistakes, moral truth, scientific truth,
 
      00:42:38.640  mathematical truth, philosophical truth, all of these make sense. But certainty is a belief state.
 
      00:42:47.040  It's not more than that. And so I ask the hypothetical questioner who wonders how they can
 
      00:42:58.000  possibly deal with all this uncertainty. I ask them to reflect on why they want certainty so badly.
 
      00:43:05.040  Nice. Yeah. Do you want to talk about the Pasadena game for a second, or do you want to leave
 
      00:43:09.360  that for your piece? Because I just love this discovery. And it actually, it's because it's
 
      00:43:15.440  very intuitive. Like I'm kind of disappointed in myself that I didn't think of it before, because
 
      00:43:19.280  it seems clear that this is, I know, I mean, we know conditionally convergent series exist. So
 
      00:43:24.000  it's trivial once you have that goal in mind, it's trivial to construct a game that has that
 
      00:43:29.440  as the end result, right? Or that is the expected utility. But the Pasadena game was the best thing
 
      00:43:34.160  in the world. I, yeah, I'll tell the audience this, but hopefully the piece will come out before the
 
      00:43:41.520  podcast, so that they'll be getting some surprise there. But so, so, what's the story behind
 
      00:43:49.920  my discovering the Pasadena game? So in the piece that I had written previously, I talked about how
 
      00:43:58.560  the expected value of the future is undefined. And then I was like, well, I surely am not the only
 
      00:44:04.720  person to have said this. And it turns out absolutely not. There is a whole body of research talking
 
      00:44:10.240  about this problem. And it's so common that there's a name for it and they're called expectation gaps.
 
      00:44:16.000  And then I discovered something called the Pasadena game, which is,
 
      00:44:19.360  that takes my little like thought experiment and just steps on it. It is like a 10,000 ton gorilla
 
      00:44:28.320  compared to my tiny undefined expectation thought experiment. And so the way it works is you make
 
      00:44:35.280  a gambling game that in expectation produces a conditionally convergent series. And so that's
 
      00:44:40.320  a mathematical jargon term. But all you really need to know about that mathematical jargon term
 
      00:44:44.640  is that it's a weird series. So a series is an infinite sum. So just adding a bunch of
 
      00:44:53.440  things together. And it's constructed in such a way that if you shuffle the order of the stuff
 
      00:44:59.040  that you're adding, it can produce any number you like. So it converges to a number conditional
 
      00:45:04.960  on the order in which the terms appear. And so this is a weird mathematical thing, which
 
      00:45:11.280  mathematicians have known about for like 200 years. I think Riemann was one of the guys who
 
      00:45:16.960  discovered it and named it. So okay, so this is an expectation gap because the expected value of
 
      00:45:24.640  this game is undefined because you can rearrange the terms to be whatever. Okay, so if that's all
 
      00:45:31.920  that the Pasadena game was all about, then it wouldn't be that big of a deal. But the thing that
 
      00:45:37.200  makes it a big deal is that once you have an expectation gap, you can use it to infect
 
      00:45:42.080  everything else you might care about. And this is called the contagion problem. And so this was,
 
      00:45:48.480  I think, named by the philosopher of mathematics, Mark Colyvan, who is quickly becoming one of my
 
      00:45:56.080  favorite new scholars. And I hope to get him on the podcast because he's great. But what's the
 
      00:46:02.160  contagion problem? So the contagion problem is the following. So we were talking about coffee and tea
 
      00:46:09.040  earlier, right? So it's like, if decision theory can't help you decide between coffee or tea,
 
      00:46:14.400  then decision theory is in serious trouble. So let's say you're wanting to decide between
 
      00:46:20.080  coffee and tea, but then let's say that with the coffee action, you add a tiny probability of playing
 
      00:46:27.200  the Pasadena game after drinking coffee. And you can make that probability as tiny as you like.
 
      00:46:33.520  And in fact, from a Bayesian perspective, there's no reason why you would assign a zero probability
 
      00:46:41.440  to it, because now that you, the listener, know what the Pasadena game is, there is some
 
      00:46:45.840  probability that you're going to play it after drinking coffee, even if that probability is
 
      00:46:50.240  infinitesimal. This is just an artifact of Bayesian epistemology, by the way:
 
      00:46:54.960  you just can't force anything to zero, you can force it to an arbitrarily small
 
      00:46:59.920  number, but nothing will actually go to zero. This is just the laws of arithmetic.
 
      00:47:05.520  Exactly. But once you do that, now the coffee option is itself an expectation gap. So once you
 
      00:47:12.080  add an undefined term to something which is defined, both become undefined. And then once you poison
 
      00:47:18.560  the coffee option, you can poison the tea option in the same way. And then you can poison every
 
      00:47:22.880  option you like, just by virtue of the fact that you now know about the Pasadena game.
 
      00:47:29.440  Because once you know about it, there is no argument from within the framework of Bayesian
 
      00:47:33.280  epistemology to assign a zero probability to this thing. And this is a contagion problem.
 
      00:47:38.480  That as soon as you hear about this, now you have to assign some non negligible probability to it.
 
      00:47:43.120  And it poisons every other decision that you want to make. We've just poisoned your mind.
 
      00:47:49.200  It's game over. Sorry. But there is no way out from within the framework. I don't know.
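For readers who want to see the "shuffle the order and get any number you like" behaviour concretely, here is a small numerical sketch in Python. It uses the alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ..., which, in the game's standard formulation, is exactly the series the Pasadena game's expected-value terms work out to: summed in its natural order the partial sums head to ln 2 (about 0.693), but a greedy reordering of terms from the same series can be steered toward any target you choose (2.0 below).

```python
# Numerical sketch of the rearrangement behaviour behind the Pasadena game.
# The alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ... is conditionally
# convergent: in natural order it tends to ln(2), but reordered it can be
# steered toward any target at all.
import itertools
import math

def positive_terms():
    return (1 / n for n in itertools.count(1, 2))    # +1, +1/3, +1/5, ...

def negative_terms():
    return (-1 / n for n in itertools.count(2, 2))   # -1/2, -1/4, -1/6, ...

STEPS = 200_000

natural = sum((-1) ** (n + 1) / n for n in range(1, STEPS + 1))
print(f"natural order: {natural:.4f}   (ln 2 = {math.log(2):.4f})")

# Greedy rearrangement: take positive terms while below the target and
# negative terms while above it. Terms of the same series, different order.
target, total = 2.0, 0.0
pos, neg = positive_terms(), negative_terms()
for _ in range(STEPS):
    total += next(pos) if total < target else next(neg)
print(f"rearranged   : {total:.4f}   (steered toward {target})")
```

A sum that depends on bookkeeping order like this is what the literature quoted above calls an expectation gap.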
 
      00:47:55.920  So what did you think when you first heard about this? Because I thought it was a more
 
      00:47:59.600  perhaps mathematically eloquent exposition of exactly the comment you made in the last
 
      00:48:05.680  episode where we shouldn't be surprised that these paradoxes are arising. I think just in the
 
      00:48:10.000  last episode, you said you take a small number, you take a big number, you multiply them together,
 
      00:48:14.400  and you get another number. And of course, if you're only reasoning based on mathematics,
 
      00:48:19.600  it's just there are symbols on paper. And of course, there are ways to make this
 
      00:48:24.880  go incredibly wrong. And so if you're only relying on this single tool to do all your
 
      00:48:29.760  reasoning for you, of course, you're going to run into trouble. And so while this is a particularly
 
      00:48:35.280  well thought out exposition of the problems you're going to encounter, if you only rely on the
 
      00:48:40.800  expected value calculus, it again is just a slightly more sophisticated version of, well, if you
 
      00:48:46.480  have a small number and a big number, you run into problems. And so it's just yet another example
 
      00:48:51.840  of like, of course, you can't just reason based on numbers. And no actual human being is doing
 
      00:48:57.840  this. Right. So even if you go on the EA forum, it's all about people creatively conjecturing a
 
      00:49:04.560  bunch of ideas, and having them shut down or shot up on the forum, depending on what people
 
      00:49:10.640  think of them, obviously these ideas are not coming based on prior. No one could have predicted these
 
      00:49:16.960  ideas based on the state of the world one second ago. People are creatively coming up with these
 
      00:49:22.960  things and putting them out there for other people to criticize. So people, perhaps without

      00:49:28.160  admitting it, are actually adopting a sort of critical rationalist framework. But, you know, if

      00:49:34.400  you're going to claim to only reason based on mathematics, of course, you're
 
      00:49:38.320  going to run into paradoxes of mathematics, which exist as they exist in any closed framework,
 
      00:49:44.320  right? We know this from Gödel. No closed system is immune to paradox. This is like,

      00:49:50.000  we've known this for a long time. And so it's just yet another example of Gödel's theorem in action.
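A toy back-of-the-envelope version of the "small number times big number" point from earlier in this answer; every number below is invented for illustration and is not anyone's real estimate.

```python
# A toy illustration of the "small number times big number" arithmetic, with
# numbers invented purely for the example (they are not anyone's real estimates).

p_mundane, v_mundane = 0.9, 1_000   # a boring option: very likely to help 1,000 people
p_wild, v_wild = 1e-15, 1e45        # a speculative option: made-up tiny credence, astronomical payoff

ev_mundane = p_mundane * v_mundane  # 900
ev_wild = p_wild * v_wild           # 1e30, which swamps the mundane option

print(ev_mundane, ev_wild)
# Nothing in the arithmetic constrains how the symbols were chosen, so the
# speculative option "wins" any expected-value comparison you care to set up.
```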
 
      00:49:56.800  So that's a fascinating point. I hadn't connected that to like the
 
      00:49:59.680  incompleteness theorems that like, this isn't possible. And nor is it desirable.
 
      00:50:06.880  Yeah, exactly. But something I wanted to pick up on is a thread you asked about earlier,
 
      00:50:15.600  like, what are the other options? And so one answer is focus on making systems that are
 
      00:50:21.600  resilient to bad decisions. Fine. But then the question is, how do you do that? Another option,
 
      00:50:27.360  which is related, but at a slightly different level of abstraction, is the critical rationalist

      00:50:33.600  approach to making decisions. And this is what I was, I think, explaining in a blog post with Mauricio.
 
      00:50:40.000  And this is about making decisions via idea generation and criticism. So you still have to make
 
      00:50:48.880  decisions in your daily life. It's not enough to go to the coffee and the tea and think, okay,
 
      00:50:55.680  how do I make a system which is resilient to me making a bad decision here? You still have to decide

      00:51:01.200  whether to drink coffee or tea; or, right now, I'm deciding to drink a Whistler Brewing chestnut ale.
 
      00:51:06.480  So you have to make decisions. And the way that you do that, from the critical rationalist
 
      00:51:11.360  perspective is generate a bunch of ideas and then criticize them. And the criticism
 
      00:51:17.120  will be roughly in proportion to the importance of the decision. So coffee and tea
 
      00:51:26.800  is essentially a negligible decision. And so you don't think about the infinite consequences of
 
      00:51:35.760  drinking coffee or tea. You just think this doesn't really matter. So I'm just going to quickly choose
 
      00:51:40.160  one. More significant decisions, such as what career am I going to go into, or where am I going to
 
      00:51:46.800  donate my money? These are significant decisions that one has to make. And in this case, this is
 
      00:51:54.480  where you have to rely on the intelligence of other people and yourself. And so you come up with some
 
      00:52:01.280  ideas and then you talk about them with people and you see what they have to say. And it's not just
 
      00:52:07.200  conversation without a goal. It's conversation at the service of having them think hard about
 
      00:52:13.280  what you are suggesting and then criticize that idea and create new ones. And so the important
 
      00:52:19.520  thing to keep in mind here is that this is not a mathematical framework, but it's an evolutionary
 
      00:52:25.200  one. And so it isn't that we're just saying this ad hoc. There is theory behind it and the theory
 
      00:52:33.680  is evolutionary. And so the big thing which the mathematical approach fails to, let's say,
 
      00:52:42.320  account for is the ability to create new options to come up with new ideas. And this is where decision
 
      00:52:52.720  making becomes relevant. And this is why Arrow's theorem doesn't apply to human beings. It's because
 
      00:53:00.160  human beings have the ability to generate new choices. And so Arrow's theorem assumes that there's a
 
      00:53:07.200  fixed set of choices and they have to choose between them. And Arrow's theorem shows that
 
      00:53:12.080  that will always lead to irrationality. But as soon as you liberate the choice making framework
 
      00:53:19.440  from choosing between fixed options and you allow it to account for choosing between
 
      00:53:24.000  new options and continuously sprouting new ideas in the evolutionary way, where when we talk about
 
      00:53:31.680  conjecture, that corresponds to the mutation stage. And when we talk about criticism, that corresponds

      00:53:37.120  to the selection stage. And then you get ideas birthed into the world. And some ideas are more fit
 
      00:53:42.560  for the environment. They spread more quickly. And other ideas are less fit and they die out.
 
      00:53:48.320  And fitness does not mean truth. It means ability to propagate. All the same evolutionary ideas

      00:53:52.960  apply here. It's the study of memetics. This is how human beings actually make decisions.
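A toy sketch of the conjecture-and-criticism loop just described; the ideas, the critics, and the coin-flip stand-in for argument are all made up, and it is only meant to show the shape of the evolutionary analogy, not to claim that deliberation reduces to an algorithm.

```python
# A toy sketch of the conjecture-and-criticism loop described above: "mutation"
# is proposing variants of ideas, "selection" is criticism weeding them out.
# The ideas, the critics, and the coin-flip stand-in for argument are invented.
import random

def conjecture(idea):
    """Propose a few variations on an existing idea (the mutation step)."""
    return [f"{idea}, variant {random.randint(1, 100)}" for _ in range(3)]

def survives(idea, criticism):
    # Stand-in for a real argument; in life this is reasoning, not a coin flip.
    return random.random() > 0.5

def unanswered_criticisms(idea, critics):
    """Return the criticisms the idea fails to answer (the selection step)."""
    return [c for c in critics if not survives(idea, c)]

critics = ["inconsistent with your other goals",
           "contradicts what we know about well-being"]
pool = ["career option A", "career option B"]

for _ in range(5):  # a few rounds of conjecture and criticism
    pool = [variant for idea in pool for variant in conjecture(idea)]
    pool = [idea for idea in pool if not unanswered_criticisms(idea, critics)]

print(pool or "every variant was refuted; conjecture some new ideas")
```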
 
      00:53:59.440  Just ask yourself: the last big decision you had to make, say about which
 
      00:54:08.080  schools to go to or which career to take. Presumably you talked about it with people. Presumably
 
      00:54:14.880  you brainstormed. Presumably you thought long and hard about what some of the consequences were.
 
      00:54:20.960  I am willing to bet that you did not write a little decision matrix and then fill in utility
 
      00:54:26.160  scores. You brainstormed. This is how human beings do it. And so recognizing that this isn't a
 
      00:54:33.840  willy-nilly or ad hoc thing, that there is theory here (it's just not a theory of a mathematical form),

      00:54:40.720  is, I think, important to understand when people are looking for some sort of structured way to

      00:54:48.240  make decisions. There is some, quote, structure here, but it's not of a confining sort. It's more

      00:54:54.240  open and exploratory, if that makes sense. An important point to add here, I think, is
 
      00:55:01.200  one you've made before. I can't remember where. I don't know if this was in a blog post with Mauricio
 
      00:55:06.800  or perhaps a more recent piece, but the idea that criticism is not arbitrary. I think a lot of
 
      00:55:13.520  people would look at this sort of system and say, "Sure." So you're the decision maker and then you
 
      00:55:19.040  subject your purported idea to a bunch of different people and/or yourself. They give you criticism,
 
      00:55:25.200  you criticize the idea, but then at the end of the day, you're sort of deciding which sort of
 
      00:55:29.840  criticism to take on board. It's arbitrary at the end of the day because you're deciding which
 
      00:55:38.240  points of criticism you're handling. We've already said nothing's going to be perfect, so every idea
 
      00:55:43.680  is going to have a bunch of criticism. There's nothing to separate a bad idea from a good idea.
 
      00:55:49.200  So whatever I pick is going to have criticism. You have to decide which one to go with anyway, and you could just
 
      00:55:52.080  make that choice any way you like. I think this is not true because criticisms are just ideas,
 
      00:55:59.840  and ideas have a certain reach. They have roots to them. There are reasons for adopting these ideas.
 
      00:56:07.120  And these reasons are grounded in rationality. So, for example, if you're making

      00:56:16.960  a decision that's going to arbitrarily impact some group over another group,

      00:56:27.040  the criticism that, with this arbitrary sort of reasoning, you could justify any sort of bigotry you

      00:56:34.720  like, whether it's racism or gender stereotyping or anything like this, is a very powerful
 
      00:56:43.040  criticism and something that's very hard to ignore. That is not on equal footing with every other
 
      00:56:48.160  sort of criticism. And so criticism applies very real evolutionary pressure, like you were saying,
 
      00:56:59.440  because of the reach and the power of its ideas. And because as agents who are trying to abide
 
      00:57:06.560  by the best reasons possible, we are subject to sort of the power of different arguments here.
 
      00:57:14.560  And so criticism in this way, good criticism will really refute an idea and not allow us to take
 
      00:57:24.240  it on board. Yeah, so this is the thing I'm writing about now, which is: to prove to yourself
 
      00:57:31.120  that criticism is non-arbitrary, tell your family you've decided to take up recreational
 
      00:57:34.880  drinking and driving. They are going to criticize that decision, and they're going to hopefully steer
 
      00:57:41.040  you into better decisions. But what exactly are they criticizing? What they're criticizing is
 
      00:57:46.000  typically inconsistency: inconsistency between your decision and, say, your other goals in life,

      00:57:52.640  or contradictions between what you've decided to do, the idea you've put forth,

      00:57:58.720  and what we know about human values, what we know about well-being already. And so the criticism,
 
      00:58:06.320  the non-arbitrariness of criticism, is due to the fact that it's typically about criticizing

      00:58:12.320  those ideas which don't cohere with other ideas we hold to be true, ideas about well-being,

      00:58:18.640  or ideas about science, or are internally inconsistent, or are contradictory to your stated aims.
 
      00:58:27.440  Nice, yeah. And in the EA context, Peter Singer actually talks about this in his book,
 
      00:58:33.520  The Expanding Circle, where he talks about the reason why our sort of moral circle has expanded
 
      00:58:39.520  from those in our direct communities, to those in our entire country, to those in the entire world,
 
      00:58:44.240  eventually to animals, is because we recognize the force of these arguments. And as soon as you
 
      00:58:51.200  start drawing arbitrary lines between I'm going to care about people more in my community than I'm
 
      00:58:55.280  going to care about people in Taiwan, say, that is like a very internally inconsistent argument to
 
      00:59:02.640  try and make, and it doesn't stand up to heavy criticism. And so anyway, I just wanted to sort of
 
      00:59:11.120  underline this argument in the moral framework, because moral reasoning is equally subject to
 
      00:59:17.360  this sort of criticism, where arbitrary reasons that justify partiality based on skin color,
 
      00:59:25.840  gender, species, or whatever, are subject to the exact same sort of criticism that any

      00:59:33.840  other flimsy ideas are. Okay, one thing I wanted to talk about, whether this is our last thing

      00:59:42.800  or not, I don't know, is just prediction in

      00:59:48.160  general and reference classes and stuff like that. Okay, yeah, as much fun as
 
      00:59:58.160  the EA community is to engage in a nice back and forth with, I did get a lot of flak for these

      01:00:04.480  comments. At one point all the comments on my post were negative. So

      01:00:13.360  some people went through and, like, downvoted all of them. And this was, I guess, right after I
 
      01:00:20.080  posted the joke about Bayesianism versus critical rationalism. So maybe I should have not done that.
 
      01:00:25.600  But there was a question here, which I think is actually pretty insightful into the
 
      01:00:32.560  differences between these two communities. And the question is as follows, it says, I'm curious.
 
      01:00:38.880  My friend has been trying to estimate the likelihood of nuclear war before 2100. It seems like this is a

      01:00:44.480  question that is hard to get data on or to run tests on. I'd be interested to know what you'd
 
      01:00:50.000  recommend them to do. Is there a way I can tell them to approach the question such that it relies
 
      01:00:54.720  less on subjective estimates and more on estimates derived from actual data? Or do you just think
 
      01:01:01.360  they should drop the research question altogether? Because any question that would rely on subjective
 
      01:01:08.480  probability estimates is basically useless. And so I just think this is a nice arena for us to
 
      01:01:15.600  sort of set up, or at least for me to set up, my thoughts on when I think this sort of

      01:01:20.160  prediction about the future is useful and how I think about modeling questions. Because obviously we do

      01:01:26.800  have to make some sort of predictions; we have to have models that predict some things
 
      01:01:32.800  about the future under certain conditions. And so let's just like handle this question on nuclear
 
      01:01:39.760  war before 2100 for a second. So my answer here was like, well, first, I'm not going to tell anyone
 
      01:01:47.200  what to do with their time. So whatever you want to do with your time, go ahead; far be it from me
 
      01:01:51.440  to tell you what to do. But let's just recognize first of all that any data that you gather on this
 
      01:01:59.280  question is subject to the constraint that it happened in the past. So let's say, for example,
 
      01:02:06.560  you're trying to estimate the likelihood of nuclear

      01:02:11.760  war before 2100. So maybe you go out and you gather data on how many nuclear

      01:02:18.320  near misses there have been, how many almost-launches there were in the

      01:02:23.760  Cold War, for example, and there are several incidents of this. And then you use this and you say, okay,
 
      01:02:28.880  based on that frequency and the geopolitical tensions at the time and what countries issued
 
      01:02:38.560  the commands, etc., etc., I'm going to forecast that our likelihood of, you know, dying by nuclear

      01:02:45.680  war before 2100 is 2% or something. So the question is, do I think that sort of model is absolutely
 
      01:02:52.560  useless? And should we not rely on it at all? Or like, what is that, you know, what is that telling
 
      01:02:57.600  us like, can we actually make that probabilistic prediction? And my answer here was that if you're
 
      01:03:03.440  going to form a model like that, it's crucial to understand what sort of assumptions you're making.
 
      01:03:09.120  So if you're pulling data from the past, and you

      01:03:15.680  have a model that says, based on this data, we're going to predict that the likelihood

      01:03:21.840  of nuclear war is such and such, then the assumptions of that model are that the future looks basically
 
      01:03:27.200  like the past: the geopolitical tensions that were in place when we gathered the data are

      01:03:33.840  approximately the geopolitical tensions that will be in place from now until the year 2100,
 
      01:03:41.040  so on and so forth. So there's like a set of assumptions that were met when you were gathering
 
      01:03:45.440  the data. And now you're saying that that set of assumptions will hold between now and 2100.
 
      01:03:52.720  And that's fine if you want to have that model. But we need to be very, very explicit that that
 
      01:03:58.400  model is not actually answering the question, what is the probability that there's going

      01:04:04.320  to be a nuclear war before 2100? It's answering the question:

      01:04:08.720  assuming that the world between now and 2100 looks approximately

      01:04:15.600  like it did when I gathered this data, and such and such conditions were met on geopolitical

      01:04:22.320  tensions and stuff, then the probability of us going to war is so and so. So
 
      01:04:30.640  this might seem like a weird sort of minor point, but the point is that all these models, when you're

      01:04:37.440  gathering data, have tons of assumptions built into them. So if you're going to forecast things
 
      01:04:42.880  into the future, by necessity, you have to assume that the future looks like the past.
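A sketch of the kind of frequency model being described here, with the assumptions written out; the observation window, near-miss count, and escalation probability are placeholders, not real estimates from anyone.

```python
# A sketch of the kind of frequency model being described, with its assumptions
# written out explicitly. Every number is a placeholder, not a real estimate.

observed_years = 45     # a Cold War style observation window (placeholder)
near_misses = 10        # counted incidents in that window (placeholder)
p_escalate = 0.01       # assumed chance a single near miss becomes a war (placeholder)
years_remaining = 2100 - 2024

# Assumption 1: near misses keep occurring at the historical rate.
rate_per_year = near_misses / observed_years
# Assumption 2: each future near miss independently escalates with p_escalate.
p_war_per_year = 1 - (1 - p_escalate) ** rate_per_year
# Assumption 3: geopolitics, technology and institutions stay as they were when
# the data was gathered, every single year until 2100.
p_war_by_2100 = 1 - (1 - p_war_per_year) ** years_remaining

print(f"P(war before 2100, given assumptions 1 to 3) ~ {p_war_by_2100:.2f}")
# The model answers the conditional question, not "what is the probability of
# nuclear war before 2100" full stop.
```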
 
      01:04:51.600  So if you're going to forecast behavior into the future, you have to say my model is conditional on
 
      01:04:56.480  the future looking like it did when I gathered the data. And that's fine. You can do that. And then
 
      01:05:03.360  we can just argue about how good those assumptions are. In this case, I would say those assumptions
 
      01:05:08.400  are pretty silly. I would say like the world looks very different now than when the Cold War
 
      01:05:15.120  was in full swing. And it's very likely that before 2100, the world will look significantly
 
      01:05:23.040  different. We're going to have new technologies. We're going to have new political ideas. There's
 
      01:05:27.840  going to be new international relations. There's a whole host of things that we just cannot predict.
 
      01:05:32.080  And so employing that model in practice to say that we know what the probability is of nuclear
 
      01:05:38.720  war is pretty silly. So it's not to say I think these models are always useless, but it's to say that
 
      01:05:45.520  it's the assumptions embedded in models about when you're predicting the future and what
 
      01:05:51.360  reference class you're using, where the data is coming from, that I think are like really
 
      01:05:55.840  important and get sort of just smuggled in when you're saying what is the probability of nuclear
 
      01:06:01.360  war before 2100? Yeah, I have so much to say. So first, to the questioner who asked if their

      01:06:08.320  friend should stop working on this problem: I think yes, they should. This is a

      01:06:13.680  waste of a life. This is a waste of time. And this is going to just, yeah, they totally should.
 
      01:06:20.800  And I just want to bump and re-mention the name Vaclav Smil. I think Vaclav Smil's

      01:06:29.440  book Global Catastrophes and Trends should be read by everybody who

      01:06:34.720  cares about existential risk, because he refuses to use probability in places where
 
      01:06:41.360  it can't be done. He's very clear about how certain problems just are not quantifiable in this way.
 
      01:06:47.840  And so he doesn't do it. And that isn't to say we can't say intelligent things about nuclear
 
      01:06:54.400  war in the future. We can. We just can't assign numbers to it. And I'll go even further actually,
 
      01:07:00.720  which is that when I hear people say things like the probability of the polar ice cap
 
      01:07:06.480  melting by 2100 is 65%, I ignore it. My default assumption is that all such probability values
 
      01:07:15.200  are meaningless unless I know exactly where they've come from. So typically when I hear this stuff,
 
      01:07:21.120  I don't take the number at all seriously. My default is just that it's meaningless. But that
 
      01:07:27.520  is not to say that it is always meaningless. And so I should explain a little bit about how
 
      01:07:31.200  sometimes these numbers are derived. So a good heuristic is if you think the model is able to be
 
      01:07:40.080  programmed in an arbitrarily complex program, but programmed, and then run over and over and over
 
      01:07:47.520  again, then I start to take these numbers seriously. But that is the rare case. That is the case in
 
      01:07:52.400  certain areas of physics. That's the case in virology, but that is not the case in geopolitics. That's
 
      01:07:57.440  not the case in human history. We could not possibly simulate the future span of human history because
 
      01:08:04.080  we don't know what ideas are going to be generated. And so models are possible; climate change models,
 
      01:08:10.560  for example, are based on assumptions. And we can say some things about the effect of carbon
 
      01:08:17.680  dioxide on global climate. But we don't know what the effect of the future economy is going
 
      01:08:24.240  to be. We don't know if there's going to be another recession, which is going to cause
 
      01:08:26.880  drops in carbon emissions and stuff. So on those numbers I don't put too much weight,
 
      01:08:33.520  but they're somewhere in between, say, particle physics and predicting nuclear strikes and stuff.
 
      01:08:41.600  But just ask yourself, how much knowledge do we have? Could we possibly program this into a
 
      01:08:48.160  computer where you would then be able to run this simulation hundreds of thousands of times?
 
      01:08:52.960  And are the assumptions underlying this reasonable? And in the case of predicting the likelihood of
 
      01:08:58.320  nuclear war, it's impossible. So I would very much say to this person to give it up.
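A minimal example of that programmability heuristic, using a toy outbreak model with invented parameters; the contrast being drawn is only that its input space can at least be written down.

```python
# A minimal example of the "could you program it and run it over and over"
# heuristic: a toy outbreak model whose input space is at least well defined.
# All parameter values are invented for illustration.
import random

def outbreak_size(p_transmit=0.15, contacts=10, population=2_000, seed_cases=3):
    """Crude generation-by-generation outbreak with a mass-action infection chance."""
    infected = ever_infected = seed_cases
    susceptible = population - seed_cases
    while infected > 0 and susceptible > 0:
        new = 0
        for _ in range(infected * contacts):  # this generation's contact events
            if susceptible > 0 and random.random() < p_transmit * susceptible / population:
                new += 1
                susceptible -= 1
        infected, ever_infected = new, ever_infected + new
    return ever_infected

runs = [outbreak_size() for _ in range(300)]
print(sum(size > 500 for size in runs) / len(runs))  # share of runs with a large outbreak

# Try writing the analogous simulator for "nuclear war before 2100": you cannot
# even enumerate the inputs, which is the point being made above.
```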
 
      01:09:04.160  Yeah, a good thought experiment along these lines comes from our pal,

      01:09:07.840  the editor of Conjecture magazine, actually, which is publishing all my

      01:09:13.040  fucking rogue pieces, getting me into trouble with various groups.
 
      01:09:17.200  A nice way to formalize, or rather, I guess, just cast in a different light, what you're saying

      01:09:26.640  about programmability is to just ask yourself: do you understand the parameters that this model

      01:09:31.840  is running on? So in the case of nuclear war before the year 2100, let's try and list out the inputs

      01:09:38.960  to this model, right? We'd have to write, you know: does Putin cede Russia over to

      01:09:46.800  the next Russian president? Does Mongolia rise up and develop nukes? You know, there are

      01:09:53.920  infinitely many things here that could impact this model. And of course, we can't write a computer

      01:09:58.640  program for this, because we don't even know what the fucking input space is, right? Whereas with
 
      01:10:03.440  the virus, it's a complex program, of course, but there's a well-defined

      01:10:09.840  input space. So you can model, you know, virus transmission rates, how distant people are
 
      01:10:16.080  from each other, how often they come into contact with one another. And so this is why modeling the
 
      01:10:21.920  virus is actually feasible. As computationally intensive as it might be, that's a different
 
      01:10:27.120  point from feasibility, because we can actually write out a lot. And even that is conditional on
 
      01:10:32.800  assumptions. So the assumption is that during the virus spread, there's not going to be a new
 
      01:10:37.440  lockdown imposed on Allegheny County, for example, right? So it's conditional on the fact that everybody
 
      01:10:42.720  is going to be going about their normal activity in the same way that they were when the data was
 
      01:10:47.840  collected and the model was programmed. But even there, it's always conditional. And then we just
 
      01:10:54.960  have to ask ourselves, is it reasonable to assume that these conditions are going to hold in the
 
      01:10:59.280  future as they do now? To any machine learners out there: this is what the IID assumption is all
 
      01:11:05.680  about. Independent and identically distributed basically says the distribution for the test set

      01:11:13.520  is going to be the same as the training set. This is machine learning language for the induction

      01:11:18.800  premise, which is that the future will resemble the past. And this is the same as the epistemological

      01:11:23.520  language that all predictions are conditional. We're all saying the same thing

      01:11:28.720  using slightly different terminology; it's a nice map between disciplines.
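A small synthetic sketch of the IID point in machine-learning terms; the linear "past" regime and the shifted "future" regime below are both invented for the example.

```python
# A small sketch of the IID point in machine learning terms: a model fit on one
# regime silently assumes the test data comes from the same distribution.
# The data below is synthetic and the "regime change" is invented.
import numpy as np

rng = np.random.default_rng(0)

# "The past": a roughly linear regime.
x_train = rng.uniform(0, 10, 500)
y_train = 2.0 * x_train + rng.normal(0, 1, 500)
slope, intercept = np.polyfit(x_train, y_train, 1)   # the model of the past

def mse(x, y):
    return float(np.mean((y - (slope * x + intercept)) ** 2))

# Test data drawn IID from the same regime: error is about what training promised.
x_iid = rng.uniform(0, 10, 500)
print(mse(x_iid, 2.0 * x_iid + rng.normal(0, 1, 500)))        # roughly 1

# "The future", where the regime has changed and the induction premise fails.
x_new = rng.uniform(0, 10, 500)
print(mse(x_new, 2.0 * x_new ** 1.5 + rng.normal(0, 1, 500)))  # far larger
```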
 
      01:11:33.040  Nice. Yeah. This is actually one thing that I think Popper didn't get right, or at least
 
      01:11:38.640  explicated in sort of an erroneous fashion, which was he drew this distinction between conditional
 
      01:11:45.280  models and unconditional models, where unconditional models were just saying like, something's going
 
      01:11:49.840  to happen. Right. So there's going to be an eclipse and it's going to be viewable from Lebanon,
 
      01:11:57.040  on May 22nd, 2024 or something. But there are, you know, no models which are actually

      01:12:04.240  unconditional. Of course, everything has assumptions. So even the eclipse, or rather the eclipse

      01:12:09.200  model, relies on the assumption that the planet's going to be there. We're not going to blow

      01:12:14.240  the planet up, we're not going to develop the technology to steer the planet off

      01:12:19.680  its orbital course, the sun's not going to blow up, or something. So it's always a matter of
 
      01:12:25.040  just looking at the assumptions of the model and deciding whether we think these assumptions

      01:12:30.880  are going to hold. So it always comes down to a process of criticizing
 
      01:12:37.200  the assumptions of any model and deciding which models we're going to pay attention to.
 
      
      01:12:44.560  George Box. George Box. It was George Box who came up with 'all models are wrong,

      01:12:53.760  but some are useful.' Yeah. Yeah. So to clarify the Popper thing: in some of his writing,

      01:13:01.920  I think in The Logic of Scientific Discovery or perhaps Objective Knowledge, he talks about how
 
      01:13:06.640  there are no unconditional predictions; they're all conditional on background knowledge. Oh, so he does

      01:13:12.400  say. Yeah. So of course, of course, Popper. So yeah. But the thing with the eclipse and stuff is
 
      01:13:19.200  yeah, there's no perfect unconditional anything. But it's just that the conditions

      01:13:25.200  for this prediction aren't really different from those for any other prediction you might make

      01:13:30.320  about any other thing. So predicting that there won't be an eclipse and predicting that there will be

      01:13:35.200  an eclipse are both going to be conditional on the same background knowledge. Oh, nice.
 
      01:13:39.040  Yeah. So if the things that you're conditioning on apply to every bloody

      01:13:42.800  prediction whatsoever, then you can kind of forget about them. But yeah, of course, everything is
 
      01:13:47.440  conditional on something. The other point, so I think somebody in the EA forum pointed this out
 
      01:13:53.760  to the effect of saying, like, but there's no such thing as an unconditional prediction.
 
      01:13:59.920  And why couldn't long termists also make all their predictions conditional on

      01:14:05.600  the sun not exploding tomorrow, or what have you? And the answer is, of course, you can;
 
      01:14:13.520  everything's conditional, but it is incumbent on those who take a scientific mindset to try to
 
      01:14:19.360  make the conditions under which their prediction will hold as precise as possible. So you can always
 
      01:14:26.480  make these vague predictions conditioned on just the future existing fine. But it's like the
 
      01:14:32.960  falsifiability thing: it's always so irritating. People say, well, of course, our theory
 
      01:14:38.560  is falsifiable because in a billion years from now, we'll find out what happened and then it'll be
 
      01:14:44.000  falsified. Well, okay, actually falsifiability is not a Boolean. It's not falsifiable or not
 
      01:14:50.400  falsifiable. But also, it should be up to the person proposing the theory to work
 
      01:14:57.920  to make their theory as falsifiable as possible. So someone who says, yes, long termism is falsifiable,

      01:15:06.480  you idiot (obviously they say it with a smile and a hug), but saying that you shouldn't

      01:15:12.800  accuse long termism of being unfalsifiable because technically it is, misunderstands a couple of things. First, they
 
      01:15:18.640  acknowledge that falsifiability is valuable. But they don't acknowledge that it is the job of the
 
      01:15:26.560  theorist to make their theory as falsifiable as it possibly can be. So it isn't up to us critics
 
      01:15:35.120  to point out that it's not falsifiable. If you're contributing to the project of human
 
      01:15:40.400  betterment and of theorizing, then if you're going to make a theory, it's up to you to make it more
 
      01:15:45.600  and more falsifiable, try to get it as falsifiable as fucking possible so that it can be falsified.
 
      01:15:52.880  And so falsifiability isn't a binary property. It's more of a continuous thing that you try
 
      01:15:59.760  to strive towards. And so if the proponent of long termism says

      01:16:08.240  "long termism is falsifiable," it's like, yes, but we're all going to be dead

      01:16:13.280  before it's falsified. And so how can we help to take the long termist philosophy and change it
 
      01:16:21.760  to become more and more falsifiable so that it can be falsified within a couple years? That
 
      01:16:26.320  would be productive research. This would be something which would genuinely contribute
 
      01:16:33.680  to the project of human betterment, to try to turn long termism into something which is falsifiable
 
      01:16:38.880  within a couple years. Yeah, I ran into this multiple times. Like, the first section of

      01:16:42.800  my piece kept evolving because at first I was criticizing it for being unfalsifiable.
 
      01:16:47.600  And then people, I guess correctly pointed out, like, no, it is falsifiable, right? We're going to know
 
      01:16:52.560  in a billion years how many people there were, whether there were 10 to the 15 or 10 to the 50 or whatever.
 
      01:16:57.440  And yeah, sure, it's falsifiable because we know a lower bound on this is like
 
      01:17:03.520  a thousand people or something. We can definitely say that certain things are wrong.
 
      01:17:08.560  And so this made me just fight, which I hate doing. I just was fighting with the language
 
      01:17:13.600  a bunch to make sure like, okay, you know, but the point is like, for all practical purposes right
 
      01:17:19.520  now, it's impossible to falsify because you can just say, well, there's going to be 10 to the 15
 
      01:17:24.000  people in the future. There's going to be 10 to the 50 people in the future. There's going to be 10
 
      01:17:26.880  to the 100 people in the future. Yeah, I'm just echoing my fucking frustration with this like
 
      01:17:31.200  falsifiability thing, because, like, fine, it's falsifiable at the end of the world. I mean, we'll

      01:17:35.440  know how many people there were, but it just does not help. It's unhelpful.
 
      01:17:39.040  This is the same thing that occurred with the string theory too, right? Where like,
 
      01:17:42.240  they came up with a theory that had like, I don't know, 10 to the 500 parameters.
 
      01:17:46.000  Right. And so technically fine, it's falsifiable. You can iterate through each and every one of these,
 
      01:17:50.960  but for all practical purposes, it's not falsifiable. And that's the only thing that matters because
 
      01:17:55.040  falsifiability is a practical criterion. It's something that we should be able to do
 
      01:17:59.600  within the next couple of years. And it is something which is incumbent on the theorist to
 
      01:18:05.680  work at. It is not something that opponents should be pointing out. It's something that anyone who
 
      01:18:11.520  takes science seriously should strive towards. But not only just science,
 
      01:18:17.840  like in speech as well: to try to communicate your ideas in as clear a way as possible, so that your
 
      01:18:24.720  conversation partner can spot holes in it. There are many people who like to talk in this very vague
 
      01:18:31.200  abstract, imprecise, airy-fairy, fluffy way that you leave the conversation thinking, like,
 
      01:18:36.800  I don't know what the hell they said. I don't know if it's true or if it was false, because
 
      01:18:40.240  I just couldn't really understand it. And that would be like not falsifiable communication.
 
      01:18:45.920  I just changed it to irrefutable. Totally. Just avoid the whole semantic discussion.
 
      01:18:58.560  Oh, so here's one. I thought it was interesting. So another questioner asked like, okay, fine,
 
      01:19:06.880  we get it, work on short term problems. But how do you decide what is a current problem?
 
      01:19:12.080  Oh, nice. Like, there are a lot of current problems, and how do you decide what to
 
      01:19:17.680  work on? I think it's a personal decision. I don't think that there should be a formula. I think that
 
      01:19:26.160  whatever you personally find engaging and interesting to yourself, and anything you personally want
 
      01:19:31.440  to work on, I think is what you should be working on. And this is one of the things I dislike about
 
      01:19:37.600  the X risk folk because they say these are the most important problems to work on. And I don't
 
      01:19:43.600  think that anyone can know that, frankly. And I think that which problems you decide to work on,
 
      01:19:49.520  which areas you want to research should be entirely governed by what you find interesting and what you
 
      01:19:57.360  think is urgent at this particular time in your life. I think Deutsch calls it something like
 
      01:20:05.360  the fun criterion, although I haven't totally understood that. And so like, I don't want to
 
      01:20:11.520  tell people what problems they should be working on. I think that that will get us into trouble
 
      01:20:15.360  because like take vegetarianism, for example, in my own life, I haven't made that a big focal point
 
      01:20:21.040  of stuff that I think about, but you have. And that's great. And I would never tell you not to
 
      01:20:26.240  work on that problem and work on a problem that I find more interesting. I think that human beings
 
      01:20:31.520  are idiosyncratic. And we have a wide variety of interests and the things that appeal to you
 
      01:20:35.760  are going to appeal differently to me and differently to other people. And so I think that
 
      01:20:41.520  how you decide which problem to work on should be a personal decision, which you make based on
 
      01:20:50.240  what you find interesting at the time. And that's about as much advice as I would want to give
 
      01:20:55.440  there. It's always a process of criticism. So I think problems arise like anything else: the conjecture
 
      01:21:04.720  that something is a problem is like any other conjecture. So someone thinks something is a problem,
 
      01:21:10.400  and then we think about it and we criticize the assumptions, we criticize why we think it might
 
      01:21:14.960  be a problem. And it's like any other theory. And so if the theory stands up to scrutiny, then
 
      01:21:20.960  we adopt it as a problem. And if not, we don't. And this is just basically restating your point about the
 
      01:21:26.000  assumptions, right? Like, do we think the assumptions driving a question, driving a model, are realistic?
 
      01:21:33.680  And this is what tells us what's a problem. It's always conjectural. It's always, we could always be
 
      01:21:41.200  wrong about what an actual problem is. And yeah, and so in this way, it's just it's like any other,
 
      01:21:48.960  it's like any other idea, any other thought, it's a conjecture in the world. And we continually
 
      01:21:53.680  criticize it and see which ones stand up to the most scrutiny. And for me, I think, you know,

      01:22:00.160  AGI does not stand up to much scrutiny, but other problems do. And like, talking about Einstein
 
      01:22:07.040  is so useful. And we do it only because like talking about strong long termism, it crystallizes
 
      01:22:11.680  some ideas from which you can kind of radiate out. But Einstein was motivated to work on
 
      01:22:18.480  special relativity because he saw this problem of Maxwell's equations, right, where like he saw
 
      01:22:24.880  that they implied that the speed of light is going to be constant independent of the reference frame
 
      01:22:31.200  in which the person was traveling. And that was what he thought was super interesting to work on.
 
      01:22:36.960  So he wanted to work on that. And fuck yeah, go Einstein. But imagine telling him, no, you know,
 
      01:22:42.000  that seems like a very low impact problem to be working on. Instead, you should be working on
 
      01:22:45.680  fertilizer, which was another big problem around that time, with, what was it, the

      01:22:51.200  Haber-Bosch process. And we don't know what problems are going to be incredibly fruitful,
 
      01:22:58.240  and what problems are going to be dead ends. And so in the similar way that I don't want to
 
      01:23:04.240  constrain people's ideas through a set of formulae, I don't want to constrain what people decide to
 
      01:23:11.600  work on based on what you and I deem to be important. And I say this very consciously,
 
      01:23:19.840  knowing that this is the style of the x risk, where they say their problems are so important,
 
      01:23:27.280  they're going to label it existential risks. There's no way anyone can know this. And so
 
      01:23:32.960  if you want to trust Toby Ord and FHI to tell you what the most important problems are,
 
      01:23:37.760  you are welcome to do that. But I think that if you also want to trust your intuition,
 
      01:23:42.880  if you want to trust like what you personally think is fun, if you want to work on figuring out how
 
      01:23:47.600  to, I don't know, produce online MOOCs, go for it. I have a friend who's
 
      01:23:55.120  working on the problem of how do you get kids super into Dungeons and Dragons. And that's awesome.
 
      01:24:03.360  That's a cool problem. Work on that. Fuck yeah. Like this whole mantra of like,
 
      01:24:08.560  here are the four or five problems which are the most important things to work on. And now,
 
      01:24:12.560  you can go work in some job you hate, as long as you make a lot of money, and then donate it to us
 
      01:24:17.920  so that we can continue working on these problems. And then what's the phrase for it?
 
      
      01:24:23.760  This is having people do the thinking on your behalf. And I think it's a terrible idea.
 
      
      01:24:36.400  Now we're taking on more than just long termism. We're taking on the entire...
 
      01:24:40.240  I guess I'm taking on, I'm criticizing the idea that there are a small set of problems,
 
      01:24:48.640  which are worth working on. And the rest aren't.
 
      01:24:50.960  I think there is something quite foundational here, which is a criticism of, like,

      01:24:58.800  utilitarianism, where utilitarianism says you need to take whatever action brings about the most

      01:25:06.400  good. But this really ignores the role that knowledge creation plays in making
 
      01:25:16.480  progress. And so it's again, this sort of passive versus active approach to creating knowledge and
 
      01:25:24.720  helping people. So the passive way would say, look at all the options in front of you and choose that,
 
      01:25:30.320  which has the highest expected value, or the highest likelihood of helping people.
 
      01:25:37.120  But this really ignores the role that like creativity and knowledge plays in
 
      01:25:44.480  generating progress. And so there is, like, some fundamental tension here that I haven't really
 
      01:25:50.320  been able to sort through, but classical utilitarianism really ignores that sort of progression of knowledge.
 
      
      01:26:01.840  This is great. You've got my little cogs turning now. Like, classic utilitarianism, or just the
 
      01:26:09.040  way that the X risk conversation goes is like, here are the fixed set of problems that are
 
      01:26:16.880  valuable to work on. Let's say there's five. And now here's some tools to weigh which of these five
 
      01:26:25.840  you should be working on. It completely ignores, in the same way that it ignores the idea generation

      01:26:32.320  stage, the problem discovery stage. Like I think that the problem of how do you
 
      01:26:40.320  massively educate children in developing countries is a huge problem. It's a problem which I think
 
      01:26:50.000  might actually be the master problem because it would provide a huge amount of potential
 
      01:26:54.880  knowledge to solve all of the other problems. Yeah, exactly.
 
      01:26:57.680  Because if we could figure out some mechanism to allow children to self-educate via the internet
 
      01:27:04.160  and say India and Africa and Mongolia, then the amount of knowledge and ideas that would be coming
 
      01:27:12.480  out of these places would be just revolutionary. But I wouldn't dream of telling people this is
 
      01:27:18.720  the only problem they should be working on. It's personally a problem I think is very interesting.
 
      01:27:23.120  And I think it's also in fact one that could go a long way into solving all the X risk problems,
 
      01:27:30.640  which the FHI kind of highlights. But imagine if I took that and I said this is the meta X risk
 
      01:27:38.640  and you should only work on this problem. It seems like, one, it's going to be hubristic,
 
      01:27:43.760  but two, it ignores the fact that other people may see things that I haven't seen and other people
 
      01:27:51.120  are as capable of identifying problems and working on them as I am, as you are.
 
      01:27:56.480  And we shouldn't limit other people's creativity by saying there's a small class of problems
 
      01:28:03.760  and you can choose amongst these and that's all you can do, or should do.
 
      01:28:07.200  It's interesting. It seems to tie back to the assumptions of models, right? Because if you have
 
      01:28:15.680  basically this model of the most important problems to work on, that's what it is. It's a model
 
      01:28:20.640  based on the expected impact of certain interventions. And what we should be looking at is the assumptions
 
      01:28:28.080  of that model. And that model can be very useful, right? If people are set on pursuing a career in
 
      01:28:34.080  medicine, say, or development economics, it can be very useful to know within those disciplines,
 
      01:28:40.320  what's the most impactful thing that is currently supported by the evidence? Is it like being a
 
      01:28:46.160  doctor in Rwanda, for example, or being a doctor back in the UK? Is it working on deworming or
 
      01:28:52.320  distributing bed nets? What have you? But it's all about the assumptions contained in that model.
 
      01:28:57.600  So as opposed to saying these are the most important problems to work on, we should be saying,
 
      01:29:03.040  based on the things we've looked at and current forecasts about the future and what
 
      01:29:11.200  economies look like right now, here's our best estimates for these various different areas.
 
      01:29:16.320  And then one can say, okay, if I'm going to choose to adopt the assumptions of this model,
 
      01:29:22.160  then I'm going to take this advice on board. But then there's always room to say, you know,
 
      01:29:26.160  these assumptions only can be... There's no way they capture the future 20 years from now,
 
      01:29:32.240  because there's going to be new ideas, there's going to be interventions that come about,
 
      01:29:35.040  which completely disrupt everything. And then I'm going to have to reevaluate.
 
      01:29:39.680  And so just like, you know, 80,000 hours and EA gives impact evaluations for various careers,
 
      01:29:49.120  these are all just conditional models again. And so we should just be criticizing the
 
      01:29:55.040  conditions under which these sort of models hold. We should probably wind it down.
 
      01:30:00.560  This is certainly not going to appease all of our critics, but there will be more writing
 
      01:30:06.880  hopefully coming out. And we have the crossover episode with Finn and Luca, which I'm really
 
      01:30:13.520  looking forward to. And so they're going to be pushing us pretty hard, I think. But this is a
 
      01:30:19.280  fun, just kind of scattershot: shooting the shit, getting too drunk for my own good.
 
      01:30:24.240  Shoot the shit and getting drunk. So who's better than that?
 
      01:30:26.160  All right, man. Well, that was fun. Let's talk.