The Crazy Train
A conversation with the high council of effective altruism
We need only 10 new paid subscribers to reach our goal: 26 by the new year! We hope that anyone who can will pitch in to help The New Critic survive into 2026. You can read more about funding our project here — a $30 annual subscription costs as much as a couple paperbacks or movie tickets and goes a long way to supporting excellent conversations like the one below.
Elan Kluger is a 22-year-old writer from Michigan, editor of The New Critic, and a History student at Dartmouth College.
Noah Birnbaum is a 21-year-old Philosophy undergraduate at the University of Chicago. He co-runs UChicago Effective Altruism and writes at the blog Irrational Community.
Amos Wollen is a 21-year-old postgrad student at Oxford. He blogs at Going Awol.
Matthew Adelstein is a 22-year-old undergraduate studying Philosophy at the University of Michigan. He also writes the blog Bentham’s Bulldog.
I am not an effective altruist. Effective altruism is defined below as a “movement that’s just broadly trying to do good, as effectively as you possibly can,” a movement that thinks of altruism as a question of calculations. Matthew Adelstein, author of the prominent blog Bentham’s Bulldog, calls EA “the purest form of altruism,” as it is interested in what is maximally effective in saving lives and reducing harm, not in the kind of altruism that makes you feel good about yourself.
I am not an effective altruist, but I admire the movement greatly. I meet people across religions — Catholics, Jews, Protestants, Muslims — and rarely am I affronted or moved by their moral seriousness. Those religions require a tithe, the giving away of a certain percentage of one's annual income, but few adherents actually give one. They have elegant rhetoric and beautiful buildings, but their actions rarely match up.
The same is not true of EA. While few admire their rhetoric — calculations of efficiency are less than elegant — and they do not have many buildings to speak of, members of EA do what they say they will do: they give away their money. They spend a great deal of time thinking about how they will give away their money and then they do. I admire them for that.
EA is one of the defining philosophical movements of our age and one of the rare movements that gathers intelligent, uncynical people together. It is small, but growing, and has chapters sprinkled at most major universities. In London, I found myself at an EA adjacent party, with free alcohol (great) and many, many young people — alas, mostly men. But I was and remain curious about this movement and am trying to learn as much as I can. I am still collecting notes.
Three EA friends and acquaintances help me along the path in my exploration. I hope you enjoy our conversation.
This conversation has been edited for length and clarity.
ELAN KLUGER You would define yourselves as effective altruists, is that correct? What does that mean?
NOAH BIRNBAUM Yeah, I think I would. I would define effective altruism as part of this movement that’s just broadly trying to do good, as effectively as you possibly can. It’s kind of intuitive based on the name. And I just try to take ideas seriously and be like, oh, wow, I see an opportunity to do lots and lots of good, and this is in some ways better than a lot of the other ways that people are doing good. So I will do that.
AMOS WOLLEN I’d describe myself as an effective altruist or less than that — just EA aligned — in the sense that I believe in the kind of philosophy Noah just sketched where you’re trying to do the most good you can, or at least do better with fewer resources, and to try and get things done more efficiently in the best way possible. That said, because I’m a student, I can only afford small recurring donations to animal welfare charities. I’m looking for a job that’s broadly in the space, but those are hard to come by.
KLUGER How did you first hear of effective altruism?
BIRNBAUM I first heard of effective altruism when I was on my gap year. I was reading a lot, across a bunch of random subjects, but a particularly large amount of philosophy. And I thought, wow, this consequentialist type of reasoning seems really good. But it’s kind of unfortunate that nobody actually takes philosophy very seriously. People are kind of like, oh yes, I have this or that belief, but nobody actually acts on these beliefs. Very weird. And then I found that there was a group that was doing both of these things. They were taking consequentialist reasoning seriously and they tried to take philosophy in general seriously. So if someone makes a critique, they will actually change how they live in certain ways. The main people I was hearing about at this point were Sam Harris and Tyler Cowen, who also started writing about effective altruism then. I sort of just slid down that path; I read a shit ton of articles myself. And I was like, oh yeah, this is pretty convincing. I should take this pretty seriously with respect to what I’m going to do in college or how I should think about, I don’t know, my career and stuff.
KLUGER You were on a gap year at a Yeshiva, right?
BIRNBAUM Yes.
KLUGER Did that play into it? Yeshiva as a kind of ethical community, theoretically, and then being disappointed?
BIRNBAUM I was definitely disappointed with the courses insofar as I didn’t find them that intellectually challenging, or I just thought that they were talking about things that were probably fake, in some sense. We were discussing Jewish law, and I wasn’t that religious at the time. It didn’t make that much sense. I wanted to be reading things that I thought were interesting. And so I ended up reading a bunch of that. I don’t think I was particularly disappointed by the ethics.
I think there’s something interesting about religion in that people do take ethics a lot more seriously than, say, the average person, though I was still a little disappointed at just how little people take this stuff seriously. When I was religious, I was like, I’m just going to try to learn the Talmud every single second, because this is what God wants me to do. Or, I’m going to become a rabbi, because that’s what you should do if you’re really committed. And I don’t understand how people can make all these claims about being super committed but not actually take it very seriously. There were some people there who did, more than most. But yeah, I think there’s a pretty wide variety of people who take ideas seriously to various degrees.
KLUGER Amos, how did you get into effective altruism?
WOLLEN I think my way in was pretty similar, where there wasn’t just one kind of eureka moment, it was more of a drip, drip, drip. I was quite young when I first heard of Peter Singer’s drowning child thought experiment, I must have been about thirteen. Roughly the idea is you’re walking past a pond and you see a small child drowning in a shallow pond. And the thought is you’re wearing very expensive shoes, or at least moderately priced shoes, it would be a real shame if you had to get them wet, but unfortunately, no one else is around, and so you’re the only one who can fish the child out. It looks like you have a strong moral reason to fish the child out. And then the analogy is for similar amounts of money, you might be able to save children across the seas with effective, say, anti-malarial charities. And assuming you don’t think that physical distance by itself matters morally, it looks like you’re going to have a very hard time showing what’s fundamentally different about these cases. And so it looks like if you’re obligated to save the drowning child, on the face of it, you’re obligated to save children overseas in similar conditions, if there are charities that let you do that. And then the effective altruist movement is just about creating and locating charities that are just that effective.
When I was younger, I was also interested in animal welfare. I was vegan from quite an early age. I had a two-year lapse. There were no good reasons for that. I was just morally weak. I came up with rationalizations, but they weren’t very good in retrospect. And then, when I got more seriously into philosophy around the age of seventeen, these ideas circled back to me. It’s quite hard to avoid them in philosophy, and they just seemed obviously correct to me in some form or another. And so from that age on, early university years, I’ve been pretty much EA.
KLUGER Was there any religious background in how you grew up?
WOLLEN Yeah. I asked to become a Catholic when I was just below the age of fourteen, and I was confirmed around that age. My parents were different flavors of Christian, but I was never raised fully within the church. We’d go to church on holidays; I’d go some Sundays. It wasn’t drilled into me, more of a choice. And I ended up deconverting for complicated philosophical reasons when I was about seventeen. The Catholic Church does have its own strong emphasis on charity. Its own version of an effective altruist mission is that it wants to prevent lots of people from going to hell, and it sees that as part of its work. If you really believe in eternal hell, you might think, yeah, that’s a seriously bad thing that we should be taking steps to prevent. And then it also has a mission with regard to climate change and the global poor and immigration and lots of other things that effective altruists are often concerned with. When I was that age, though, I never really made the connection, and so there wasn’t a strong springboard from one to the other.
KLUGER Were there people that you knew growing up and considered morally serious in a way that you’ve desired to emulate?
WOLLEN Yeah, in particular the priest who did my confirmation, who’s now dead, Father Tom Herbst. He was a very formative person, I think, where you could just tell that he was a moral saint and he was very, very morally serious. He took things lightly, but he was someone that you could trust with anything. I don’t remember too many people like that growing up apart from some teachers and, of course, my parents.
BIRNBAUM The only people that I knew were Orthodox Jews, so the only ideas that you could take that seriously were Orthodox Jewish ideas. And yeah, there were people who were serious to varying degrees. But it didn’t actually seem like what drove them was the ideas. A lot of it seemed extremely memetic: the people they were around were what actually shaped them. It didn’t seem as though people were like, “there’s an idea on the table. I should take this seriously, because it’s an idea and I believe in it wholeheartedly.”
KLUGER Do you think there’s a world where you stay Modern Orthodox or does the restless inquiry not exactly fit with that world?
BIRNBAUM So I don’t think that the restless inquiry does not fit with that world. Even after I wasn’t really religious anymore, I was doing this sort of modern orthodoxy, maybe social orthodoxy, you could call it, where I would keep shabbos, and I’m vegan, so I was basically keeping kosher. And then I just stopped doing this somewhat recently. I think that was largely pragmatic: did I actually feel it was a good thing in my life that was going to make me happy? At some point I thought, hey, I’m definitely missing out on a bunch of this stuff. I could definitely go back to that lifestyle. As for the belief stuff, I think I just don’t buy it. And so that would be pretty hard to go back to.
KLUGER When you think of a sort of sociological portrait of EA, how would you describe it? What kinds of people do you tend to find and what unifies them besides the shared belief in doing good, better?
WOLLEN I think what surprised me going to my first real effective altruist conference was that the people were much more normal than I expected. I went to one that was for Christians specifically, I’m not a Christian myself, but I was giving a talk that was broadly related. It was based in London, and it was just a bunch of very normal, well-meaning, lovely professionals who wanted to do something good, who had an idea for a charity, or who were earning money in the private sector and wanted to figure out ways of donating it and to meet other people with similar ideas. I was expecting a bunch of malnourished, rationalist nerds who couldn’t have normal conversations. And I love those kinds of people, they’re my kind of people. But there were just fewer of them than I was expecting. My sense of the sociology, and Noah can correct me if he thinks this is wrong, is that they tend to be more quantitative than the average person. They tend to be more cerebral, which can be a good thing or a bad thing in conversation. And they tend to be quite young — at the stage in their lives when many people go into other morally charged movements, the age when lots of people get into socialism or climate advocacy and all of those kinds of things — young people full of moral energy. That’s roughly my sense, but Noah would have more to add.
BIRNBAUM Yeah, lots of that strikes me as right. I think that you have different caricatures of people in the effective altruism movement, and you can describe them all a bit differently. One is the type that Amos was describing, the sort of malnourished rationalist who has a bunch of these weird ideas about decision theory and thinks you can probably just solve most problems that way. Some people are literally just well-meaning professionals who maybe read a little bit, were pretty compelled by the ideas, and were like, okay, how can I take this a little more seriously? Some people seem like they’re going to be dedicated to this movement for their entire lives because they think it’s literally the most important thing ever. And these are the types of people you’d expect to do really, really well in other fields; they seem like the people who are very charismatic, but also well-meaning, and they’re super agentic. I think that captures a bunch of the space. I would also say that there tends to be a bit of contrarianism, or something like that. People want to have the take that’s different and interesting and is also right. I don’t think they’re being contrarian for contrarianism’s sake, but a lot of people have this sense that the average person who agrees with someone is just not going to provide that much value to a conversation, while someone who disagrees in a productive way might have more positive impact on it, so it’s good to be contrarian generally.
WOLLEN Speaking of malnourished rationalist nerds, somebody has joined the conversation.
MATTHEW ADELSTEIN Okay. Sorry, I was asleep.
KLUGER Sleep is important. Let me ask you, Matthew — you wrote a piece for The New Critic about how you kind of got into effective altruism, so we can skip past that — on a different note, what is your understanding of the sort of sociology of EA?
ADELSTEIN Lots of people get into EA because they’re interested in making the world a better place, and they realize that some ways of making the world a better place are vastly more effective than other ways, so it sort of makes sense that if you’re concerned about making the world a better place, you want to focus on the ways that do that more effectively rather than less effectively. I think other people get into it where they’re the sort of person who likes to optimize for stuff, and then they begin with the effectiveness part and then think, oh, you know, altruism seems good, and so some people I think are sort of like altruism first and then effectiveness second, and other people are effectiveness first and altruism second, but is there some more specific feature of the sociology that you’re curious about?
KLUGER Well, are there other movements that you deeply respect their moral seriousness?
WOLLEN There are lots of movements that I am fairly strongly opposed to that I think are really morally serious, and I respect the kind of people who are involved in them. For example, I have a lot of secondhand respect for the pro-life movement. I’m very pro-choice, but I used to be pro-life. What strikes me about these people is that they’re quite heavily mobilized about preventing harms to fetuses purely on the basis, it seems, of philosophical arguments. There’s nothing especially emotionally compelling about early embryos; they’ve just been convinced by abstract arguments, roughly speaking, and then they’re interested in investing a lot of time, and sometimes money, and often social capital, and their reputation, and sometimes friendships to prevent this thing that they think is really bad. Even though I think their beliefs are misguided, I view what they’re doing as very morally serious. In general, for people who are very interested in this or that political cause outside of party politics, which tends to be more of an entertaining sport that people watch, I have a lot of respect for the ways in which they’re serious about what they do.
KLUGER With regard to the pro-life movement that you brought up, do you find it difficult to interact with them? Most of the standard partisan Democrats I know, for example, can’t even imagine that pro-life people are morally serious. They would say, how could you restrict a woman’s choice? How do you deal with a movement when you say, wow, I respect that you take this idea very seriously, and yet I radically disagree with you?
WOLLEN Part of it for me is that I used to share their view, and then I changed my mind. So I know what it’s like to hold that particular set of beliefs. Beyond that, I think, especially now I’m doing a post-grad in philosophy and talk to lots of people who are interested in philosophy, you come across people with quite out there ethical views on a wide range of issues, and you get into the habit of just asking people what their argument is, and if they have one, and they can sort of lay out the reasons that they have, it becomes quite easy to take them seriously as somebody who wants to do good, even if you just think they’re completely wrong about everything. The strategy is just to hear people out, not to be snipey and dismissive, and just hear what people have to say in defense of the things that they believe.
KLUGER Okay, I want to read a quote from a piece Amos wrote about Phil Christman, and then we’ll discuss. It’s from the last paragraph. He says:
I have listened to a fair number of left-wing “theory” podcasts (which have, by the grace of God, deterred me from reading “theory” books on which I assume they are based) prattling in self-assured monotone about the dignity and indignities of various types of work; but it’s not often that you are smacked in the face by the moral seriousness of these convictions. In this respect, one Christman is worth a thousand nerds. If you are in the market for an all-purpose book that will mount a convincing, first-order case for laxer immigration laws, social democracy, top-to-bottom wealth redistribution, etc., and you take yourself to have objections to these ideas in need of an answer, this book is not that book. But not every book is that book — nor should they be. Some books are supposed to put you at ease with scrumptious, jocular prose and then violently shake you out of your self-centered, bourgeoisie akrasia, say true things in pretty ways, and force you to take seriously those things which you already believed. And those books are no less worth your time.
Amos, I’m curious about this because I went and listened to Phil Christman on effective altruism, and he was very critical. He seems like a very nice guy, so he put it in very positive terms, but his criticism was essentially that effective altruism is altruism for non-altruistic people. It’s like trying to teach not-nice people to be nice. That was roughly how he put it. Nonetheless, he said, the real lesson to learn from effective altruism is that EAs tithe in a way Christians only claim to. In that sense he was impressed.
What are your thoughts on all this?
WOLLEN Yeah, great. So I definitely agree with Christman’s second point. In some respects, EAs are doing something that Christians aspire to do. On the flip side, I think effective altruists can learn, and have learned, from religious communities in this regard about tithing. The 10% figure that some effective altruists float is a workable percentage of your income that you might aspire to one day give to an effective charity. That maps onto the tithing requirements in Islam. In the Baha’i faith, it’s even more demanding: 19% of your discretionary income. So I think there’s a lot to learn in both directions.
On the first point that he made, about EA being an attempt to teach people who are not altruistic to be altruists: in one sense, that’s good, right? It’s good to make people morally better who are not that way in the first place. I think maybe where he’s getting this idea is from how some effective altruists talk about earning to give. They might approach people who are making a lot of money and aren’t doing very much with it, stuffing it under their mattress, and they might say, look, you can do some good with this money, and maybe you should try and seek a higher paying, more stressful job in order to do even more good. You can do a lot with the money that you have. And so he might be thinking, oh, that’s just what effective altruism is. It’s just reaching out to very evil, very rich billionaires and getting them interested in doing these showy donations to effective charities to soothe their own consciences. But because of the stuff we discussed earlier, that’s just not descriptively true of the sociology of the movement. Billionaires are a tiny fraction of the population in general, and so you’d expect them to be a tiny fraction of the population of effective altruists. So even if you think that 100% of billionaires are evil, that won’t be an indictment of the movement as a whole.
ADELSTEIN Now, regarding whether EA is teaching non-altruists how to do altruism, I don’t think this is right at all. If you care about succeeding in some domain, then you should look at how effective different actions are in that domain. It’s like how, if you want to improve your physical health, it makes sense to look at how much different actions improve your physical health rather than just doing a random selection of things that you’ve heard are healthy. If you want to increase the amount of money in your retirement account, it makes sense to invest in places that will increase the expected amount of money in your retirement account rather than in random stocks with no concern for whether they’ll do so. Now, if you’re concerned about helping people, then you should try to help people as effectively as possible. There are different ways you can help people, and some of them are vastly more effective than others. It is five times better to help five people than to help one person; there are four more people who you get to help if you take that course of action. And insofar as you value the people you’re helping equally, there is a very strong moral reason to try to help more people rather than fewer. This sounds very intuitive when you say it at a high level of abstraction, but almost no one does it. Almost no one, when they’re looking through their charitable donations or deciding what career they’re going to take, thinks seriously about how effective the different options will be. So in that sense, I don’t think EA is some weird aberration that tries to get non-altruistic people to be altruistic. I think EA is the purest form of altruism. It’s altruism that’s solely dedicated to helping people as effectively as possible rather than to promoting whatever cause you happen to feel some sort of emotional connection to.
BIRNBAUM Yeah, I would say that this is not true. I really agree with what Matthew was saying earlier about there being some people in the movement who are just into optimizing things. There are people in my life who, I don’t know, want to maximize their longevity, or who go to the gym every single day and try to find the precise workouts that will make them gain the most muscle mass, these real strong optimizers. So when these people come to altruism, I do think you get a sense of, oh, here’s another thing you can maximize. And, in that sense, they do. But the movement also gets these regular do-gooders who are just like, oh, I’m actually pretty sympathetic to the idea that doing more good is more good. So in that sense, I don’t think that this point is very correct. On the tithing point: yeah, I think 10% was sort of a random number. It could have been 11%, it could have been 9%, it could have been, you know, 17%. It doesn’t matter that much. 10% is a round number, a bunch of religions use it, so you might as well just do it. That’s probably why someone came up with it at some point and people were like, yeah, that sounds right.
KLUGER That leads to the next question, which is this problem of akrasia. Let’s say you get someone, you sit them down, you get them to read all your Substacks and they’re persuaded. They can’t think of strong counterarguments and they don’t feel that they’re even missing anything. But they don’t take action. What would you recommend next?
ADELSTEIN Well, the first thing I would recommend is that they begin changing their behavior so that it corresponds with their values. In terms of how I would convince them to do this: if they sincerely want to change their behavior, just go on the internet and start donating some money to effective charities. It’s very easy to do; you can do it in five minutes, and you can make it a recurring donation so you don’t keep having to build up the willpower. If they can’t get themselves to do that, spending time around other morally motivated EAs might increase the probability of them doing it, as might spending time reading more about the arguments so that they internalize them at a deeper level. A lot of people have this idea that giving to effective charities is a real sacrifice, but my read of the charitable giving studies is that if you give more to charity, generally you’ll be happier, because you’ll know that your life is making a big difference to other people’s lives, and a life that’s self-centered is generally not as happy as a life that’s focused on promoting the interests of others. If none of this convinces people, I would say to read Reasons and Persons and the arguments for why you should count other people’s interests as equal to your own, and why that’s kind of rationally mandatory, so that if you’re failing to do that, you’re just being straightforwardly irrational. But yeah, it’s a hard thing to convince people to do the things that they know they should do. In lots of cases, if you’re sufficiently convinced that you ought to do a thing, you’ll generally be able to talk yourself into doing it to at least some degree. So start small: start giving some amount of money to effective charities, the more the better. That would be my basic recommendation.
BIRNBAUM I think that when we’re talking about this person who is sort of convinced by the ideas but doesn’t want to do them, there are two different types of people that we could be talking about. One is the person who’s like, yes, I actually just agree with all of this stuff. And I’m not going to do it, and I don’t really want to do it, or something like that. And then there’s another person that says, I agree with all these ideas, I actually want myself to be doing it, but it’s just too hard. And in practice, I don’t find myself doing it. So I think that these are sort of two different things that we can separate.
For the second person, I would just say that this is kind of just like going to the gym. It’s like you want to get up from bed and just go to the gym right away. And this is actually kind of hard to do motivationally for various reasons. And so you could just tell them a bunch of advice — join a club or get a gym buddy — the sort of thing that Matthew was saying where you get another person and have them motivate you a little bit or be in a circle where everyone’s motivated and going to say, hey, you know, did you do this thing this morning? Did you do this thing? You can sort of say at the end of the year, hey, did you donate this money? Did you get this thing, or did you finish that project that you were trying to finish by this date or whatever? I think that this is pretty motivationally helpful.
For the sort of person that believes the thing but doesn’t really want to act in accordance with the position and obviously doesn’t actually do it, maybe I would just not spend that much effort trying to convince this one person because it’s not that helpful on the margin. I would just say, hey, if you believe a bunch of these things, yeah, you should probably do one of the readings, perhaps you just haven’t internalized this stuff. But if they just read tons and tons of these arguments and still don’t feel very compelled to actually do the thing, then I’m not sure that from an outside perspective, without breaking their autonomy, I can actually do anything about it.
KLUGER Yeah. Amos, do you have anything to add?
WOLLEN I agree with all of the foregoing, especially that people should read Reasons and Persons very slowly and meditate on the words [laughs]. I read in a biography of Derek Parfit, the author of Reasons and Persons, that at least one Buddhist monastery was reading sections of Reasons and Persons and meditating on its words. I think that’s very simple and I think everyone should do it. I think, yeah, I just want to hammer home again the point Matthew made about recurring donations. We three might think, you know, it’s good to, say, change your diet and make that a regular thing that you do every day or consider pushing your career in a certain direction that will end up entailing that there’s one type of action that you’re choosing to do every single day. And I get how, you know, people would feel like that’s a lot of willpower, that’s a lot of effort. But, you know, the good thing about recurring donations is that it’s a one-time action that takes five minutes and then you can forget about it. And so, on the order of things that you can encourage yourself to do, if you can’t bring yourself to do one kind of thing that requires exerting effort every single day, okay, cool. But, you know, there are things that don’t require exerting willpower every single day that will just take five minutes.
KLUGER This is the last question, and it’s a two-parter. The philosopher Bernard Williams has a thought experiment: you see two people drowning, and one is your mother. If you pause to work out a moral justification for saving your mother rather than the stranger, Williams says, you are having “one thought too many.” Where do you personally draw the line on maximizing efficiency? Where is it one thought too many?
BIRNBAUM I think that there are two explanations that you could give for this. One is intellectual and one is a less intellectual one. The intellectual one is this point about moral uncertainty. Lots of people have this idea that you have these special obligations to people that are closer to you, and this just gives you some sort of moral reason to prioritize these people to some degree. I largely don’t buy this view. I think that a lot of these intuitions come from places that are evolutionary or culturally evolutionary or something like this. But I don’t put absolutely zero weight on them. Lots of smart people believe these things. And so, just by virtue of that, I just think that my family should have more moral weight. And how you even do this sort of trade-off of — I’m going to care about my family versus I’m going to care about my efficiency — I don’t really know. I haven’t thought about this a lot. I don’t have a very strong take. But my intuition would be, like, unless there’s a ridiculous amount of people, I’m just going to save my family.
The second point is that there is this thought experiment in EA called “the crazy train,” where there are various stops that you could take. So, you know, the initial thing that you do is choosing global health charities as opposed to other charities. And that’s going to be a hundred or a thousand times more effective. And you’re, like, oh, yeah, I kind of buy that. And then if I consider animals even a little bit, that should make me care way more about animals. And so I should donate my money to animal charities. And then you take that stop on the crazy train. And then you could start talking about future people. If you could do anything to prevent extinction, that should vastly outweigh basically anything else that you could do. And so you take that stop on the crazy train. And then you could start talking about, like, future digital shrimp. And that’s, like, the next stop on the crazy train. No, that’s not actually true. But I think that broadly sketches a point where there’s a bunch of stops that you could take and you could still be an effective altruist.
And yeah, in the family case, I largely think that the reason that’s actually motivating me at the end of the day — why I would pick my mom over some random stranger — is probably not this point about moral uncertainty. I think it’s a nice intellectual point that probably actually favors doing that. But the thing that’s going to really drive me is just going to be, like, I’m going to take one of these stops on the crazy train. And you know, morally, ideally, I should be taking this other stop. And you can both do that and consider yourself an effective altruist.
WOLLEN I have a few inchoate thoughts about Williams’s “one thought too many” idea. So there’s a good book by Larry Temkin called Doing Good in a World of Need. And one thing that he remarks is that in discussions of this “one thought too many” argument, some philosophers get a bit uncomfortable. And the reason they get a bit uncomfortable is that it hits a bit close to home, because they know that in all of these kinds of cases, they probably would have one thought too many, right? They’d see two people drowning, one of them their mother, and they’d, you know, start thinking, should I be morally impartial or should I care about my family more? But you might think this is just a dispositional hazard of thinking about these questions all the time. So by analogy, if somebody works in law enforcement, they’re going to be much more suspicious of people, because it’s just part of the job. And so, you know, they might see a perfectly ordinary person and they might suddenly think, oh, is this person a criminal? And you might think in that case, that’s one thought too many, but it’s probably just a dispositional hazard. They have to think about it all the time for the kind of project they’re engaged in, their day job. And so they end up having one thought too many. I guess the idea that’s motivating Williams is that you shouldn’t just care about which types of actions are best. You should care about which type of character is best. And the thoughts that jump uninvited into your mind when you confront a moral dilemma are an indication of what your character is like. And so, yeah, it could be that we should care about both things. On the one hand, we should care about which action is best. And on the other hand, we should care about what type of character we should have, and then use which thoughts are jumping into our minds unbidden as a proxy for what kind of character we have.
And I agree with Noah that you can and should save your mother in the case you described.
KLUGER So where do you draw the line personally in terms of EA? Just in your habits? Where do you sacrifice efficiency, or something like that?
WOLLEN Where do I sacrifice efficiency? Maybe one thing I do that sacrifices efficiency is this: there are maybe quite compelling arguments for picking whatever charity you think has the highest expected value and putting all your eggs in that basket, whereas I’m quite loosey-goosey and uncertain, and so I’d be more inclined to put my eggs in different baskets even if they don’t all have the highest expected return. So I might think that there are definitely charities more effective than, say, the Against Malaria Foundation, but I might just think there’s something really intuitively satisfying about the idea that when I die, even if all of the stuff about shrimp and animal welfare and far-future people is crazy, the worst thing I’ve done is save a life. I mean, it’ll be a long time before I’ve actually saved a life, because I haven’t donated anywhere near enough — I haven’t had enough money to do that — but you get my point, right? So that’s one way in which it manifests, but other than that, I’m quite happy on the crazy train. It’s where I spend most of my time. I like thinking about crazy things like the possibility that AIs might suffer one day, blah, blah, blah, and I don’t feel uncomfortable about that.
ADELSTEIN I don’t think Bernard Williams’s objection is correct. I agree with Noah that there are sensible reasons to save your mother instead of a stranger. Some of these reasons have to do with moral uncertainty. Some of these reasons will have to do with having a high credence in a moral theory on which you have especially strong special obligations. But the idea that you shouldn’t even think about this morally, that if you reflect on your moral reasons to do this you’re kind of behaving wrongly — that what you’re supposed to do is just immediately save the member of your family — I don’t agree with that. I think one ought to consider the moral reasons for and against particular actions. If one doesn’t do that, what they’ll end up doing is just whatever they feel a really strong emotional pull to do. And it’s not always correct to do whatever you feel a very strong emotional pull to do. Suppose that instead of it being your parent versus, you know, another person, it was your parent versus five hundred other people, or five thousand — surely there’s some point at which you ought to save the other people. And yet, by Williams’s logic, it seems like, you know, anytime you’re thinking about these trade-offs, you’re acting in a way that’s morally defective. And so I don’t think this is right. I think it makes sense to think things through morally, rather than just relying on whatever you have a really strong non-moral motivation to do.
In terms of how this intersects with effective altruism, well, I don’t know that it does that much. I mean, I think people have all sorts of projects that they care about in their life. And we can argue about whether people ought to prioritize effective charitable giving over their other projects. But in practice, basically no one — virtually no one — subordinates all their other projects to effective charitable giving. And so in practice, I think what people can be expected to do is give some non-trivial portion of their life to helping other people. And, you know, I think that’s the sort of ask one should have when recommending that other people be effective altruists and take high-impact careers. But maybe the Peter Singer argument is right, that you ought to spend every waking moment doing as much good as possible. In practice, though, no one’s going to do that. And so I think there are people who spend too much time arguing about this really extreme implication and much less time arguing about the very practical, reasonable conclusion that you can make lots of people’s lives better at a fairly minimal personal cost. Virtually no one’s doing this. So more people ought to do it.










