7 Comments
Kathy Smith

Are you familiar with Professor Temple Grandin’s methods for humanely slaughtering cattle and alleviating their suffering? The shrimp welfare project is just as laudable.

Joey Bream

Hugely admire BB. Wouldn't have written this without his influence https://substack.com/@joeybream/note/p-178448931?utm_source=notes-share-action&r=1r9nm9

Julian Nelson

I admire your passion for improving animal welfare, but there’s a problem with your portrayal of eating meat. If factory farming were the only method of producing meat, your critique of meat consumption as such would hold. But it’s not. You fail to mention cage-free, pasture-raised, grass-fed, regeneratively raised, etc. There are better alternatives, even if they’re not perfect, and things are moving in the right direction.

But that’s not because of some sentience-based utilitarian calculus that concludes eating meat is wrong; it’s because people know intuitively that we shouldn’t torture animals before eating them. I agree with your defense of animal welfare, but your critique of eating meat conflates industrial abuse with responsible husbandry.

Darshan Venkatesan

Your arguments don't appeal to me specifically. While I do agree that suffering, especially extreme suffering like you describe, is bad for the sufferer, my response would be: why should I care? I think I had that thought when reading this because you refer to the sufferer as a distinct entity. It's easy to rationalize to myself that this is happening to someone who isn't me, and that I should only care about morality insofar as it promotes my best interests.

Luckily, I do have a completely egoistic reason I'm drawn to effective altruism: I find the reincarnation theory plausible. While you think about shrimp being boiled alive, I imagine that one day I myself will be that shrimp who is boiled alive. Thinking about the problem in this way has led me to actually care. Of course I would donate a dollar to prevent myself from being boiled alive 15,000 times!

Jarrod

Have you addressed arguments against animal sentience? E.g., https://curi.us/2545-animal-welfare-overview

If they’re right, you’re wrong about “the unfathomable effectiveness of the SWP” and you’re wasting your donations. Will you write an essay responding to the article’s criticisms, or participate in a debate with its author? (He has a debate policy here: https://www.elliottemple.com/debate-policy)

Bentham's Bulldog

Yes https://benthams.substack.com/p/against-yudkowskys-implausible-position?utm_source=publication-search

Note: the argument for shrimp welfare just depends on the idea that it's not extremely implausible that shrimp are conscious. That's all you need to think it has very high expected value. If you want to know why I think it's not that unlikely that shrimp are conscious, see https://benthams.substack.com/p/betting-on-ubiquitous-pain

Jarrod

I really appreciate your reply. Thank you for sharing those links, but I couldn’t find anything in them that answers the argument I linked to previously, which is rooted in the Popperian/Deutschian view of intelligence and claims that consciousness requires general intelligence. (It also criticizes Yudkowsky’s view of intelligence and of mind design space.)

Also, I think your method of evaluating ideas (in this case, about shrimp consciousness) is mistaken.

Your argument rests on evaluating the *likelihood* or *plausibility* of shrimp consciousness. You use phrases like "not extremely implausible" and "not that unlikely," and you argue that the expected value is high even with a low probability. In the second article you linked, you also mention how "data...provided more and more evidence for consciousness."

This suggests a framework where evidence and arguments add weight, support, probability, or credence to an idea. This way of thinking is a mistake, as explained here by epistemologist Elliot Temple: https://criticalfallibilism.com/introduction-to-critical-fallibilism/#decisive-arguments & https://criticalfallibilism.com/yes-or-no-philosophy-summary/

Instead, ideas should be evaluated in a binary, pass/fail way: an idea is either *refuted* (we know of a decisive criticism against it) or it is *non-refuted* (we don't).

This is why the expected value calculation doesn't work. You're multiplying an enormous number (trillions of shrimp) by a probability that’s assigned to a refuted idea.