Nov 12 - 18
Marginal Funding Week
A week for organisations to explain what they would do with marginal funding.
Dec 23 - 31
Donation Election
A crowd-sourced pot of funds will be distributed amongst three charities based on your votes. Continue donation election conversations here.
$25,598 raised
Dec 16 - 22
Pledge Highlight
A week to post about your experience with pledging, and to discuss the value of pledging.
What a wonderful piece! I've always wondered why some people choose not to share their donations. Being perceived as a "bragger" in exchange for potentially influencing people around you to donate always sounded like a good trade-off. Your points clarified a bunch of things here. Thank you!
Probably depends on how you describe and frame it. How do you explain why you are telling them this? If you're willing, you might run a trial on this: divide your clients into two random groups and send this message to half, then see if you observe any difference (try to keep track of the numbers as well as the more qualitative outcomes, like how they respond to the card).
Interesting idea!

1. I recommend a different name; when I saw this, I assumed it was about pledging around left-wing causes.
2. I feel like the spirit of the pledge would be to increase the 10% part with inflation. If you get a pay raise in line with inflation, it seems silly to have to give half of that, since your real take-home pay is unchanged. Even the Further Pledge is inflation-linked.
Dec 23 - 31
Donation Celebration
Add a heart to the banner to show that you’ve completed your annual donations. You can also comment saying where you donated.

Quick takes

Adult film star Abella Danger apparently took a class on EA at the University of Miami, became convinced, and posted about EA to raise $10k for One for the World. She was PornHub's most popular female performer in 2023 and has ~10M followers on Instagram. Her post has ~15k likes, and the comments seem mostly positive. I think this might be the class that @Richard Y Chappell🔸 teaches? Thanks Abella, and kudos to whoever introduced her to EA!
I didn't want to read all of @Vasco Grilo🔸's post on the "meat eating" problem and all 80+ comments, so I expanded all the comments and copy/pasted the entire webpage into Claude with the following prompt: "Please give me a summary of the authors argument (dot points, explained simply) and then give me a summary of the kinds of push back he got (dot points, explained simply, thematised, giving me a sense of the concentration/popularity of themes in the push back)"

Below is the result (the Forum team might want to consider how posts with large numbers of comments can be read quickly):

* The author claims that saving lives in developing countries might cause more harm than good in the short term because:
  * When people are saved from death, they consume animal products
  * The suffering of farm animals (especially chickens and farmed fish/shrimp) from being raised and killed outweighs the happiness of the human life saved
  * Using specific calculations, they estimate that one average person causes 15.5 times more animal suffering than human happiness globally (with higher ratios in some countries)
* The author specifically criticizes two organizations:
  * GiveWell (for granting $1.09 billion without considering animal welfare impacts)
  * Ambitious Impact (for helping start 8 organizations that save lives without considering animal impacts)
* The author suggests these organizations should:
  * Be more transparent about why they ignore animal welfare effects
  * Focus on interventions that don't increase mortality (like mental health)
  * Offset harm to animals by funding animal welfare projects

Main themes in the pushback (ordered by rough frequency/engagement):

* Moral/Philosophical Objections (Most Common):
  * Rejecting the premise that saving human lives could be net negative
  * Viewing it as morally repugnant to let children die because they might eat meat
  * Arguing that we shouldn't hold people responsible for future choices they haven't made yet
I feel pretty disappointed by some of the comments (e.g. this one) on Vasco Grilo's recent post arguing that some of GiveWell's grants are net harmful because of the meat eating problem. Reflecting on that disappointment, I want to articulate a moral principle I hold, which I'll call non-dogmatism. Non-dogmatism is essentially a weak form of scope sensitivity.[1]

Let's say that a moral decision process is dogmatic if it's completely insensitive to the numbers on either side of a trade-off. Non-dogmatism rejects dogmatic moral decision processes. A central example of a dogmatic belief is: "Making a single human happy is more morally valuable than making any number of chickens happy." The corresponding moral decision process would be, given a choice between spending money on making a human happy or making chickens happy, spending the money on the human no matter how many chickens could be made happy. Non-dogmatism rejects this decision-making process on the basis that it is dogmatic.

(Caveat: this seems fine for entities that are totally outside one's moral circle of concern. For instance, I'm intuitively fine with a decision-making process that spends money on making a human happy instead of making sure that a pile of rocks doesn't get trampled on, no matter the size of the pile of rocks. So maybe non-dogmatism says that so long as two entities are in your moral circle of concern -- so long as you assign nonzero weight to them -- there ought to exist numbers, at least in theory, for which either side of a moral trade-off could be better.)

And so when I see comments saying things like "I would axiomatically reject any moral weight on animals that implied saving kids from dying was net negative", I'm like... really? There are no empirical facts that could possibly cause the trade-off to go the other way?

Rejecting dogmatic beliefs requires more work. Rather than deciding that one side of a trade-off is better than the other no matter the underlying…
Would it be feasible/useful to accelerate the adoption of hornless ("naturally polled") cattle, to remove the need for painful dehorning?

There are around 88M farmed cattle in the US at any point in time, and I'm guessing about an OOM more globally. These cattle are for various reasons frequently dehorned -- about 80% of dairy calves and 25% of beef cattle are dehorned annually in the US, meaning roughly 13-14M procedures.

Dehorning is often done without anaesthesia or painkillers and is likely extremely painful, both immediately and for some time afterwards. Cattle horns are filled with blood vessels and nerves, so it's not like cutting nails. It might feel something like having your teeth amputated at the root.

Some breeds of cows are "naturally polled", meaning they don't grow horns. There have been efforts to develop hornless cattle via selective breeding, and some breeds (e.g., Angus) are entirely hornless. So there is already some incentive to move towards hornless cattle, but probably a weak incentive, as dehorning is pretty cheap and infrequent. In cattle, there's a gene that regulates horn growth, with the hornless allele being dominant. So you can gene edit cattle to be naturally hornless. This seems to be an area of active research (e.g.).

So now I'm wondering, are there ways of speeding up the adoption of hornless cattle? If all US cattle were hornless, >10M of these painful procedures would be avoided annually. For example, perhaps you could fund relevant gene editing research, advocate to remove regulatory hurdles, or incentivize farmers to adopt hornless cattle breeds?

Caveat: I only thought and read about all this for 15 minutes.
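The 13-14M figure can be sanity-checked with back-of-the-envelope arithmetic. The post gives only the percentages (80% of dairy calves, 25% of beef cattle), so the annual calf-crop numbers below are my own illustrative assumptions, roughly order-of-magnitude for the US, not figures from the post:

```python
# Back-of-the-envelope check of the ~13-14M annual dehorning estimate.
# Calf-crop sizes are assumptions for illustration, not sourced data.
dairy_calves_per_year = 9_000_000    # assumed US dairy calf crop
beef_calves_per_year = 24_000_000    # assumed US beef calf crop

dairy_dehorned = 0.80 * dairy_calves_per_year  # ~80% of dairy calves dehorned
beef_dehorned = 0.25 * beef_calves_per_year    # ~25% of beef cattle dehorned

total = dairy_dehorned + beef_dehorned
print(f"{total / 1e6:.1f}M procedures per year")  # → 13.2M, inside the 13-14M range
```

Under these assumed inputs the estimate lands at ~13.2M, consistent with the post's 13-14M range; different calf-crop splits shift the total but not the order of magnitude.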
Isn't mechinterp basically setting out to build tools for AI self-improvement?

One of the things people are most worried about is AIs recursively improving themselves. (Whether all people who claim this kind of thing as a red line will actually treat it as one is a separate question for another post.) It seems to me like mechanistic interpretability is a really promising avenue for exactly that.

Trivial example: Claude decides that the most important thing is being the Golden Gate Bridge. Claude reads up on Anthropic's work, gets access to the relevant tools, and does brain surgery on itself to turn into Golden Gate Bridge Claude.

More meaningfully, it seems like any ability to understand in a fine-grained way what's going on in a big model could be co-opted by an AI to "learn" in some way. In general, the case that seems most likely soonest is:

* Learn in-context (e.g. results of experiments, feedback from users, things like we've recently observed in scheming papers...)
* Translate this into appropriate adjustments to weights (identified using mechinterp research)
* Execute those adjustments

Maybe I'm late to this party and everyone was already conceptualising mechinterp as a very dual-use technology, but I'm here now. Honestly, maybe it leans more towards "offense" (i.e., catastrophic misalignment) than defense! It will almost inevitably require automation to be useful, so we're ceding it to machines out of the gate. I'd expect tomorrow's models to be better placed to make sense of and use mechinterp techniques than humans are - partly just because of sheer compute, but also maybe (and now I'm into speculating on stuff I understand even less) because the nature of their cognition is more suited to what's involved.