This is the weekly visible open thread. Post about anything you want, ask random questions, whatever. ACX has an unofficial subreddit, Discord, and bulletin board, and in-person meetups around the world. 95% of content is free, but for the remaining 5% you can subscribe here. In this week’s news:
1: GiveWell asks me to signal-boost a new contest with $20,000 prize - help them change their minds about some of their cost-effectiveness analyses. This is something I hear people complain about a lot, so I hope some of you will complain about it to the people who will give you $20K for doing that.
2: ACX Grants winner Roger’s Bacon asks me to broadcast the following message:
Seeds of Science, a journal publishing speculative and non-traditional scientific articles, would like to offer itself as a peer-reviewed publishing platform for any analyses using the ACX reader survey data. Visit the website to learn more or contact us at firstname.lastname@example.org
3: ACX Grants winner Trevor Klee asks me to broadcast the following message:
Just launched a crowdfunding/crowdinvesting campaign to cover the rest of the costs of my cat trial for the improved version of cyclosporine. Here's a blog post explaining it and how I've spent money so far, and here's the direct link to the crowdinvesting campaign. Disclaimer [that] investing in any biotech venture is risky, so nobody should be investing their rent money or looking for safe returns.
I would go further and say you should think of this as charitable support for Trevor and his interesting work, and consider any returns at all a bonus.
4: Alexandros Marinos continues examining and arguing against my ivermectin post, and has gotten to the main point - the argument about worms potentially explaining seemingly positive results. People seem to twist my words whenever I express an opinion on this, so I won’t unless/until I’m willing to write a full post, but check it out.
5: Michael Trazzi has an interview with Katja Grace on AI forecasting, part of which is Contra Scott Alexander On Slowing Down AI Progress (a reference to my post Why Not Slow AI Progress?). I am not sure she is actually contra me - I meant for the post to be an overview of different opinions rather than a strong defense of one side - but it’s interesting and worth a read (or, if you’re that kind of person, a watch or a listen).
6: Gary Marcus has a response to my recent AI bet. I want to make it clear that whatever the merits of my bet or his arguments, Google did not “snooker” me. They had no part in this: I went around begging for someone to run my prompts through PARTI and Imagen, one of their employees asked their bosses’ permission and then agreed to do so, and ran them exactly as I asked. Any fault is entirely mine. I’m insisting on this pretty hard because I’m grateful that Google will sometimes respond to random requests by amateurs, and accusing them of deliberate deception in response burns their willingness to do that. As for everything else: I wrote “without wanting to claim that Imagen has fully mastered compositionality, I think it represents a significant enough improvement to win the bet, and to provide some evidence that simple scaling and normal progress are enough for compositionality gains”. I stick to the “some evidence” claim, and I feel like I was pretty open about exactly how much/little evidence it was (Google sent me ten examples per prompt; I showed you four representative ones, but the extra six don’t change much). I agree Marcus makes some useful common sense claims on how sure to be after five examples.
7: Sorry about the weather today for the Bay Area meetups! We are still planning to hold the Berkeley meetup at the Rose Garden Inn starting at 1, and SF started at 11 and has announced their rain plan here. Next week will be meetups in Amman on the 20th, Philadelphia on the 22nd, Paris on the 23rd, Zurich + Edinburgh + Pittsburgh + Copenhagen on the 24th, and London + Perth + Waterloo on the 25th - see details here.
Open Thread 242