Director & Movement Builder - AI Safety ANZ
Advisory Board Member (Growth) - Giving What We Can
The catchphrase I walk around with in my head regarding the optimal strategy for AI Safety is something like: Creating Superintelligent Artificial Agents* (SAA) without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring such technology into existence, a global moratorium is required (*we already have AGI).
I thought it might be useful to spell that out.
That seems fair enough!
Hi Johannes! Thanks for the suggestion :) I'm not sure I'd want it in the middle of a video call, but maybe in a forum context like this it could be cool?
Putting my EA Forum comment here:
I'd like to make clear to anyone reading that you can support the PauseAI movement right now, simply because you think it is useful right now. And then in the future, when conditions change, you can choose to stop supporting the PauseAI movement.
AI is changing extremely fast (e.g. technical work was probably our best bet a year ago; I'm less sure now). Supporting a particular tactic/intervention does not commit you to an ideology or a team forever!
There have been multiple occasions where I've copied and pasted email threads into an LLM and asked it things like:
I really want an email plugin that basically brute-forces rationality INTO email conversations.
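To make that concrete, here's a minimal sketch of what the core of such a plugin might look like, assuming the OpenAI Python client. The model name, prompt wording, and `analyse_thread` helper are all placeholders of mine, not an existing plugin:

```python
# A minimal sketch of the "rationality plugin" idea, using the OpenAI
# Python client. The model name, prompt wording, and analyse_thread
# helper are my own placeholders, not an existing plugin.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RATIONALITY_PROMPT = (
    "Below is an email thread. For each participant, identify: (1) their "
    "actual claims, (2) any unstated assumptions, and (3) points where "
    "they are talking past each other. Then suggest the most charitable, "
    "decision-relevant reply."
)

def analyse_thread(thread_text: str) -> str:
    """Ask the model to referee an email thread for clarity and charity."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model would do
        messages=[
            {"role": "system", "content": RATIONALITY_PROMPT},
            {"role": "user", "content": thread_text},
        ],
    )
    return response.choices[0].message.content

# Usage: print(analyse_thread(open("thread.txt").read()))
```

The interesting design work would all be in the system prompt; the plumbing is trivial.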
If you're into podcasts, the Very Bad Wizards guys did an ep on this essay, which I enjoyed: https://verybadwizards.com/episode/episode-227-a-terrible-master-david-foster-wallaces-this-is-water
Alcoholics are encouraged not to walk past liquor stores. Basically, physical availability is the biggest lever: keep your phone / laptop in a different room when you don't absolutely need them!
If GPT-5 actually comes with competent agents, then I expect this to be a "Holy Shit" moment at least as big as ChatGPT's release. So if ChatGPT has been used by 200 million people, then I'd expect that to at least double within 6 months of GPT-5's release, maybe triple. That "Holy Shit" moment means a greater share of the general public learning about the power of frontier models. With that will come another shift in the Overton Window. Good luck to us all.
I recently discovered the idea of driving all blames into one, which immediately resonated with me. It is relatively hardcore; the kind of thing that would turn David Goggins into a Buddhist.
Gemini did a good job of summarising it:
This quote by Pema Chödrön, a renowned Buddhist teacher, represents a core principle in some Buddhist traditions, particularly within Tibetan Buddhism. It's called "taking full responsibility" or "taking self-blame" and can be a bit challenging to understand at first. Here's a breakdown:
What it Doesn't Mean:
What it Does Mean:
Analogy:
Imagine a pebble thrown into a still pond. The pebble represents the external situation, and the ripples represent your emotional response. While you can't control the pebble (the external situation), you can control the ripples (your reaction).
Benefits:
Here are some additional points to consider:
Something someone technical and interested in forecasting should look into: can LLMs reliably convert people's claims into a % confidence via sentiment analysis? I believe this would be useful for forecasters (and for rationality in general).
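To gesture at the shape of the experiment, here's a rough sketch, again assuming the OpenAI Python client. The prompt, model name, and `claim_to_confidence` helper are hypothetical, just to show what a first pass might look like:

```python
# A rough sketch of the claim-to-confidence experiment. The prompt,
# model name, and claim_to_confidence helper are hypothetical, chosen
# only to illustrate the idea.
import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def claim_to_confidence(claim: str) -> float:
    """Estimate how confident the speaker sounds, as a percentage 0-100."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; JSON mode needs a model that supports it
        messages=[
            {
                "role": "system",
                "content": (
                    "Given a statement, estimate how confident the speaker is "
                    "that the statement is true, as a percentage. Reply with "
                    'JSON: {"confidence": <number between 0 and 100>}.'
                ),
            },
            {"role": "user", "content": claim},
        ],
        response_format={"type": "json_object"},  # force parseable output
    )
    return float(json.loads(response.choices[0].message.content)["confidence"])

# e.g. claim_to_confidence("I'm fairly sure GPT-5 ships this year")  # -> ~75.0
```

A natural way to test reliability would be to run this over statements with known hedging phrases ("almost certainly", "unlikely", etc.) and check the numbers against how calibrated forecasters conventionally map those words to probabilities.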