Ben Pace

I'm an admin of this site; I work full-time on trying to help people on LessWrong refine the art of human rationality. (Longer bio.)

I generally feel more hopeful about a situation when I understand it better.

Sequences

AI Alignment Writing Day 2019
Transcript of Eric Weinstein / Peter Thiel Conversation
AI Alignment Writing Day 2018
Share Models, Not Beliefs

Comments

I wouldn't call this "AI lab watch." "Lab" has the connotation that these are small projects instead of multibillion dollar corporate behemoths.

This seems like a good point. Here's a quick babble of alts (folks could react with a thumbs-up on ones that they think are good).

AI Corporation Watch | AI Mega-Corp Watch | AI Company Watch | AI Industry Watch | AI Firm Watch | AI Behemoth Watch | AI Colossus Watch | AI Juggernaut Watch | AI Future Watch

I currently think "AI Corporation Watch" is more accurate. "Labs" feels like a research team, but I think these orgs are far far far more influenced by market forces than is suggested by "lab", and "corporation" communicates that. I also think the goal here is not to point to all companies that do anything with AI (e.g. midjourney) but to focus on the few massive orgs that are having the most influence on the path and standards of the industry, and to my eye "corporation" has that association more than "company". Definitely not sure though.

What's the chance of a 2nd LessOnline?

Um, one part of me (as is not uncommon) really believes in this event and thinks it's going to be one of the best effort-investments Lightcone has ever made (though this part of me currently has one or two other projects and ideas that it believes in maybe even more strongly); that part of me is like "yeah this should absolutely happen every year", though as I say I often get this feeling about projects that end up looking different from how I dreamed them when they finally show up in reality. I think that part would feel validated by the event turning out to be awesome and people finding it worthwhile to come. Then there's the question of how many resources Lightcone actually has, whether we'll successfully fundraise, and whether this will be one of the few projects we're investing a few staff-months in a year from now. I think my probabilities just went from 80% to 50% to 99% to... 30%. Overall it depends on how good this event is, which varies on a log scale.

I think there are worlds where we do it again and invest less effort into it, and also worlds where we do it again and invest more effort. I think there's also a bunch of worlds where we're happy about this event but try a subtly different one next time (e.g. a teammate and I generated like 5 other serious event contenders before this one, including things more like workshops or academic conferences than large festivals, and perhaps we'll try a different thing next). I think I like that this event is essentially open-invite and trying to be more big-tent, and I hope to do more things like this, so that even if we change what sorts of events we run, anyone will just be able to buy a ticket. There's also worlds where we stop exploring having an events team for our campus and stop running events.

As one datapoint: in 2021 I organized a 60-person private event called the Sanity & Survival Summit for rationalists and folks working professionally on x-risk stuff, and I thought we'd maybe make that a yearly thing. A year later we sort of last-minute/impromptu ran another version of it called Palmcone (it was in the Bahamas) for 80-100 people, and then we made the Lightcone Offices to try to get a more permanent version of the EA/x-risk thing in the Bay, and then we scrapped the whole thing as we disinvested from the professional x-risk/EA ecosystem. That's a possible trajectory things could take.

  1. I anticipate the vast majority of people going to each of the events will be locals to the state and landmass respectively, so I don't think it's actually particularly costly for them to overlap.
  2. That's unfortunate that you are less likely to come, and I'm glad to get the feedback. I could primarily reply with reasons why I think it was the right call (e.g. helpful for getting the event off the ground, helpful for pinpointing the sort of ideas+writing the event is celebrating, I think it's prosocial for me to be open about info like this generally, etc) but I don't think that engages with the fact that it left you personally less likely to come. I still overall think if the event sounds like a good time to you (e.g. interesting conversations with people you'd like to talk to and/or exciting activities) and it's worth the cost to you then I hope you come :-)

I'm probably missing something simple, but what is 356? I was expecting a probability or a percent, but that number is neither.

Does anyone know if it's typically the case that people under gag orders about their NDAs can talk to other people who they know signed the same NDAs? That is, if a bunch of people quit a company and all have signed self-silencing NDAs, are they normally allowed to talk to each other about why they quit and commiserate about the costs of their silence?

Preface: I think this comment will be pretty unpopular here.

I think this is a very unhelpful frame for any discussion (especially so the more high-stakes it is) for the reasons that SlateStarCodex outlines in Against Bravery Debates, and I think your comment would be better with this removed.

Added: I appreciate the edit :)

Yep! My guess is I will send one out to people who bought tickets next week, along with various spreadsheets for signing up for activities (e.g. giving a lightning talk).

(I personally strongly prefer Slack for a bunch of UI reasons, including threading, and especially because I always find the conversational culture on Discord disorienting, though I know Manifest has a community Discord so it might be worth using Discord.)

Launched a few days ago, the plan is:

  • Kids tickets are $50
  • There's daycare purchasable on-site from 10am to 7pm, for like $10/hour if you book ahead of time or $30/hour if you use it on the day
  • If you want to be connected with a nanny outside of those hours, we have a service that can help with that at $45/hour.

Happy to get feedback on this, still figuring out what exactly helps parents and how to set it up right.

[Added April 28th: In case someone reads my comment without this context: David has made a number of worthwhile contributions to discussions of biological existential risks (e.g. 1, 2, 3) as well as worked professionally in this area and his contributions on this topic are quite often well-worth engaging with. Here I just intended to add that in my opinion early on in the covid pandemic he messed up pretty badly in one or two critical discussions around mask effectiveness and censoring criticism of the CDC. Perhaps that's not saying much because the base rate for relevant experts dealing with Covid is also that they were very off-the-mark. Furthermore David's June 2020 post-mortem of his mistakes was a good public service even while I don't agree with his self-assessment in all cases. Overall I think his arguments are often well-worth engaging with.]

I'm not in touch with the ground truth in this case, but for those reading along without knowing the context, I'll mention that it wouldn't be the first time that David has misrepresented what people in the Effective Altruism biorisk professional network believe.[1]

(I will mention that David later apologized for handling that situation poorly and wasting people's time[2], which I think reflects positively on him.)

  1. ^

    See Habryka's response to Davidmanheim's comment here from March 7th 2020, such as this quote.

    Overall, my sense is that you made a prediction that people in biorisk would consider this post an infohazard that had to be prevented from spreading (you also reported this post to the admins, saying that we should "talk to someone who works in biorisk at at FHI, Openphil, etc. to confirm that this is a really bad idea").

    We have now done so, and in this case others did not share your assessment (and I expect most other experts would give broadly the same response).

  2. ^

    See David's own June 25th reply to the same comment.

The first sounds like the sort of thing that turns out to be surprisingly useful (nobody ever gives me health advice). Mm, maybe folks can agree-react to this sentence if they too want to go to such a session?
