Épiphanie Gédéon

Comments

Remember, while the ARA models are trying to survive, there will be millions of other (potentially misaligned) models being deployed deliberately by humans, including on very sensitive tasks (like recursive self-improvement). These seem much more concerning.

So my main personal worry is that an ARA agent might be deployed with a goal like "clone yourself as much as possible." In that case, the AI does not really have to outcompete other models; it only has to pay for itself and keep spreading, for instance by copying itself onto local machines. This scenario is worrisome in that the AI might stay mostly dormant and yet already be hard to kill. If the AI furthermore has some ability to avoid detection and to adapt or modify itself, then I really worry that its goals will evolve and get selected to converge progressively toward a full takeover (though it may also plateau), and that we will be completely oblivious to it for most of that time, as there won't really be any incentive to detect or fight it thoroughly.

Of course, there is also the scenario of a chaos-GPT-style ARA agent, and I worry this one would kill many people without us ever truly shutting it down, or at least it might take a while before we can.

All in all, I think this is more a question of costs and benefits than of how likely it is. For instance, implementing a Know Your Customer policy on all providers right now seems quite feasible and would slow down the initial steps of an ARA agent a lot.

I feel like the main crux of the argument is:

  1. Whether an ARA agent plateaus or goes exponential in terms of abilities and takeover goals.
  2. How much time it will take for an ARA agent that takes over to fully take over after being released.

I am still very unsure about 1; I can imagine many scenarios where the endemic ARA just stagnates and never really transforms into something more.

However, for 2, I feel like you have a model in which it is going to take a long time for such an agent to really take over. I am unsure about that, but even if that were the case, my main concern is that once the seed ARA is released (and it might be only very slightly capable of adaptation at first), it is going to be extremely difficult to shut it down. If AI labs advance significantly toward superintelligence, it might not be too late to implement an AI pause; but if such an agent has already been released, there is not going to be much we can do about it.

Separately, I think the "natural selection favors AIs over humans" argument is a fairly weak one; you can find some comments I've made about this by searching my twitter.

I would be very interested to hear more. I didn't find anything from a quick search of your Twitter; do you have a link or pointer where I could read more about counterarguments to "natural selection favors AIs over humans"?

Thanks a lot for the kind comment!

To scale this approach, one will want to have "structural regularizers" towards modularity, interoperability and parsimony

I am unsure of the formal architecture or requirements for these structural regularizers you mention. I agree with using shared building blocks to speed up development and verification. I am unsure credit assignment would work well for this; maybe in the form of "the more a block is used in code, the more we can trust it"?
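To make concrete what I mean by that heuristic, here is a toy sketch (the names, counters, and update rule are all my own invention, not something from your post): each time a shared building block is reused and its tests still pass, its trust score goes up.

```python
# Hypothetical sketch of the usage-based trust heuristic mentioned above.
# All names and the scoring rule are placeholders I made up for illustration.
from collections import defaultdict

usage_counts = defaultdict(int)    # block name -> number of successful reuses
failure_counts = defaultdict(int)  # block name -> number of observed failures

def record_use(block: str, passed_tests: bool) -> None:
    """Update counts after a block is reused in some generated program."""
    if passed_tests:
        usage_counts[block] += 1
    else:
        failure_counts[block] += 1

def trust(block: str) -> float:
    """Crude trust estimate: successes / (successes + failures + 1)."""
    used, failed = usage_counts[block], failure_counts[block]
    return used / (used + failed + 1)

record_use("sort_moves_by_value", passed_tests=True)
record_use("sort_moves_by_value", passed_tests=True)
record_use("sort_moves_by_value", passed_tests=False)
print(trust("sort_moves_by_value"))  # 0.5 after two successes and one failure
```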

Constraints on the types of admissible model code. We have strongly advocated for probabilistic causal models expressed as probabilistic programs.

What do you mean? Why is this specifically needed? Do you mean that if we want to build a Go player, we should have one portion of the code dedicated to assigning probabilities to what the best move is? Or does it only apply in the different context of finding policies?
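To check whether I understand the Go example, here is a minimal sketch of what "assigning probability to the best move" could look like as a tiny probabilistic program: a softmax over per-move value estimates. The values and move names are placeholders (in practice they might come from an engine), so this is only my guess at what you mean, not your proposal.

```python
# Toy "probabilistic program" over candidate Go moves: softmax over
# placeholder value estimates, then sample a move from that distribution.
import math
import random

def move_distribution(value_estimates: dict[str, float], temperature: float = 1.0) -> dict[str, float]:
    """Softmax over per-move value estimates -> probability each move is best."""
    exps = {m: math.exp(v / temperature) for m, v in value_estimates.items()}
    total = sum(exps.values())
    return {m: e / total for m, e in exps.items()}

def sample_move(value_estimates: dict[str, float]) -> str:
    """Sample a move from the induced distribution."""
    dist = move_distribution(value_estimates)
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

# Placeholder values for three candidate moves on some position.
values = {"D4": 0.8, "Q16": 0.6, "K10": 0.1}
print(move_distribution(values))
print(sample_move(values))
```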

Scaling this to multiple (human or LLM) contributors will require a higher-order model economy of some sort

Hmm. Is the argument something like "we want to scale and diversify the agents who review the code for more robustness (so not just one LLM, for instance), and that means varying levels of competence that we will want to estimate and sort"? I had not thought of it that way; I was mainly thinking of using the same model throughout, and I worry that having weaker code reviewers could drag the system down in terms of safety.
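If that is the argument, one way I could imagine "estimating and sorting" competence is to weight each reviewer's verdict by some track record. This is purely my attempt to make the question concrete; all names and numbers below are hypothetical.

```python
# Hypothetical sketch: reviewers of varying competence vote on whether a code
# block is safe, and votes are weighted by an estimated competence score
# (e.g. historical agreement with test outcomes). Names/numbers are made up.
def weighted_verdict(votes: dict[str, bool], competence: dict[str, float]) -> float:
    """Return the competence-weighted fraction of 'approve' votes."""
    total = sum(competence[r] for r in votes)
    approved = sum(competence[r] for r, ok in votes.items() if ok)
    return approved / total if total else 0.0

votes = {"strong_llm": True, "weak_llm": False, "human_expert": True}
competence = {"strong_llm": 0.9, "weak_llm": 0.4, "human_expert": 0.95}
print(weighted_verdict(votes, competence))  # ~0.82: approve despite one weak objection
```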

Regarding the Gaia Network, the idea seems interesting, though I am unclear about the full details yet. I had thought of extending betting markets to a full Bayesian network to get a better picture of what everyone believes, and maybe this is related to your idea. In any case, I believe that conveying one's full model of the world through this kind of network (and maybe more) may be doable, and quite important for some forms of global coordination and truth seeking.
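Here is the kind of thing I had in mind, as a toy sketch: participants submit probabilities for a parent claim and a conditional claim, these get pooled, and downstream beliefs fall out of the chain rule. The claims, the simple averaging, and all numbers are placeholders, and I am not claiming this is how the Gaia Network actually works.

```python
# Toy "betting market extended to a Bayesian network": aggregate submitted
# probabilities for A and B|A, then read off P(B) by marginalisation.
def aggregate(probabilities: list[float]) -> float:
    """Pool submitted probabilities (here: a plain average)."""
    return sum(probabilities) / len(probabilities)

# Purely illustrative claims and submissions:
# A = "compute keeps getting cheaper", B = "ARA agents become endemic".
p_A = aggregate([0.8, 0.7, 0.9])
p_B_given_A = aggregate([0.5, 0.4])
p_B_given_not_A = aggregate([0.1, 0.2])

# P(B) = P(B|A) P(A) + P(B|~A) P(~A)
p_B = p_B_given_A * p_A + p_B_given_not_A * (1 - p_A)
print(round(p_B, 3))  # 0.39 under these toy inputs
```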

Overall, I agree with your idea of a common library, and I think there could be some very promising iterations on that. I will contact you about collaboration ideas!

So the first image is based on AI control, which is indeed part of their strategies, and you could see constructability as mainly leading to this kind of strategy applied to plain code for specific subtasks. It's important to note that constructability itself is just a different approach to making understandable systems.

The main differences are:

  1. Instead of using a single AI, we use many expert-like systems that compose together, and we can inspect their interactions. For instance, for a Go player, you would use KataGo to predict the best move and flag moves that lost the game, another LLM to explain the correct move, and another one to factor this explanation into code (see the sketch after this list).

  2. We use supervision, both automatic and human, to oversee the produced code and test it, through simulations, unit tests, and code review, to ensure the code makes sense and does its task well.
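To illustrate point 1, here is a rough sketch of the shape of that composition. The three components are stubs standing in for a KataGo call and two LLM calls; the function names are mine and the point is only the pipeline structure, not any real API.

```python
# Sketch of composing expert-like systems: engine -> explainer -> code factorer.
def best_move(position: str) -> str:
    """Stub standing in for an engine like KataGo returning the best move."""
    return "D4"

def explain_move(position: str, move: str) -> str:
    """Stub standing in for an LLM explaining why the move is good."""
    return f"{move} secures the corner while keeping sente."

def factor_into_code(explanation: str) -> str:
    """Stub standing in for an LLM turning the explanation into reusable plain code."""
    return "def prefers_corner_security(position): ..."

def pipeline(position: str) -> str:
    move = best_move(position)
    explanation = explain_move(position, move)
    code = factor_into_code(explanation)
    # The produced code would then go through simulations, unit tests, and
    # review (point 2) before being merged into the shared library.
    return code

print(pipeline("empty 19x19 board"))
```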

Thinking more on this, and talking to friends who said that the lack of tree/DAG structure in dialogues makes the feature unappealing to them as well (especially in contexts where I was inviting them to have a dialogue together): would there be interest in a PR for this feature? I might work on it directly.

Came here to comment this. As it is, this seems like just talking on Discord/Telegram, but with the notion of publishing it later. What I really lack when discussing something is the ability to branch out and backtrack easily, and to have a view of all conversation topics at once (see the sketch below for the kind of structure I mean).

That being said, I really like the idea of dialogues, and I think this is a push in the right direction; I have greatly enjoyed the dialogues I have read so far. Excited to see where this goes!
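For concreteness, here is a minimal sketch of the branching/backtracking structure I have in mind: each message points at the message it replies to, so the dialogue forms a tree (or DAG) rather than a single linear thread. This is not how the existing dialogue feature is implemented, just an illustration of the data shape.

```python
# Minimal sketch of a branching dialogue: messages form a tree by replying to
# a parent, so several threads stay visible and you can backtrack to any node.
from dataclasses import dataclass, field

@dataclass
class Message:
    author: str
    text: str
    replies: list["Message"] = field(default_factory=list)

    def branch(self, author: str, text: str) -> "Message":
        """Start (or continue) a branch from this message and return the new node."""
        node = Message(author, text)
        self.replies.append(node)
        return node

root = Message("A", "What should the scope of this dialogue be?")
thread_one = root.branch("B", "Let's focus on the object-level question first.")
thread_two = root.branch("C", "Backtracking: can we settle definitions before that?")
print([child.text for child in root.replies])  # both branches visible at once
```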

Reposting (after a slight rewrite) from the telegram group:

This might be a nitpick, but to my (maybe misguided) understanding, alignment is only a very specific subfield of AI safety research, which basically boils down to "how do I specify a set of rules/utility functions/designs that avoid meta- or mesa-optimization with dramatic unforeseen consequences?" (This is at least how I understood MIRI's focus pre-2020.)

For instance, as I understand it, interpretability research is not directly alignment research. Instead, it is part of the broader "AI safety research" (which includes alignment research, interpretability, transparency, corrigibility, ...).

With that being said, I do think that your points still hold for renaming "AI safety research" to Artificial Intention Research, and I would very much be in favor of it. It is more self-explanatory, catchier, and does not require doom assumptions to be worth investigating, which I think matters a lot in public communication.

(

Also, I wouldn’t ban orgies, that won’t work

I'm not sure about that point anymore. Monkeypox's spread seems so slow, and only men are getting infected even though the proportion of women who have sex with men who have sex with men is not that small. I wonder whether the spread of monkeypox isn't mostly driven by orgies and big events at the moment, and whether this isn't the best moment to do it. Though it might be a bit too late now; I think it would have worked two to four weeks ago.)
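Here is the back-of-envelope shape of that argument. All numbers below are made-up placeholders, not real epidemiological data; the only point is that if spread were driven by ordinary sexual contact, the share of MSM who also have female partners would predict a visible number of female cases, so their near-absence points at specific events/venues instead.

```python
# Toy illustration of the reasoning; every number is a hypothetical placeholder.
cases_in_msm = 1000                    # hypothetical case count
share_msm_with_female_partners = 0.2   # hypothetical share
onward_transmission_prob = 0.3         # hypothetical per-partnership probability

expected_female_cases = (
    cases_in_msm * share_msm_with_female_partners * onward_transmission_prob
)
print(expected_female_cases)  # 60 expected female cases under these toy assumptions
```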

People can totally understand all of this, and also people mostly do understand most of this

On the other hand, the very fact that we say monkeypox is spreading within the community of men having sex with men is symbolic of the problem, to me. Being MSM and being in this monkeypox-spreading community are very correlated, sure, but not synonymous; the cluster we're talking about is more specific: it's the cluster of people who participate in orgies, have sex with strangers several times a week, etc.

Seeing how we're still conflating the two in discussions I've seen, I can understand the worry that this will reflect on the non-straight community, though I do agree that this messaging makes little sense.

Also

people mostly do understand most of this

I know four people who went to a sex-positive/orgy-adjacent event recently, and this did not seem to be a concern for them (nor for the event itself; its safety considerations only covered COVID). I also know someone who still had hookups two or three weeks ago.

It seems like the warnings about monkeypox may be failing in the very community we're talking about.

The more I think about it, the more I wonder whether boiling the potatoes whole infused them with compounds from the peels and significantly increased the quantity of solanine I was consuming. An obvious confounder is that whole-boiled potatoes are less fun to eat than potatoes prepared in more varied forms, so this doesn't discriminate between that and the "fun food" theory.

Thanks a lot for the estimate, I'll look into recent studies of this to see what I find!
