This idea is half-baked; it has some nice properties but doesn't seem to me like a solution to the problem I most care about. I'm publishing it because maybe it points someone else towards a full solution, or solves a problem they care about, and out of a general sense that people should publish negative results.

Many risky activities impact not just the person doing the activity, but also bystanders or the public at large. Governments often require the ability to compensate others as a precondition for engaging in the risky activity; requirements to have car insurance in order to drive are a common example.

In most situations, this works out fine:

  1. A competitive insurance market means that customers aren't overcharged too much (since they'll switch insurance providers to whoever estimates their risk as being the lowest).
  2. Accidents are common enough that insurance companies that are bad at pricing quickly lose too much money and adjust their prices upwards (so customers aren't undercharged either).
  3. Accidents are small enough that insurance companies can easily absorb the losses from mispriced insurance.
  4. Accidents are predictable enough that insurers can price premiums by driver moderately well.
  5. Drivers are common enough that simple prediction rules make more sense than putting dedicated thought into how much to charge each driver.

Suppose we adjust the parameters of the situation, and now instead of insuring drivers doing everyday trips, we're trying to insure rare, potentially catastrophic events, like launching nuclear material into orbit to power deep space probes. Now a launch failure potentially affects millions of people, and estimating the chance of failure is worth well more than a single formula's attention.

As a brief aside, why try to solve this with insurance? Why not just have regulators decide whether you can or can't do something? Basically, I believe that prices transmit information, and allow you to make globally correct decisions while only attending to local considerations. If the potential downside of something is a billion dollars, and you have a way to estimate micro-failures (millionths of a failure), you can price each micro-failure at a thousand dollars and answer whether or not mitigations are worth it (a mitigation that costs $5,000 and removes 4 micro-failures isn't worth it, but one that removes 6 is) and whether or not the whole project is worth doing at all. It seems more flexible to have people co-design their launch with their insurer than with the regulator.
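Concretely, as a sketch in Python (the numbers are the ones from the example; the constant and helper names are just mine for illustration):

```python
# Pricing decisions in micro-failures: with a $1B potential downside,
# one micro-failure (a one-in-a-million chance of the catastrophe) is
# worth $1,000 of expected damage.
DOWNSIDE = 1_000_000_000             # potential damages, in dollars
MICROFAILURE = DOWNSIDE / 1_000_000  # $1,000 per micro-failure

def mitigation_worth_it(microfailures_removed: float, cost: float) -> bool:
    """A mitigation pays for itself iff the expected damage avoided exceeds its cost."""
    return microfailures_removed * MICROFAILURE > cost

print(mitigation_worth_it(4, 5_000))  # False: avoids $4,000 of expected damage for $5,000
print(mitigation_worth_it(6, 5_000))  # True:  avoids $6,000 of expected damage for $5,000
```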

But the title of this post is Secondary Risk Markets. A price on the risk that's allowed to float is also more robust: if Geico disagrees with State Farm's estimates, then we want them to bet against each other and reach a consensus price, rather than the person doing the risky activity just choosing the lowest bidder. [That is, we'd like this to be able to counteract the Unilateralist's Curse, by replacing an inexploitable market with an exploitable one.]

For example, suppose Alice wants to borrow a fragile thousand-dollar camera to do a cool photoshoot, and there's some probability she ruins it. By default, this requires that she post $1,000, which she probably doesn't want to do on her own; instead she goes to Bob, who estimates her risk at 5%, and Carol, who estimates her risk at 9%. Alice offers to pay Bob $51 if he puts up the $1,000, which is $1 in expected profit for Bob by his own estimate.

If Bob would say yes to that, Carol would want to take that bet too; she would like to give Bob $51 in exchange for $1,000 if Alice breaks the camera, since that's $39 in expected profit for Carol. And Bob, if actually betting with Carol, would want to set the price at something more like $70, since that equalizes the expected profit for the two of them ($20 each, by their respective estimates), with the actual price depending on how quickly each of their prices moves in the presence of counterparties.
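A quick check of that expected-value arithmetic (nothing here is part of the mechanism, it's just the math from the example):

```python
CAMERA = 1_000                # value at risk, in dollars
P_BOB, P_CAROL = 0.05, 0.09   # Bob's and Carol's estimates of Alice's risk

def seller_ev(premium: float, p: float) -> float:
    """Insurer's expected profit from covering the camera at this premium."""
    return premium - p * CAMERA

def buyer_ev(premium: float, p: float) -> float:
    """Expected profit from buying that same payout at this premium."""
    return p * CAMERA - premium

print(round(seller_ev(51, P_BOB), 2))    # 1.0:  Bob insuring Alice at $51
print(round(buyer_ev(51, P_CAROL), 2))   # 39.0: Carol buying the payout from Bob at $51
print(round(seller_ev(70, P_BOB), 2),    # 20.0: at $70, the two expected
      round(buyer_ev(70, P_CAROL), 2))   # 20.0: profits are equalized
```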

So how do we get Alice to pay $70 instead of $51?[1] 

My proposed scheme is as follows:

  • The regulator estimates the potential damages of the activity at $X.[2]
  • The person doing the activity has to post $2X as a bond. $X is held in reserve to pay out to victims if the risk materializes, and is returned if the risk is avoided. The other $X is posted for sale on a secondary market as 'synthetic risk' according to some predefined schedule (ask offers from 0% to 100%, effectively, tho the poster can immediately buy as much as they like); see the sketch after this list.
  • Anyone can post a bond to create more synthetic risk tied to the activity and sell it for whatever price they like.
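A minimal sketch of one such schedule, assuming the simple form used in the example below and in the comments: the $X of synthetic risk is split into equal tranches, one per percentage point, each asking the midpoint probability of its band. (The function name and tranche count are mine for illustration, not part of the proposal.)

```python
def synthetic_risk_offers(x: float, tranches: int = 100) -> list[tuple[float, float]]:
    """(face value, ask price) for each tranche of $x of synthetic risk.

    Tranche k covers the band [k%, (k+1)%] and asks its midpoint probability;
    buying it pays out `face` if the risk materializes, and nothing otherwise.
    """
    face = x / tranches
    return [(face, face * (k + 0.5) / tranches) for k in range(tranches)]

offers = synthetic_risk_offers(1_000)   # Bob's $1k of synthetic risk
print(offers[0])    # (10.0, 0.05): $10 of the 0-1% band asks 5 cents
print(offers[-1])   # (10.0, 9.95): $10 of the 99-100% band asks $9.95
```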

Continuing the example, Bob now needs to post $2k, with $1k set aside for the 'natural risk' and the other $1k up for sale. Bob immediately buys the $50 of synthetic risk that's available at prices of 5% or below, and now Carol would want to buy the $40 of synthetic risk available between 5% and 9%, which was listed for sale for $2.80. Note that Bob anticipates gaining 80 cents on that deal with Carol, if he sticks with his initial estimate: 5% of the time he has to pay out $40, which costs him $2 in expectation, but Carol paid $2.80 for it. Bob can create more of that synthetic risk and put it up for sale, for as long as he and Carol want additional exposure to that bet. [And Dan, if he also thought Carol was overestimating the risk, could create bonds and sell them to Carol as well.]
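To make those numbers concrete, using the same midpoint schedule as the sketch above (the totals are the ones in the paragraph):

```python
FACE, TRANCHES = 10, 100   # $10 of synthetic risk per percentage point

def cost_of_band(lo: int, hi: int) -> float:
    """Total ask price for the tranches covering [lo%, hi%]."""
    return sum(FACE * (k + 0.5) / TRANCHES for k in range(lo, hi))

bob_pays   = cost_of_band(0, 5)       # $1.25 for the $50 priced at or below his 5%
carol_pays = cost_of_band(5, 9)       # $2.80 for the $40 priced between 5% and 9%
bob_gain   = carol_pays - 0.05 * 40   # $0.80: Carol's $2.80 minus the $2.00 expected
                                      # payout at Bob's own 5% estimate
print(round(bob_pays, 2), round(carol_pays, 2), round(bob_gain, 2))  # 1.25 2.8 0.8
```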

But this is how we get a market price of 7% risk, not how Alice pays $70. Either Bob could have set the initial insurance contract with Alice such that she pays the market price once it stabilizes,[3] or Bob could have had Alice pay to post the synthetic risk bonds, or so on. From society's perspective, it doesn't matter too much whether Alice pays for the increased risk or Bob does, so long as someone enabling the action does.

Importantly, this system transfers money from worse predictors to better predictors, instead of from optimists to pessimists. If Alice in fact only breaks the camera in 5% of worlds, Carol will, in expectation, be transferring resources to Bob.

I think this is basically magnifying the consequences of rare events, both for good and for ill. More money riding on the derivatives of the events will sharpen society's estimate (and ensure there's a public record of it), and also cause insurers to 'update faster'. Rather than simply $1k moving based on Bob's estimate of 5% risk, $1.4k will be moved if Bob and Carol end up with a $400 side bet. This also means that the standard laws against insurance fraud will be that much more strained by the pressures of profit; here I propose mandating as much synthetic risk as actual risk, but you could easily swap out the 2 in 2X for any other number, or the supply schedule for another schedule, or so on. (Most of the listed offers will never be bought under this scheme, for example.) This makes risky business like this more capital-intensive, which is the main price being paid for the increased accuracy in estimates.

  1. ^

    The overall hope, here, is to have Alice only do projects that make sense once the unilateralist's curse has been accounted for. If borrowing the camera is worth $80 to Alice, this project should still happen even tho it seems like a bad idea to Carol; if it's worth $60 to Alice, this project shouldn't happen even tho it seems like a good idea to Bob.

  2. ^

    This could probably also be done with a market rather than by fiat, but I don't think that will materially change the analysis in this post.

  3. ^

    Doesn't this take away a lot of the value for Alice, since she was hoping to de-risk this whole affair and now the risk is back? Some, since Alice now has to pay an unknown premium instead of a known one, but it's still substantially better than having to post the $1k herself. But also, if we imagine these as one-off events like rocket launches instead of much more common events like getting in the car, it seems much more plausible that Alice can call off the photoshoot if the premiums get too high.

4 comments

> So how do we get Alice to pay $70 instead of $51?

I don't understand why we want to do this. You mention the "unilateralist's curse", but this sounds more like the "auction winner's curse", which I would expect an insurer to already be taking into account when setting their prices (as that's the insurer's entire core competency).

> Bob immediately buys the $50 of synthetic risk that's available at prices of 5% or below, and now Carol would want to buy the $40 of synthetic risk available between 5% and 9%, which was listed for sale for $2.80

I think this is specifically the part of the proposal I don't understand. How much of the synthetic risk is available to buy at each price? If you buy $1 of synthetic risk for $0.05, does that mean you get $1.00 if Alice breaks the camera, and $0.00 if Alice does not?

My extremely tentative guess is that the proposal is that there is $1000 bet on "Alice will break the camera" spread uniformly across the probability distribution, and then anyone can buy the NO position at that price point. So buying the first $1 of a $1000 pool of risk costs $0.0005, the next $1 costs $0.0015, and so on for a cost of $1.25 to buy the first $50 (5%) of the risk, and $2.80 to buy the next $40 (to move from 5% to 9%). And then the remaining $910 of the risk ends up unsold (and I'm not sure what happens to it).

But this just sounds like "Alice gets her insurance from Bob at $51 and also Bob and Carol make a side bet where Bob pays Carol $40.00 if Alice breaks the camera and Carol pays Bob $2.80 if Alice does not break the camera". I think Bob and Carol are both pretty happy that you've legalized prediction markets / binary options, but I don't see how their side bet affects Alice except insofar as maybe Carol's bet changes Bob's assessment of the risk of the camera being broken.

> I don't understand why we want to do this.

I want Alice to have help choosing what things to do and not do, in the form of easily understandable prices that turn uncertain badness ("it'll probably be fine, I probably won't break the camera") into certain costs ("hmm, am I really going to get $70 worth of value from using this camera?").

I am most interested in this in contexts where self-insurance is not reasonable to expect. Like, if some satellite company / government agency causes Kessler Syndrome, they're not going to be able to pay back the rest of the Earth on their own, and so there's some temptation to just ignore that outcome; "we'll be bankrupt anyway." But society as a whole very much does not want them to ignore that outcome; society wants avoiding that outcome to be more important to them than the survival of their company, and something like apocalypse insurance seems like the right way to go about that.

But how do you price the apocalypse insurance? You don't want to just kick the can down the road, where now the insurance company is trying to look good to regulators while being cheap enough for customers to get business, reasoning "well, we'll be bankrupt anyway" about the catastrophe happening.

> You mention the "unilateralist's curse", but this sounds more like the "auction winner's curse",

I think those are very similar concepts, to the point of often being the same.

> which I would expect an insurer to already be taking into account when setting their prices (as that's the insurer's entire core competency).

I probably should have brought up the inexploitability concept from Inadequate Equilibria; I'm arguing that mistaken premiums are inexploitable, because Carol can't make any money from correcting Bob's mistaken belief about Alice, and I want a mechanism to make it exploitable.

Normally insurers just learn from bad bets after the fact and this is basically fine, from society's point of view; when we're insuring catastrophic risks (and using insurance premiums to determine whether or not to embark on those risks) I think it's worth trying to make the market exploitable.

> If you buy $1 of synthetic risk for $0.05, does that mean you get $1.00 if Alice breaks the camera, and $0.00 if Alice does not?

Yes, the synthetic risk paying out is always conditional. The sketch I have for that example is that Bob has to offer $10 of synthetic risk at each percentage point, except I did the math as tho it were continuous, which you can also do by just choosing midpoints. So there's $10 for sale at $0.05, another $10 at $0.15, and so on up through $9.95; Carol's $40 for $2.80 comes from buying the $0.55 + $0.65 + $0.75 + $0.85 tranches (and she doesn't buy the $0.95 one because, at her 9% estimate, a $10 tranche is worth $0.90 to her, so it looks like a 5 cent loss). That is, your tentative guess looks right to me.
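A quick sketch of that tranche-by-tranche logic (the buy rule, that each party takes any tranche priced below their own valuation, is the assumption here):

```python
FACE = 10                                             # $10 per percentage-point tranche
asks = [FACE * (k + 0.5) / 100 for k in range(100)]   # $0.05, $0.15, ..., $9.95

bob   = [a for a in asks if a <= 0.05 * FACE]                # everything at 5% or below
carol = [a for a in asks if 0.05 * FACE < a < 0.09 * FACE]   # the next four tranches

print(len(bob) * FACE, round(sum(bob), 2))       # $50 of risk for $1.25
print(len(carol) * FACE, round(sum(carol), 2))   # $40 of risk for $2.80; the $0.95
                                                 # tranche is a 5-cent expected loss to her
```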

The $910 that goes unsold is still held by Bob, so if the camera is wrecked Bob has to pay himself $910, which doesn't matter.

As you point out, Bob pays $1.25 for the first $50 of risk, which ends up being a wash. Does that just break the whole scheme, since Bob could just buy all the required synthetic risk and replicate the two-party insurance market? Well... maybe. Maybe you need a tiny sales tax, or something, but I think Bob is incentivized to participate in the market. Why did we need to require it, then? I don't have a good answer there. (Maybe it's easier to have mandatory prediction markets than merely legal ones.)

> I probably should have brought up the inexploitability concept from Inadequate Equilibria; I'm arguing that mistaken premiums are inexploitable, because Carol can't make any money from correcting Bob's mistaken belief about Alice, and I want a mechanism to make it exploitable.

Ah, this clarifies the intent.

> As you point out, Bob pays $1.25 for the first $50 of risk, which ends up being a wash. Does that just break the whole scheme, since Bob could just buy all the required synthetic risk and replicate the two-party insurance market?

As stated, I think so, but I think that's a pretty easy fix. I think it works if, any time Bob is going to offer such an insurance policy to Alice at a certain rate, he must also offer at least 1x as much synthetic risk for sale under this scheme, at all rates higher than that one, to any other insurer in the market. I'm not sure whether there's any direct precedent for forcing something to be sold at a certain price to one party as a condition of selling it to another party, though it rhymes with right of first refusal.
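A sketch of what that obligation might look like, reusing the midpoint schedule from the post (this encoding of the rule is one guess, not a worked-out design):

```python
def mandatory_offers(premium_rate: float, face_total: float = 1_000, n: int = 100):
    """Tranches Bob must list for sale at all rates above the rate he quoted Alice."""
    face = face_total / n
    return [(face, face * (k + 0.5) / n)
            for k in range(n) if (k + 0.5) / n > premium_rate]

print(len(mandatory_offers(0.051)))   # 95 tranches: everything above Alice's 5.1% rate
```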

One note is that, by forcing Bob to sell risk to Carol, you're also creating a "cause Alice to break the camera" bounty for Carol. At a 1X multiplier and nontrivial probabilities of loss, that bounty is probably not a very powerful force, but it's something to be aware of if scaling to much higher multipliers.

In any case, cool mechanism!

Edit: Also, if your concern is that Bob probably doesn't have the ability to pay out if Alice breaks the camera, forcing Bob to collect additional money now from Carol in exchange for owing even more money if Alice breaks the camera doesn't necessarily help. Maybe if you make Bob put all of the risk that Carol bought into escrow, though? That does put a certain floor on the cost of insurance that has less to do with risk and more to do with interest rates and Carol's willingness to lose small amounts of money in expectation to lock up her competitor's liquidity. Again, this seems unlikely to be much of an issue in practice for >1% chances of insurance paying out and a 1X multiplier.

> the problem I most care about

I want markets for x-risks, basically. Suppose someone wants to train an AI and they're pretty sure it'll be fine, but only pretty sure. How do we aggregate estimates to figure out whether or not the AI should be trained? Seems like we should be able to have a risk market. [so8res proposes apocalypse insurance here with "premiums dependent on their behavior", but what's the dependence? Is it just set by the regulator?]

But the standard problems appear; on the "we all die" risk, the market doesn't exist in the worlds where the bet would pay out, and so people who bet on risk never get paid.

You could imagine instead using a cap-and-trade system, where, say, only 3 AIs can be trained per year and companies bid for one of the permits, but it seems like this is still tilted towards "who thinks they can make the most money from success?" and not "who does the group think is least likely to fail?". You could instead have an explicit veto/permission system, where maybe you have 11 votes on whether or not to go thru with an AI training run and you need to buy at least 6 'yes'es, but this doesn't transfer resources from worse predictors to better predictors, just from projects that look worse to projects that look better.

And so I think we end up with, as johnswentworth suggests over here, needing to use regular liability / regular insurance, where people are betting on short-term questions that we expect to resolve ("how will this lawsuit by rightsholders against generative AI companies go?") instead of the unresolvable questions that are most existentially relevant.