I.

Aeon: Post-Empirical Science Is An Oxymoron, And It Is Dangerous:

There is no agreed criterion to distinguish science from pseudoscience, or just plain ordinary bullshit, opening the door to all manner of metaphysics masquerading as science. This is ‘post-empirical’ science, where truth no longer matters, and it is potentially very dangerous.

It’s not difficult to find recent examples. On 8 June 2019, the front cover of New Scientist magazine boldly declared that we’re ‘Inside the Mirrorverse’. Its editors bid us ‘Welcome to the parallel reality that’s hiding in plain sight’. […]

[Some physicists] claim that neutrons [are] flitting between parallel universes. They admit that the chances of proving this are ‘low’, or even ‘zero’, but it doesn’t really matter. When it comes to grabbing attention, inviting that all-important click, or purchase, speculative metaphysics wins hands down.

These theories are based on the notion that our Universe is not unique, that there exists a large number of other universes that somehow sit alongside or parallel to our own. For example, in the so-called Many-Worlds interpretation of quantum mechanics, there are universes containing our parallel selves, identical to us but for their different experiences of quantum physics. These theories are attractive to some few theoretical physicists and philosophers, but there is absolutely no empirical evidence for them. And, as it seems we can’t ever experience these other universes, there will never be any evidence for them. As Broussard explained, these theories are sufficiently slippery to duck any kind of challenge that experimentalists might try to throw at them, and there’s always someone happy to keep the idea alive.

Is this really science? The answer depends on what you think society needs from science. In our post-truth age of casual lies, fake news and alternative facts, society is under extraordinary pressure from those pushing potentially dangerous antiscientific propaganda – ranging from climate-change denial to the anti-vaxxer movement to homeopathic medicines. I, for one, prefer a science that is rational and based on evidence, a science that is concerned with theories and empirical facts, a science that promotes the search for truth, no matter how transient or contingent. I prefer a science that does not readily admit theories so vague and slippery that empirical tests are either impossible or they mean absolutely nothing at all.

As always, a single quote doesn’t do the argument justice, so go read the article. But I think this captures the basic argument: multiverse theories are bad, because they’re untestable, and untestable science is pseudoscience.

Many great people, both philosophers of science and practicing scientists, have already discussed the problems with this point of view. But none of them lay out their argument in quite the way that makes the most sense to me. I want to do that here, without claiming any originality or special expertise in the subject, to see if it helps convince anyone else.

II.

Consider a classic example: modern paleontology does a good job at predicting dinosaur fossils. But the creationist explanation – Satan buried fake dinosaur fossils to mislead us – also predicts the same fossils (we assume Satan is good at disguising his existence, so that the lack of other strong evidence for Satan doesn’t contradict the theory). What principles help us realize that the Satan hypothesis is obviously stupid and the usual paleontological one more plausible?

One bad response: paleontology can better predict characteristics of dinosaur fossils, using arguments like “since plesiosaurs are aquatic, they will be found in areas that were underwater during the Mesozoic, but since tyrannosaurs are terrestrial, they will be found in areas that were on land”, and this makes it better than the Satan hypothesis, which can only retrodict these characteristics. But this isn’t quite true: since Satan is trying to fool us into believing the modern paleontology paradigm, he’ll hide the fossils in ways that conform to its predictions, so the Satan hypothesis also predicts that plesiosaur fossils will only be found in areas that were underwater – otherwise the jig would be up!

A second bad response: “The hypothesis that all our findings were planted to deceive us bleeds into conspiracy theories and touches on the problem of skepticism. These things are inherently outside the realm of science.” But archaeological findings are very often deliberate hoaxes planted to deceive archaeologists, and in practice archaeologists consider and test that hypothesis the same way they consider and test every other hypothesis. Rule this out by fiat and we have to accept Piltdown Man, or at least claim that the people arguing against the veracity of Piltdown Man were doing something other than Science.

A third bad response: “Satan is supernatural and science is not allowed to consider supernatural explanations.” Fine then, replace Satan with an alien. I think this is a stupid distinction – if demons really did interfere in earthly affairs, then we could investigate their actions using the same methods we use to investigate every other process. But this would take a long time to argue well, so for now let’s just stick with the alien.

A fourth bad response: “There is no empirical test that distinguishes the Satan hypothesis from the paleontology hypothesis, therefore the Satan hypothesis is inherently unfalsifiable and therefore pseudoscientific.” But this can’t be right. After all, there’s no empirical test that distinguishes the paleontology hypothesis from the Satan hypothesis! If we call one of them pseudoscience based on their inseparability, we have to call the other one pseudoscience too!

A naive Popperian (which maybe nobody really is) would have to stop here, and say that we predict dinosaur fossils will have such-and-such characteristics, but that the question of what process drives this pattern – a long-dead ecosystem of actual dinosaurs, or the Devil planting dinosaur bones to deceive us – is a mystical one beyond the ability of Science to even conceivably solve.

I think the correct response is to say that both theories explain the data, and one cannot empirically test which theory is true, but the paleontology theory is more elegant (I am tempted to say “simpler”, but that might imply I have a rigorous mathematical definition of the form of simplicity involved, which I don’t). It requires fewer other weird things to be true. It involves fewer other hidden variables. It transforms our worldview less. It gets a cleaner shave with Occam’s Razor. This elegance is so important to us that it explains our vast preference for the first theory over the second.

A long tradition of philosophers of science has already written eloquently about this, summed up by Sean Carroll here:

What makes an explanation “the best.” Thomas Kuhn, after his influential book The Structure of Scientific Revolutions led many people to think of him as a relativist when it came to scientific claims, attempted to correct this misimpression by offering a list of criteria that scientists use in practice to judge one theory better than another one: accuracy, consistency, broad scope, simplicity, and fruitfulness. “Accuracy” (fitting the data) is one of these criteria, but by no means the sole one. Any working scientist can think of cases where each of these concepts has been invoked in favor of one theory or another. But there is no unambiguous algorithm according to which we can feed in these criteria, a list of theories, and a set of data, and expect the best theory to pop out. The way in which we judge scientific theories is inescapably reflective, messy, and human. That’s the reality of how science is actually done; it’s a matter of judgment, not of drawing bright lines between truth and falsity or science and non-science. Fortunately, in typical cases the accumulation of evidence eventually leaves only one viable theory in the eyes of most reasonable observers.

The dinosaur hypothesis and the Satan hypothesis both fit the data, but the dinosaur hypothesis wins hands-down on simplicity. As Carroll predicts, most reasonable observers are able to converge on the same solution here, despite the philosophical complexity.

III.

I’m starting with this extreme case because its very extremity makes it easier to see the mechanism in action. But I think the same process applies to other cases that people really worry about.

Consider the riddle of the Sphinx. There’s pretty good archaeological evidence supporting the consensus position that it was built by Pharaoh Khafre. But there are a few holes in that story, and a few scattered artifacts suggest it was actually built by Pharaoh Khufu; a respectable minority of archaeologists believe this. And there are a few anomalies which, if taken wildly out of context, you can use to tell a story that it was built long before Egypt existed at all, maybe by Atlantis or aliens.

So there are three competing hypotheses. All of them are consistent with current evidence (even the Atlantis one, which was written after the current evidence was found and carefully adds enough epicycles not to blatantly contradict it). Perhaps one day evidence will come to light that supports one above the others; maybe in some unexcavated tomb, a hieroglyphic tablet says “I created the Sphinx, sincerely yours, Pharaoh Khufu”. But maybe this won’t happen. Maybe we already have all the Sphinx-related evidence we’re going to get. Maybe the information necessary to distinguish among these hypotheses has been utterly lost beyond any conceivable ability to reconstruct.

I don’t want to say “No hypothesis can be tested any further, so Science is useless to us here”, because then we’re forced to conclude stupid things like “Science has no opinion on whether the Sphinx was built by Khafre or Atlanteans,” whereas I think most scientists would actually have very strong opinions on that.

But what about the question of whether the Sphinx was built by Khafre or Khufu? This is a real open question with respectable archaeologists on both sides; what can we do about it?

I think the answer would have to be: the same thing we did with the Satan vs. paleontology question, only now it’s a lot harder. We try to figure out which theory requires fewer other weird things to be true, fewer hidden variables, less transformation of our worldview – which theory works better with Occam’s Razor. This is relatively easy in the Atlantis case, and hard but potentially possible in the Khafre vs. Khufu case.

(Bayesians can rephrase this to: given that we have a certain amount of evidence for each, can we quantify exactly how much evidence, and what our priors for each should be? It would end not with a decisive victory of one or the other, but with a probability distribution – maybe an 80% chance it was Khafre and a 20% chance it was Khufu.)

I think this is a totally legitimate thing for Egyptologists to do, even if it never results in a particular testable claim that gets tested. If you don’t think it’s a legitimate thing for Egyptologists to do, I have trouble figuring out how you can justify Egyptologists rejecting the Atlantis theory.

(Again, Bayesians would start with a very low prior for Atlantis, and assess the evidence as very low, and end up with a probability distribution something like Khafre 80%, Khufu 19.999999%, Atlantis 0.000001%)
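To make those parentheticals concrete, here is a minimal sketch of the calculation they describe, with entirely invented priors and likelihoods (none of these numbers come from actual Egyptology; only the mechanics are meant seriously):

    # Toy Bayesian update for the Sphinx question. All numbers are invented
    # for illustration; only the arithmetic is meant seriously.
    prior = {"Khafre": 0.45, "Khufu": 0.45, "Atlantis": 0.10}

    # How strongly each hypothesis predicts the evidence we actually have.
    likelihood = {"Khafre": 0.8, "Khufu": 0.2, "Atlantis": 1e-7}

    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    posterior = {h: w / total for h, w in unnormalized.items()}
    print(posterior)  # roughly 80% Khafre, 20% Khufu, ~0.000002% Atlantis

Different priors or likelihood assessments move the numbers around, but the output is always a probability distribution over the hypotheses rather than a verdict.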

IV.

How does this relate to things like multiverse theory? Before we get there, one more hokey example:

Suppose scientists measure the mass of one particle at 32.604 units, the mass of another related particle at 204.857 units, and the mass of a third related particle at 145,178.152 units. For a while, this is just how things are – it seems to be an irreducible brute fact about the universe. Then some theorist notices that if you set the mass of the first particle as x, then the second is 2πx and the third is 4/3 πx^3. They theorize that perhaps the quantum field forms some sort of extradimensional sphere, the first particle represents the radius of the sphere, the second the circumference of a great circle of that sphere, and the third the volume of the sphere.

(please excuse the stupidity of my example, I don’t know enough about physics to come up with something that isn’t stupid, but I hope it will illustrate my point)
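For concreteness, the toy arithmetic can be checked directly (these are the invented masses from the example, not real particle data):

    import math

    x = 32.604                            # made-up mass of the first particle
    circumference = 2 * math.pi * x       # great-circle circumference if x is the radius
    volume = (4 / 3) * math.pi * x ** 3   # volume of the sphere of radius x

    print(round(circumference, 3))        # 204.857, the second made-up mass
    print(round(volume, 3))               # 145178.152, the third made-up mass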

In fact, imagine that there are a hundred different particles, all with different masses, and all one hundred have masses that perfectly correspond to various mathematical properties of spheres.

Is the person who made this discovery doing Science? And should we consider their theory a useful contribution to physics?

I think the answer is clearly yes. But consider what this commits us to. Suppose the scientist came up with their Extradimensional Sphere hypothesis after learning the masses of the relevant particles, and so it has not predicted anything. Suppose the extradimensional sphere is outside normal space, curled up into some dimension we can’t possibly access or test without a particle accelerator the size of the moon. Suppose there are no undiscovered particles in this set that can be tested to see if they also reflect sphere-related parameters. This theory is exactly the kind of postempirical, metaphysical construct that the Aeon article savages.

But it’s really compelling. We have a hundred different particles, and this theory retrodicts the properties of each of them perfectly. And it’s so simple – just say the word “sphere” and the rest falls out naturally! You would have to be crazy not to think it was at least pretty plausible, or that the scientist who developed it had done some good work.

Nor do I think it seems right to say “The discovery that all of our unexplained variables perfectly match the parameters of a sphere is good, but the hypothesis that there really is a sphere is outside the bounds of Science.” That sounds too much like saying “It’s fine to say dinosaur bones have such-and-such characteristics, but we must never speculate about what kind of process produced them, or whether it involved actual dinosaurs”.

V.

My understanding of the multiverse debate is that it works the same way. Scientists observe the behavior of particles, and find that a multiverse explains that behavior more simply and elegantly than not-a-multiverse.

One (doubtless exaggerated) way I’ve heard multiverse proponents explain their position is like this: in certain situations the math declares two contradictory answers – in the classic example, Schrodinger’s cat will be both alive and dead. But when we open the box, we see only a dead cat or an alive cat, not both. Multiverse opponents say “Some unknown force steps in at the last second and destroys one of the possibility branches”. Multiverse proponents say “No it doesn’t, both possibility branches happen exactly the way the math says, and we end up in one of them.”
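To put this dumbed-down account in symbols (standard textbook notation, describing only the math both camps agree on):

    \tfrac{1}{\sqrt{2}}\big(|\text{alive}\rangle + |\text{dead}\rangle\big)\otimes|\text{ready}\rangle
    \;\longrightarrow\;
    \tfrac{1}{\sqrt{2}}\big(|\text{alive}\rangle\,|\text{sees alive}\rangle + |\text{dead}\rangle\,|\text{sees dead}\rangle\big)

The evolution on the right is what “the math says”. A collapse-style account adds a step that discards one of the two terms when the box is opened; the Everett account keeps both terms, each containing its own copy of the observer.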

Taking this exaggerated dumbed-down account as exactly right, this sounds about as hard as the dinosaurs-vs-Satan example, in terms of figuring out which is more Occam’s Razor compliant. I’m sure the reality is more nuanced, but I think it can be judged by the same process. Perhaps this is the kind of reasoning that only gets us to a 90% probability there is a multiverse, rather than a 99.999999% one. But I think determining that theories have 90% probability is a reasonable scientific thing to do.

VI.

At times, the Aeon article seems to flirt with admitting that something like this is necessary:

Such problems were judged by philosophers of science to be insurmountable, and Popper’s falsifiability criterion was abandoned (though, curiously, it still lives on in the minds of many practising scientists). But rather than seek an alternative, in 1983 the philosopher Larry Laudan declared that the demarcation problem is actually intractable, and must therefore be a pseudo-problem. He argued that the real distinction is between knowledge that is reliable or unreliable, irrespective of its provenance, and claimed that terms such as ‘pseudoscience’ and ‘unscientific’ have no real meaning.

But it always jumps back from the precipice:

So, if we can’t make use of falsifiability, what do we use instead? I don’t think we have any real alternative but to adopt what I might call the empirical criterion. Demarcation is not some kind of binary yes-or-no, right-or-wrong, black-or-white judgment. We have to admit shades of grey. Popper himself was ready to accept this, [saying]:

“The criterion of demarcation cannot be an absolutely sharp one but will itself have degrees. There will be well-testable theories, hardly testable theories, and non-testable theories. Those which are non-testable are of no interest to empirical scientists. They may be described as metaphysical.”

Here, ‘testability’ implies only that a theory either makes contact, or holds some promise of making contact, with empirical evidence. It makes no presumptions about what we might do in light of the evidence. If the evidence verifies the theory, that’s great – we celebrate and start looking for another test. If the evidence fails to support the theory, then we might ponder for a while or tinker with the auxiliary assumptions. Either way, there’s a tension between the metaphysical content of the theory and the empirical data – a tension between the ideas and the facts – which prevents the metaphysics from getting completely out of hand. In this way, the metaphysics is tamed or ‘naturalised’, and we have something to work with. This is science.

But as we’ve seen, many things we really want to include as science are not testable: our credence for real dinosaurs over Satan planting fossils, our credence for Khafre building the Sphinx over Khufu or Atlanteans, or elegant patterns that explain the features of the universe like the Extradimensional-Sphere Theory.

The Aeon article is aware of Carroll’s work – which, along with the paragraph quoted in Section II above, includes a lot of detailed Bayesian reasoning encompassing everything I’ve discussed. But the article dismisses it in a few sentences:

Sean Carroll, a vocal advocate for the Many-Worlds interpretation, prefers abduction, or what he calls ‘inference to the best explanation’, which leaves us with theories that are merely ‘parsimonious’, a matter of judgment, and ‘still might reasonably be true’. But whose judgment? In the absence of facts, what constitutes ‘the best explanation’?

Carroll seeks to dress his notion of inference in the cloth of respectability provided by something called Bayesian probability theory, happily overlooking its entirely subjective nature. It’s a short step from here to the theorist-turned-philosopher Richard Dawid’s efforts to justify the string theory programme in terms of ‘theoretically confirmed theory’ and ‘non-empirical theory assessment’. The ‘best explanation’ is then based on a choice between purely metaphysical constructs, without reference to empirical evidence, based on the application of a probability theory that can be readily engineered to suit personal prejudices.

“A choice between purely metaphysical constructs, without reference to empirical evidence” sounds pretty bad, until you realize he’s talking about the same reasoning we use to determine that real dinosaurs are more likely than Satan planting fossils.

I don’t want to go over the exact ways in which Bayesian methods are subjective (which I think are overestimated) vs. objective. I think it’s more fruitful to point out that your brain is already using Bayesian methods to interpret the photons striking your eyes into this sentence, to make snap decisions about what sense the words are used in, and to integrate them into your model of the world. If Bayesian methods are good enough to give you every single piece of evidence about the nature of the external world that you have ever encountered in your entire life, I say they’re good enough for science.

Or if you don’t like that, you can use the explanation above, which barely uses the word “Bayes” at all and just describes everything in terms like “Occam’s Razor” and “you wouldn’t want to conclude something like that, would you?”

I know there are separate debates about whether this kind of reasoning-from-simplicity is actually good enough, when used by ordinary people, to consistently arrive at truth. Or whether it’s a productive way to conduct science that will give us good new theories, or a waste of everybody’s time. I sympathize with some of these concerns, though I am nowhere near scientifically educated enough to have an actual opinion on the questions at play.

But I think it’s important to argue that even before you describe the advantages and disadvantages of the complicated Bayesian math that lets you do this, something like this has to be done. The untestable is a fundamental part of science, impossible to remove. We can debate how to explain it. But denying it isn’t an option.

27 comments

...I think the correct response is to say that both theories explain the data, and one cannot empirically test which theory is true, but the paleontology theory is more elegant (I am tempted to say “simpler”, but that might imply I have a rigorous mathematical definition of the form of simplicity involved, which I don’t).

The concept Scott seems to be looking for is "lower Kolmogorov complexity". Well, there might be debate about whether Kolmogorov complexity is exactly the right metric, but it seems clearly a vast improvement over having no mathematical definition.
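For readers who haven't met the definition: relative to a fixed universal machine U, the Kolmogorov complexity of a string x and the associated simplicity prior are (this is textbook algorithmic information theory, not something specific to Scott's argument):

    K(x) = \min\{\, \ell(p) : U(p) = x \,\}, \qquad m(x) \propto 2^{-K(x)}

where \ell(p) is the length of program p. Under this prior a hypothesis loses probability in proportion to the length of the shortest program that specifies it, which is one way of cashing out "requires fewer other weird things to be true". The catch is that K is uncomputable and depends on the choice of U up to an additive constant, so in practice it can only be approximated.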

...there is no unambiguous algorithm according to which we can feed in these criteria, a list of theories, and a set of data, and expect the best theory to pop out. The way in which we judge scientific theories is inescapably reflective, messy, and human. That’s the reality of how science is actually done; it’s a matter of judgment, not of drawing bright lines between truth and falsity or science and non-science.

Carroll's position seems much too pessimistic, giving up without even trying. Why "inescapably"? Before Newton, someone might have said that the way to guess the trajectory of a falling rock is inescapably messy and human. Now we know how to describe physics by mathematical equations, but not metaphysics. This is not a state of affairs we should just accept.

Algorithmic information theory and AI theory show a clear path towards formalizing metaphysics. I think it is entirely plausible that in the future we will have tools for rigorously comparing scientific theories. Perhaps in cases such as Atlantis a fully rigorous analysis would still be intractable, because of the "messy" domain, but when comparing competing theories of fundamental physics I see no reason why it can't be done. Even in the messier cases having a rigorous theory should lead to tools for making comparison less subjective.

TAG:

Algorithmic information theory and AI theory show a clear path towards formalizing metaphysics.

If you define metaphysics as being concerned with deciding between natural and supernatural explanations, the techniques we currently have that are based on algorithmic complexity aren't doing a great job.

The problem is that our standard notions of computational limits are based on physical limitations (the topic of hypercomputation deals with the computation that might be possible absent physical limits), so there is a question-begging assumption of physicalism built in.

I don't think hypercomputation is an issue for algorithmic information theory as a foundation for metaphysics/induction. The relevant question is not whether the world contains hypercomputation, but whether our mind is capable of hypercomputation. And here it seems to me like the answer is "no". Even if the answer was "yes", we could probably treat the hypercomputing part of the mind as part of the environment. I wrote a little about it here.

TAG:

Since the topic is metaphysics, and metaphysics is about what reality really is, the relevant question is whether the world contains hypercomputation.

Well, I am a "semi-instrumentalist": I don't think it is meaningful to ask what reality "really is", except for the projection of reality onto the "normative ontology".

TAG:

But you still don't have an a priori guarantee that a computable model will succeed – that doesn't follow from the claim that the human mind operates within computable limits. You could be facing evidence that all computable models must fail, in which case you should adopt a negative belief about physicalism/naturalism, even if you don't adopt a positive belief in some supernatural model.

Well, you don't have a guarantee that a computable model will succeed, but you do have some kind of guarantee that you're doing your best, because computable models are all you have. If you're using incomplete/fuzzy models, you can have a "doesn't know anything" model in your prior, which is a sort of "negative belief about physicalism/naturalism", but it is still within the same "quasi-Bayesian" framework.

The sword against this particular Gordian knot is “beliefs are for actions”.

It does help ground things but isn't a full accounting on the philosophy of science side since your decision model has ontological commitments.

"There is no view from nowhere." Your mind was created already in motion and thinks, whether you want it to or not, and whatever ontological assumptions it may start with, it has pragmatically already started with them years before you ever worried about such questions. Your Neurathian raft has already been replaced many times over on the basis of decisions and outcomes.

I’m not sure I follow what you’re saying. What are you suggesting is the specific problem that remains after the question of “should we believe this thing” is addressed via the “beliefs are for actions” approach?

Debates over multiverse theory aside, I have to point out that the example used by the writer for Aeon IS NOT A MULTIVERSE THEORY! It's a theory of dark matter. Are we now calling a universe with dark matter, a multiverse? Maybe the electromagnetic spectrum is a multiverse too: there's the X-ray-verse, the gamma-ray-verse, the infrared-verse...

Again, Bayesians would start with a very low prior for Atlantis, and assess the evidence as very low, and end up with a probability distribution something like Khafre 80%, Khufu 19.999999%, Atlantis 0.000001%

This isn't quite how a pure Bayesian analysis would work. We should end up with higher probability for Khafre/Khufu, even if the prior starts with comparable weight on all three.

We want to calculate the probability that the sphinx was built by Atlanteans, given the evidence: P[atlantis | evidence]. By Bayes' rule, that's proportional to P[evidence | atlantis] times the prior P[atlantis]. Let's just go ahead and fix the prior at 1/3 for the sake of exposition, so that the heavy lifting will be done by P[evidence | atlantis].

The key piece: what does P[evidence | atlantis] mean? If the new-agers say "ah, the Atlantis theory predicts all of this evidence perfectly", does that mean that P[evidence | atlantis] is very high? No, because we expect that the new-agers would have said that regardless of what evidence was found. A theory cannot assign high probability to all possible evidence, because the theory's evidence-distribution must sum to one. To properly compute P[evidence | atlantis], we have to step back and ask "before seeing this evidence, what probability would I assign it, assuming the sphinx was actually built by Atlanteans?"

What matters most for computing P[evidence | atlantis] is that the Atlantis theory puts nonzero probability on all sorts of unusual hypothetical evidence-scenarios. For instance, if somebody ran an ultrasound on the sphinx, and found that it contained pure aluminum, or a compact nuclear reactor, or a cavity containing tablets with linear A script on them, or anything else that Egyptians would definitely not have put in there... the Atlantis theory would put nonzero probability on all those crazy possibilities. But there's a lot of crazy possibilities, and allocating probability to all of them means that there can't be very much left for the boring possibilities - remember, it all has to add up to one, so we're on a limited probability budget here. On the other hand, Khafre/Khufu both assign basically-zero probability to all the crazy possibilities, which leaves basically their entire probability budget on the boring stuff.

So when the evidence actually ends up being pretty boring, P[evidence | atlantis] has to be a lot lower than P[evidence | khafre] or P[evidence | khufu].
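Here is a minimal numerical version of this "probability budget" point, with invented numbers (the likelihoods below are purely illustrative):

    # Each theory has one unit of probability to spread over possible findings.
    # Atlantis allows lots of exotic outcomes, so the boring outcome we actually
    # observed gets only a small share of its budget.
    p_boring_evidence = {"khafre": 0.90, "khufu": 0.90, "atlantis": 0.01}
    prior = {"khafre": 1 / 3, "khufu": 1 / 3, "atlantis": 1 / 3}  # equal, as above

    unnormalized = {h: prior[h] * p_boring_evidence[h] for h in prior}
    total = sum(unnormalized.values())
    posterior = {h: w / total for h, w in unnormalized.items()}
    print(posterior)  # ~0.497, ~0.497, ~0.006: Atlantis loses despite equal priors

Even with the prior deliberately fixed at 1/3 each, the likelihood term alone pushes Atlantis down, which is the point of the paragraph above.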

I feel like this is mostly a question of what you mean with "atlantis".

  • If you want to calculate P(evidence | the_specific_atlantis_that_newagers_specified_after_hearing_the_evidence) * P(the_specific_atlantis_that_newagers_specified_after_hearing_the_evidence), then the first term is going to be pretty high, and the second term would be very low (because it specifies a lot of things about what the Atlanteans did).
  • But if you want to calculate P(evidence | the_type_of_atlantis_that_people_mostly_associate_to_before_thinking_about_the_sphinx) * P(the_type_of_atlantis_that_people_mostly_associate_to_before_thinking_about_the_sphinx), the first term would be very low, while the second term would be somewhat higher.

The difference between the two cases is whether you think about the new agers as holding exactly one hypothesis and lying about what it predicts (as it cannot assign high probability to all of the things, since you're correct that the different probabilities must sum to 1), or whether you think about the new agers as switching to a new hypothesis every time they discover a new fact about the sphinx / every time they're asked a new question.

In this particular article, Scott mostly wants to make a point about cases where theories have similar P(E|T) but differ in the prior probabilities, so he focused on the first case.

Ah, I see. Thanks.

Regarding the particles with mass-ratios following sphere integrals: this isn't quite an analogy for many worlds. Particle masses following that sort of pattern is empirically testable: as particle mass measurements gain precision over time, the theory would predict that their mass ratios continue to match the pattern. Many worlds is a different beast: it is mathematically equivalent to other interpretations of quantum mechanics. The different interpretations provably make the same predictions for everything, always. They use the same exact math. The only difference is interpretation.

I don't think that the QM example is like the others. Explaining this requires a bit of detail.

From section V.:

My understanding of the multiverse debate is that it works the same way. Scientists observe the behavior of particles, and find that a multiverse explains that behavior more simply and elegantly than not-a-multiverse.

That's not an accurate description of the state of affairs.

In order to calculate correct predictions for experiments, you have to use the probabilistic Born rule (and the collapse postulate for sequential measurements). That these can be derived from the Many Worlds interpretation (MWI) is a conjecture which hasn't been proved in a universally accepted way.

So we have an interpretation which works but is considered inelegant by many, and we have an interpretation which is simple and elegant but is only conjectured to work. Considering the nature of the problems with the proofs, it is questionable whether the MWI can retain its elegant simplicity if it is made to work (see below).

One (doubtless exaggerated) way I’ve heard multiverse proponents explain their position is like this: in certain situations the math declares two contradictory answers – in the classic example, Schrodinger’s cat will be both alive and dead. But when we open the box, we see only a dead cat or an alive cat, not both. Multiverse opponents say “Some unknown force steps in at the last second and destroys one of the possibility branches”. Multiverse proponents say “No it doesn’t, both possibility branches happen exactly the way the math says, and we end up in one of them.”

What I find interesting is that Copenhagen-style interpretations looked ugly to me at first but got more sensible the more I learned about them. With most other interpretations it is the reverse: initially, they looked very compelling, but the intuitive pictures are often hard to make rigorous. For example, if you try to describe the branching process mathematically, it isn't possible to say when exactly the branches are splitting, or even that they are splitting in an unambiguous way at all. Without introducing something like an observer who sets a natural scale for when it is okay to approximate certain values by zero, it is very difficult to speak of different worlds consistently. But then the simplicity of the MWI is greatly reduced, and the difference between it and a Copenhagenish point of view is much more subtle.

Generally, regarding the interpretation of QM, there are two camps: realists who take the wave function as a real physical object (Schrödinger, Bohm, Everett) and people who take the wavefunction as an object of knowledge (Bohr, Einstein, Heisenberg, Fuchs).

If the multiverse opponent describes the situation involving "some unknown force" he is also in the realist camp and not a proponent of a Copenhagenish position. The most modern Copenhagenish position would be QBism which asserts "whenever I learn something new by means of a measurement, I update". From this point of view, QM is a generalization of probability theory, the wavefunction (or probability amplitude) is the object of knowledge which replaces ordinary probabilities, and the collapse rule is a generalized form of Bayesian updating. That doesn't seem less sensible to me than your description of the multiverse proponent. Of course, there's also a bullet to bite here: the abandonment of a mathematical layer below the level of (generalized) probabilities.
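To spell out the analogy gestured at here, in standard notation (calling the second rule "generalized Bayesian updating" is the QBist gloss, not a new result):

    P(h \mid e) = \frac{P(e \mid h)\,P(h)}{P(e)}
    \qquad \text{vs.} \qquad
    \rho \;\mapsto\; \frac{M_e\,\rho\,M_e^{\dagger}}{\mathrm{tr}\!\left(M_e\,\rho\,M_e^{\dagger}\right)}

On the left, ordinary conditioning on evidence e; on the right, the usual post-measurement state update after obtaining outcome e with measurement operator M_e. QBists read the second rule as a generalization of the first, with the quantum state playing the role of the probability assignment.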

The important point is that this is not about which position is simpler than the other but about a deep divide in the philosophical underpinnings of science.

Taking this exaggerated dumbed-down account as exactly right, this sounds about as hard as the dinosaurs-vs-Satan example, in terms of figuring out which is more Occam’s Razor compliant. I’m sure the reality is more nuanced, but I think it can be judged by the same process. Perhaps this is the kind of reasoning that only gets us to a 90% probability there is a multiverse, rather than a 99.999999% one. But I think determining that theories have 90% probability is a reasonable scientific thing to do.

As per what I have written above, I think that there's a crucial difference between the examples of the fossils and the sphinx on the one hand and the interpretation of QM on the other hand. Which interpretation of QM one prefers is connected to one's position on deep philosophical questions like "Is reductionism true?", "Is Nature fundamentally mathematical?", "What's consciousness?", etc. So the statement "[there's a] 90% probability there is a multiverse" is connected to statements of the form "there's a 90% probability that reductionism is true". Whether such statements are meaningful seems much more questionable to me than in the case of your other examples.

Generally, regarding the interpretation of QM, there are two camps: realists who take the wave function as a real physical object (Schrödinger, Bohm, Everett) and people who take the wavefunction as an object of knowledge (Bohr, Einstein, Heisenberg, Fuchs).

Einstein was a realist who was upset that the only interpretation available to him was anti-realist. Saying that he took the wavefunction as object of knowledge is technically true, ie, false.

a Copenhagenish position

Thanks for conceding that the Copenhagen interpretation has meant many things. Do you notice how many people deny that? It worries me.

Einstein was a realist who was upset that the only interpretation available to him was anti-realist. Saying that he took the wavefunction as object of knowledge is technically true, ie, false.

I agree that my phrasing was a bit misleading here. Reading it again, it sounds like Einstein wasn't a realist, which of course is false. For him, QM was a purely statistical theory which needed to be supplemented by a more fundamental realistic theory (a view which has been proven to be untenable only in 2012 by Pusey, Barrett and Rudolph).

Thanks for conceding that the Copenhagen interpretation has meant many things. Do you notice how many people deny that? It worries me.

I don't know how many people really deny this. Sure, people often talk about "the" Copenhagen interpretation but most physicists use it only as a vague label because they don't care much about interpretations. Who do you have in mind denying this and what exactly worries you?

TAG:

The most modern Copenhagenish position would be QBism which asserts “whenever I learn something new by means of a measurement, I update”.

There's no doubt a story as to why QBism didn't become the official LessWrong position.

I think so, too, but I don't know it (Eliezer's Sequence on QM is still on my reading list). Given the importance people around here put on Bayes' theorem, I find it quite surprising that the idea of a quantum generalization (which is what QBism is about) isn't discussed here apart from a handful of isolated comments. Two notable papers in this direction are

https://arxiv.org/abs/quant-ph/0106133

https://arxiv.org/abs/0906.2187


There might be more to "naive Popperianism" than you're making out. The testability criteria are not only intended to demarcate science, but also meaning. Of course, if it turns out that two theories which at first glance seem quite different do in fact mean the same thing, discussing "which one" is true is a category error. This idea is well-expressed by Wittgenstein:

Scepticism is not irrefutable, but palpably senseless, if it would doubt where a question cannot be asked. For doubt can only exist where there is a question; a question only where there is an answer, and this only where something can be said.

Now, there are problems with the falsification criterion, but the more general idea that our beliefs should pay rent is a valuable one. We might think, then, that the meaning of a theory is determined by the rent it can pay, and that of unobservables by their relation to the observables. A natural way to formalize this is the Ramsey-sentence:

Say you have terms for observables, O1, …, On, and for theoretical entities that aren't observable, T1, …, Tm. By repeated conjunction, you can combine all claims of the theory into a single sentence, Θ(O1, …, On; T1, …, Tm).

The Ramsey-sentence then is the claim that ∃X1 … ∃Xm Θ(O1, …, On; X1, …, Xm).

If this general idea of meaning is correct, then the Ramsey-sentence fully encompasses the meaning of the theory. If this sounds sensible so far, then the competing theories discussed here do indeed have the same meaning. The Ramsey-sentence is logically equivalent to the observable consequences of the theory being true. This is because according to the extensional characterisation of relations defined on a domain of individuals, every relation is identified with some set of subsets of the domain. The power set axiom entails the existence of every such subset and hence every such relation.
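A toy illustration of that equivalence, using a deliberately simple made-up theory with one theoretical term T1 linking two observables:

    \Theta:\;\; \forall y\,(O_1 y \to T_1 y) \;\wedge\; \forall y\,(T_1 y \to O_2 y)
    \qquad
    R(\Theta):\;\; \exists X\,\big(\forall y\,(O_1 y \to X y) \;\wedge\; \forall y\,(X y \to O_2 y)\big)

R(Θ) is true in any domain where ∀y (O1 y → O2 y) holds: just take X to be the extension of O1. So the Ramsey-sentence demands nothing beyond the theory's purely observational consequence, which is exactly the equivalence claimed above.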

Of course, Satan and the Atlanteans are in fact motte-and-bailey arguments, and the person advocating them is likely to make claims about them on subjects other than paleontology or the pyramids that do imply different observations than the scientific consensus, and as such render their theory straightforwardly false.

[This comment is no longer endorsed by its author]

I'm not sure that works. Imagine there are two theories about a certain button:

  1. Pressing the button gives you a dollar and does nothing else

  2. Pressing the button gives you a dollar and also causes an unobservable person to suffer

These theories give the same observations, but recommend different actions, so to me they seem to have different meanings.

TAG:

The testability criteria are not only intended to demarcate science, but also meaning.

Intended by Popper? What you are saying sounds much more like logical positivism.

Seems like you're right. I don't think it affects my argument though, just the name.

TAG:

I don't think the argument is strong whatever you call it.