Ben

Physicist and dabbler in writing fantasy/science fiction.

Comments

Ben3d21

You are right.

I thought the whole idea with the naming was that, in the convention whereby twelve is written "12", the symbol at the end ("2") is the one symbolising the smallest part, and that it was called "little endian" for that reason.

Now I have a lot of questions about how the names were chosen (to Wikipedia!). It seems really backwards.

Ben3d115

How does a little-endian writer do a decimal point? Do they put the fractional part of the number at the beginning (before the decimal point) and the integer part afterwards? E.g. 123.456 becomes 654.321? So, just as all integers in big-endian notation can be imagined to have a trailing ".0", they can all be imagined to have a leading "0." in little-endian?

The way we do it currently has the nice feature that the powers of 10 keep going in the same direction (smaller) through the decimal point. To maintain this feature, little-endian notation requires that everything before the decimal point is the sub-integer component. That has the property lsusr doesn't like: if we are reading character by character, the decimal point forces us to re-interpret all previous characters.
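A minimal sketch of the convention being discussed, assuming "little-endian decimal" just means writing the usual digit string in reverse order (the function name is mine, nothing standard):

```python
def to_little_endian(big_endian: str) -> str:
    """Reverse the digit string, so the least significant digit comes first."""
    return big_endian[::-1]

print(to_little_endian("123.456"))  # "654.321" - the fractional part now leads
print(to_little_endian("42.0"))     # "0.24"    - the trailing ".0" becomes a leading "0."
```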

[Edited to get the endians the right way around]

Ben7d20

Very interesting. It sounds like your "third person view from nowhere" vs. "first person view from somewhere" is very similar to something I was thinking about recently. I called them "objectively distinct situations" in contrast with "subjectively distinct situations". My view is that most of the anthropic arguments that "feel wrong" to me are built on trying to make me assign equal probability to all subjectively distinct scenarios, rather than objectively distinct ones. E.g. a replication machine makes it so there are two of me; then "I" could be either of them, leaving two subjectively distinct cases, even if on the object level there is actually no distinction between "me" being clone A or clone B. [1]

I am very sceptical of this ADT. If you think the time/place you have ended up in is unusually important, I think that is more likely explained by something like "people decide what is important based on what is going on around them".

 

[1] My thoughts are here: https://www.lesswrong.com/posts/v9mdyNBfEE8tsTNLb/subjective-questions-require-subjective-information

Ben8d20

I am having trouble following you. If little-omega is a reference frame I would expect it to be a function that takes in the "objective world" (Omega) and spits out a subjective one. But you seem to have it the other way around? Or am I misunderstanding?

Ben8d1114

I would guess that Lorenz's work on deterministic chaos does not get many counterfactual discovery points. He noticed the chaos in his research because of his interactions with a computer doing simulations. This happened in 1961. Now, the question is: how many people were doing numerical calculations on computers in 1961? It could plausibly have been ten times as many by 1970, a hundred times as many by 1980. Those numbers are obviously made up, but the direction they gesture in is my point. Chaos was a field made ripe for discovery by the computer. That doesn't take anything away from Lorenz's hard work and intelligence, but it does mean that if he had not taken the leap we can be fairly confident someone else would have. Put another way: if Lorenz is assumed to have had a high counterfactual impact, then it becomes a strange coincidence that chaos was discovered so early in the history of computers.

Ben13d20

I don't think the negative correlation between doctors' and patients' opinions of the drugs is surprising.

Rat poison would probably get a low score from both doctors and patients. However, nobody is being prescribed rat poison as an anti-depressant, so it doesn't appear in your data. Why is nobody being prescribed rat poison? Well, doctors don't prescribe it because they think it's a bad idea, and patients don't want it anyway.

In order for any drug to appear in your dataset, somebody has to think it is good. So every drug should have net approval from at least one of the two groups (doctors or patients). Given that selection effect, a negative correlation is not surprising.
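A quick simulation of this selection effect (a toy model with made-up numbers; the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: each candidate drug gets an independent doctor score and patient
# score; positive means net approval. By construction there is no true correlation.
doctor = rng.normal(size=100_000)
patient = rng.normal(size=100_000)

# Selection: a drug only shows up in the dataset if at least one group approves.
in_dataset = (doctor > 0) | (patient > 0)

print(np.corrcoef(doctor, patient)[0, 1])                          # ~0 before selection
print(np.corrcoef(doctor[in_dataset], patient[in_dataset])[0, 1])  # clearly negative after
```

Conditioning on "at least one group approves" carves the bad-bad corner out of the scatter plot, which is enough to produce a negative correlation on its own (the usual collider / Berkson's-paradox story).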

Answer by BenApr 19, 202452

I think you are slightly muddling your phrases.

You are richer if you can afford more goods and better goods. But not all goods will necessarily change price in the same direction. It's entirely possible that you can become richer, but that food prices grow faster than your new income. (For example, imagine that your income doubles, that food prices also double, but prices of other things drop so that inflation remains zero. You can afford more non-food stuff, and the same amount of food, so you are richer overall. This could happen even if food prices had gone up faster than your income.)
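To put toy numbers on that parenthetical example (illustrative assumptions only: a basket that is 25% food by initial spending, income and food prices doubling, other prices falling enough to keep the basket's cost flat):

```python
income = {"before": 100, "after": 200}          # income doubles
food_price = {"before": 1.0, "after": 2.0}      # food prices double
other_price = {"before": 1.0, "after": 2 / 3}   # other prices fall by a third

# A fixed basket: 25 units of food, 75 units of other stuff.
basket_cost = {t: 25 * food_price[t] + 75 * other_price[t] for t in ("before", "after")}
print(basket_cost)  # both ~100 -> zero overall inflation

# Affordability, in units of each good:
print(income["before"] / food_price["before"], income["after"] / food_price["after"])    # 100.0 vs 100.0
print(income["before"] / other_price["before"], income["after"] / other_price["after"])  # 100.0 vs ~300
```

Same amount of food, three times as much of everything else: richer overall, even though food got no cheaper relative to income. Nudge the food price slightly higher (with other prices a little lower) and food outpaces income while the overall conclusion still holds.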

I think a (slightly cartoony) real-life example is servants. Rich people today are richer than rich people in Victorian times, but fewer rich people today (in developed countries) can afford to have servants. This is because the price of hiring servants has gone up faster than the incomes of these rich people. So it is possible for people to get richer overall while, at the same time, some specific goods or services become less accessible.

Maybe a more obvious example is rent (or housing in general). A modern computer programmer in Silicon Valley could well be paying a larger percentage of their income on housing than a medieval peasant. But they can afford more of other things than that peasant could.

Ben15d1511

I think it depends on the meaning attached to the word "love". There are two possibilities:

I "love" this, because it brings me benefits. (it is instrumental in increasing my utility function, like chocolate ice cream)

I "love" this, in that I want it to benefit. (Its happiness appears as a parameter in my utility function)

You can have a partner or family member who means one, the other, or both to you. The striking dementia example from Odd Anon is a case where the dementia makes it so the person's company no longer makes you happy, but you may still be invested in them being happy.

The first one is obviously never going to be unconditional. The second one seems like it could be unconditional in some cases, in that a parent or spouse really wants their child or partner to be happy even if that child or partner is a complete villain. It's not even necessary that they value the child/partner over everything else, only that they maintain a strong-ish preference for them being happy over not being happy, all else being equal.
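A rough way to formalise the two senses (my own notation, nothing standard): let $U_{\text{me}}$ be my utility and $h_{\text{them}}$ the other person's happiness. Then roughly

$$\text{(1)}\quad U_{\text{me}} = u(\text{my experiences of them}) \qquad \text{vs.} \qquad \text{(2)}\quad U_{\text{me}} = u(\text{my experiences}) + w\, h_{\text{them}},\quad w > 0.$$

In (1) the love is instrumental and lapses as soon as the experiences stop being good; in (2) the $w\, h_{\text{them}}$ term survives dementia, villainy, and so on, which is the sense in which it can look unconditional.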

Ben17d20

Imagine you have a machine that flips a classical coin and then prepares either one wavefunction or another based on the coin toss. Your ordinary ignorance of the coin toss and the quantum stuff in the wavefunction can be rolled together into an object called a density matrix.
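A minimal numerical sketch of that construction (the two states and the coin bias here are arbitrary choices of mine):

```python
import numpy as np

# Two pure states the machine might prepare, depending on the coin toss.
psi_heads = np.array([1, 0], dtype=complex)                # |0>
psi_tails = np.array([1, 1], dtype=complex) / np.sqrt(2)   # (|0> + |1>)/sqrt(2)

p_heads = 0.5  # ordinary classical ignorance about the coin

# The density matrix rolls the classical and quantum uncertainty into one object.
rho = (p_heads * np.outer(psi_heads, psi_heads.conj())
       + (1 - p_heads) * np.outer(psi_tails, psi_tails.conj()))

print(np.trace(rho).real)        # 1.0  (probabilities still add up)
print(np.trace(rho @ rho).real)  # 0.75 (< 1, i.e. a mixed rather than pure state)
```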

There is a one-to-one mapping between density matrices and Wigner functions. So, in fact, there are zero redundant parameters when using Wigner functions. In this sense they do one better than wavefunctions, where the global phase of the universe is a redundant variable. (Density matrices also don't have a global phase.)

That is not to say there are no issues at all with assuming that Wigner functions are ontologically fundamental. For one, while Wigner functions work great for continuous variables (e.g. position, momentum), Wigner functions for discrete variables (e.g. qubits, or spin) are a mess. The normal approach can only deal with discrete systems whose dimension is a prime number (e.g. a particle with 3 possible spin states is fine, but 6 is not). If the number of dimensions is not prime, weird extra tricks are needed.

A second issue is that the Wigner function, being equivalent to a density matrix, combines both the quantum stuff and the ignorance of the observer into one object. But the ignorance of the observer would have to be stripped out if we were trying to raise it to something ontologically fundamental, which would require some change to the formalism.

Another issue with "ontologising" the Wigner function is that you need some kind of idea of what those negatives "really mean". I spent some time thinking about "If the many worlds interpretation comes from ontologising the wavefunction, what comes from doing that to the Wigner function?" a few years ago. I never got anywhere.

Ben17d60

Something you and the OP might find interesting: there is an object that is basically equivalent to a wavefunction, but represented in different mathematics, called the Wigner function. It behaves almost exactly like a classical probability distribution; for example, it integrates up to 1, and Bayes' rule updates it when you measure stuff. However, in order for it to "do quantum physics" it needs the ability to have small negative patches. So quantum physics can be modelled as a random stochastic process, if negative probabilities are allowed. (Incidentally, this is often used as a test of "quantumness": do I need negative probabilities to model it with local stochastic stuff? If yes, then it is quantum.)

If you are interested in a sketch of the maths: take W to be a completely normal probability distribution describing what you know about some isolated, classical, 1D system, and take H to be the classical Hamiltonian (i.e. just a function giving the system's energy). Then the correct way of evolving your probability distribution is:

$$\frac{\partial W}{\partial t} = H\left(\frac{\overleftarrow{\partial}}{\partial x}\frac{\overrightarrow{\partial}}{\partial p} - \frac{\overleftarrow{\partial}}{\partial p}\frac{\overrightarrow{\partial}}{\partial x}\right)W$$

where the arrows on the derivatives have the obvious effect of firing them either at H or at W. The first pair of derivatives in the bracket is Newton's second law (the rate of change of the energy H with respect to x turns potentials into forces, and the derivative with respect to momentum acting on W then changes the momentum in proportion to the force); the second pair is the definition of momentum (position changes are proportional to momentum).

Instead of going to operators and wavefunctions in Hilbert space, it is possible to do quantum physics by replacing the previous equation with:

$$\frac{\partial W}{\partial t} = \frac{2}{\hbar}\, H \sin\!\left(\frac{\hbar}{2}\left(\frac{\overleftarrow{\partial}}{\partial x}\frac{\overrightarrow{\partial}}{\partial p} - \frac{\overleftarrow{\partial}}{\partial p}\frac{\overrightarrow{\partial}}{\partial x}\right)\right)W$$

where sin is understood through its Taylor series, so the first term (after the factors of ħ/2 cancel) is the same as the first term for classical physics. The higher-order terms (where the ħ's do not fully cancel) can result in W becoming negative in places, even if it was initially all-positive. This means that W is no longer exactly like a probability distribution, but is some similar-but-different animal. Just to mess with us, the negative patches never get big enough or deep enough for any measurement we can make (limited by the uncertainty principle) to have a negative probability of any observable outcome. H is still just the normal energy function here.
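To see the negative patches concretely, here is a sketch using the standard closed form for the Wigner function of the first excited harmonic-oscillator state (a textbook formula, not something from the discussion above), in units where ħ = m = ω = 1:

```python
import numpy as np

def wigner_first_excited(x, p):
    """W(x, p) for the n = 1 harmonic-oscillator state, with hbar = m = omega = 1."""
    r2 = x**2 + p**2
    return (2 * r2 - 1) * np.exp(-r2) / np.pi

x = np.linspace(-5, 5, 401)
p = np.linspace(-5, 5, 401)
X, P = np.meshgrid(x, p)
W = wigner_first_excited(X, P)

dx, dp = x[1] - x[0], p[1] - p[0]
print(W.sum() * dx * dp)  # ~1.0: still normalised like a probability distribution
print(W.min())            # ~ -1/pi: a genuinely negative patch around the origin
```

The dip around the origin is real, but it is small and narrow enough that any smearing by an actual measurement (on the scale set by the uncertainty principle) washes it out to a non-negative probability.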

(Wikipedia is terrible for this topic. Way too much maths stuff for my taste: https://en.wikipedia.org/wiki/Moyal_bracket)

Also, the OP is largely correct when they say "destructive interference is the only issue". However, in the language of probability distributions, dealing with that involves the negative probabilities above. And once they go negative they are not proper probabilities any more, but some new creature. This, for example, stops us from thinking of them as just our ignorance (although they certainly include our ignorance).
