• 0 Posts
  • 10 Comments
Joined 2 months ago
Cake day: July 7th, 2024

  • Schrödinger was not “rejecting” quantum mechanics, he was rejecting people treating things described in a superposition of states as literally existing in “two places at once.” And Schrödinger’s argument still holds up perfectly. What you are doing is equating a very dubious philosophical take on quantum mechanics with quantum mechanics itself, as if anyone who does not adhere to this dubious philosophical take is “denying quantum mechanics.” But this was not what Schrödinger was doing at all.

    What you say here is a popular opinion, but it just doesn’t make any sense if you apply any scrutiny to it, which is what Schrödinger was trying to show. Quantum mechanics is a statistical theory where probability amplitudes are complex-valued, so things can have a -100% chance of occurring, or even a 100i% chance of occurring. This gives rise to interference effects which are unique to quantum mechanics. You interpret what these amplitudes mean in physical reality based on how far they are from zero (the further from zero, the more probable the outcome), but the negative signs allow things to cancel out in ways that would not occur in normal probability theory. These interference effects are the hallmark of quantum mechanics.
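
    To make the interference point concrete, here is a toy sketch in Python (my own illustration, not anything from the original debate): a Hadamard-style transformation mixes two amplitudes 50/50, and applying it twice shows the minus sign cancelling one outcome entirely, something ordinary non-negative probabilities could never do.

```python
import math

s = 1 / math.sqrt(2)

# Hadamard gate: sends |0> -> (|0> + |1>)/sqrt(2) and |1> -> (|0> - |1>)/sqrt(2)
def hadamard(amp0, amp1):
    return s * (amp0 + amp1), s * (amp0 - amp1)

# Start in |0>, apply H once: a 50/50 superposition.
a0, a1 = hadamard(1, 0)
print(round(abs(a0) ** 2, 10), round(abs(a1) ** 2, 10))  # 0.5 0.5

# Apply H again: the minus sign makes the |1> contributions cancel.
b0, b1 = hadamard(a0, a1)
print(round(abs(b0) ** 2, 10), round(abs(b1) ** 2, 10))  # 1.0 0.0
```

    If the 0.5s were classical probabilities, mixing a second time would leave you at 50/50; it is only because the amplitudes carry signs that the second step returns certainty.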

    Because quantum probabilities have this difference, some people have wondered if maybe they are not probabilities at all but describe some sort of physical entity. If you believe this, then when you describe a particle as having a 50% probability of being here and a 50% probability of being there, then this is not just a statistical prediction but there must be some sort of “smeared out” entity that is both here and there simultaneously. Schrödinger showed that believing this leads to nonsense as you could trivially set up a chain reaction that scales up the effect of a single particle in a superposition of states to eventually affect a big system, forcing you to describe the big system, like a cat, in a superposition of states. If you believe particles really are “smeared out” here and there simultaneously, then you have to believe cats can be both “smeared out” here and there simultaneously.

    Ironically, it was Schrödinger himself that spawned this way of thinking. Quantum mechanics was originally formulated without superposition in what is known as matrix mechanics. Matrix mechanics is complete, meaning, it fully makes all the same predictions as traditional quantum mechanics. It is a mathematically equivalent theory. Yet, what is different about it is that it does not include any sort of continuous evolution of a quantum state. It only describes discrete observables and how they change when they undergo discrete interactions.

    Schrödinger did not like this on philosophical grounds due to the lack of continuity. There were discrete “gaps” between interactions. He criticized it saying that “I do not believe that the electron hops about like a flea” and came up with his famous wave equation as a replacement. This wave equation describes a list of probability amplitudes evolving like a wave in between interactions, and makes the same predictions as matrix mechanics. People then use the wave equation to argue that the particle literally becomes smeared out like a wave in between interactions.

    However, Schrödinger later abandoned this point of view because it leads to nonsense. He pointed out in one of his books that while his wave equation gets rid of the gaps in between interactions, it introduces a new gap between the wave and the particle: the moment you measure the wave, it randomly “jumps” into being a particle, which is sometimes called the “collapse of the wave function.” This made even less sense because suddenly there is a special role for measurement. Take the cat example. Why doesn’t the cat’s observation of this wave cause it to “collapse,” but the person’s observation does? There is no special role for “measurement” in quantum mechanics, so it is unclear how to even answer this within the framework of quantum mechanics.

    Schrödinger was thus arguing to go back to the position of treating quantum mechanics as a theory of discrete interactions. There are just “gaps” between interactions we cannot fill. The probability distribution does not represent a literal physical entity, it is just a predictive tool, a list of probabilities assigned to predict the outcome of an experiment. If we say a particle has a 50% chance of being here or a 50% chance of being there, it is just a prediction of where it will be if we were to measure it and shouldn’t be interpreted as the particle being literally smeared out between here and there at the same time.

    There is no reason you have to actually believe particles can be smeared out between here and there at the same time. This is a philosophical interpretation which, if you believe it, comes with an enormous number of problems, such as the one Schrödinger pointed out, which ultimately gets to the heart of the measurement problem, but there are even larger problems. Wigner also pointed out a paradox whereby two observers would assign different probability distributions to the same system. If these are merely probabilities, this isn’t a problem. If I flip a coin and look at the outcome and it’s heads, I would say it has a 100% chance of being heads because I saw it as heads, but if I asked you and covered it up so you did not see it, you would assign a 50% probability of it being heads or tails. If you believe the wave function represents a physical entity, however, then you could set up something similar in quantum mechanics whereby two different observers would describe two different waves, and so the physical shape of the wave would have to differ based on the observer.
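
    The coin version of this can be written out in a few lines of Python (a trivial toy example of my own, but it makes the epistemic reading explicit): the two observers hold different distributions over the same coin, and nothing paradoxical follows, because the distributions encode information, not physical shape.

```python
import random

random.seed(1)
coin = random.choice(["heads", "tails"])  # the actual outcome

# Observer A looked at the coin: their distribution is sharp.
p_A = {"heads": 1.0 if coin == "heads" else 0.0,
       "tails": 1.0 if coin == "tails" else 0.0}

# Observer B never saw it: their distribution is still 50/50.
p_B = {"heads": 0.5, "tails": 0.5}

# Both distributions are normalized and both are "correct" --
# they simply reflect different states of knowledge.
print(p_A, p_B)
```

    The tension only appears if you insist each observer’s distribution is a distinct physical wave out in the world.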

    There are a lot more problems as well. A probability distribution scales up in terms of its dimensions exponentially. With a single bit, there are two possible outcomes, 0 and 1. With two bits, there’s four possible outcomes, 00, 01, 10, and 11. With three bits, eight outcomes. With four bits, sixteen outcomes. If we assign a probability amplitude to each possible outcome, then the number of degrees of freedom grows exponentially the more bits we have under consideration.
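
    The counting here is easy to sketch (a toy enumeration of my own, nothing quantum-specific): with n bits there are 2^n joint outcomes, and thus 2^n amplitudes to keep track of.

```python
from itertools import product

# Enumerate every joint outcome of n bits; one amplitude per outcome.
for n in range(1, 5):
    outcomes = ["".join(bits) for bits in product("01", repeat=n)]
    print(n, "bits ->", len(outcomes), "amplitudes")
# 1 -> 2, 2 -> 4, 3 -> 8, 4 -> 16: the list doubles with every bit added.
```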

    This is also true in quantum mechanics for the wave function, since it is again basically a list of probability amplitudes. If we treat the wave function as representing a physical wave, then this wave would not exist in our four-dimensional spacetime, but instead in an infinite-dimensional space known as a Hilbert space. If you want to believe the universe is actually physically made up of infinite-dimensional waves, have at ya. But personally, I find it much easier to just treat a probability distribution as, well, a probability distribution.


  • It is weird that you start by criticizing the idea that our physical theories are descriptions of reality, then end by criticizing the Copenhagen interpretation, since this is the Copenhagen interpretation, which says that physics is not about describing nature but about describing what we can say about nature. It doesn’t make claims about underlying ontological reality; it specifically says we cannot make those claims from physics, and thus treats the maths in a more utilitarian fashion.

    The only interpretation of quantum mechanics that actually tries to interpret it at face value as a theory of the natural world is relational quantum mechanics, which isn’t that popular, as most people dislike the notion of reality being relative all the way down. Almost all philosophers in academia define objective reality in terms of something being absolute and point-of-view independent, so most academics struggle to even comprehend what it means to say that reality is relative all the way down, which is why interpreting quantum mechanics at face value as a theory of nature is actually very unpopular.

    All other interpretations either: (1) treat quantum mechanics as incomplete and therefore something needs to be added to it in order to complete it, such as hidden variables in the case of pilot wave theory or superdeterminism, or a universal psi with some underlying mathematics from which to derive the Born rule in the Many Worlds Interpretation, or (2) avoid saying anything about physical reality at all, such as Copenhagen or QBism.

    Since you talk about “free will,” I suppose you are talking about superdeterminism? Superdeterminism works by pointing out that at the Big Bang, everything was localized to a single place, and thus locally causally connected, so all apparent nonlocality could be explained if the correlations between things were all established at the Big Bang. The problem with this point of view, however, is that it only works if you know the initial configuration of all particles in the universe and have a supercomputer powerful enough to trace them forward to the modern day.

    Without it, you cannot actually predict any of these correlations ahead of time. You have to just assume that the particles “know” how to correlate to one another at a distance even though you cannot account for how this happens. Mathematically, this would be the same as a nonlocal hidden variable theory. While you might have a nice underlying philosophical story to go along with it as to how it isn’t truly nonlocal, the maths would still run into contradictions with special relativity. You would find it difficult to construe the maths in such a way that the hidden variables would be Lorentz invariant.

    Superdeterministic models thus struggle to ever get off the ground. They all exist only as toy models. None of them can reproduce all the predictions of quantum field theory, which requires more than just accounting for quantum mechanics; it must do so in a way that is also compatible with special relativity.


  • It is only continuous because it is random, so prior to making a measurement, you describe it in terms of a probability distribution called the state vector. The bits 0 and 1 are discrete, but if I said it was random and asked you to describe it, you would assign it a probability between 0 and 1, and thus it suddenly becomes continuous. (Although, in quantum mechanics, probability amplitudes are complex-valued.) The continuous nature of it is really something epistemic and not ontological. We only observe qubits as either 0 or 1, with discrete values, never anything in between the two.
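
    As a rough sketch of what I mean (a toy simulation of my own, using the Born rule to turn amplitudes into measurement statistics): the description is continuous, but every individual observation is a discrete 0 or 1.

```python
import math
import random

random.seed(0)

# A qubit state: two complex amplitudes -- a continuous description.
a0 = complex(1 / math.sqrt(3), 0)
a1 = complex(0, math.sqrt(2 / 3))
assert abs(abs(a0) ** 2 + abs(a1) ** 2 - 1) < 1e-9  # normalized

# Measurement outcomes are always the discrete values 0 or 1,
# drawn with Born-rule probabilities |a0|^2 and |a1|^2.
samples = random.choices([0, 1], weights=[abs(a0) ** 2, abs(a1) ** 2], k=10000)
print(set(samples))                      # only 0s and 1s, nothing in between
print(samples.count(1) / len(samples))   # roughly 2/3
```

    The continuum lives entirely in the amplitudes we use to predict; the record of outcomes itself never contains anything but discrete values.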


  • Quantum mechanics explains a range of phenomena that cannot be understood using the intuitions formed by everyday experience. Recall the Schrödinger’s cat thought experiment, in which a cat exists in a superposition of states, both dead and alive. In our daily lives there seems to be no such uncertainty—a cat is either dead or alive. But the equations of quantum mechanics tell us that at any moment the world is composed of many such coexisting states, a tension that has long troubled physicists.

    No, this is a specific philosophical interpretation of quantum mechanics. It requires treating the wave function as a literal autonomous entity that actually describes the object. This is a philosophical choice and is not demanded by the theory itself.

    The idea that two fundamental scientific mysteries—the origin of consciousness and the collapse of what is called the wave function in quantum mechanics—are related, triggered enormous excitement.

    The “origin of consciousness” is not a “scientific mystery.” How the brain works is indeed a scientific mystery, but “consciousness” is just something philosophers cooked up: the claim that everything we perceive is an illusion (called “consciousness”) created by the mammalian brain, as opposed to some “true reality” that is entirely invisible, lies beyond the veil of this illusion, and has no possibility of ever being observed.

    People like David Chalmers rightfully pointed out that if you believe this, then it seems like a mystery as to how this invisible “true reality” can “give rise to” the reality we actually experience and are immersed in every day. But these philosophers have simply failed to provide a compelling argument as to why the reality we perceive is an illusion created by the brain in the first place.

    Chalmers doesn’t even bother to justify it; he just cites Thomas Nagel, who says that experience is “conscious” and “subjective” because true reality is absolute (point-of-view independent) while the reality we experience is relative (point-of-view dependent), and therefore the latter cannot be objective reality as it exists but must be a product of the mammalian brain. Yet, if the modern sciences have shown us anything, it is that reality is absolutely not absolute but is relative to its core.

    Penrose’s argument is even more bizarre: he claims that because we can believe things that cannot be mathematically proven, our brains can do things which are not computable, and thus there must be some relationship between the brain and the outcomes of measurements in quantum mechanics, since no computation can predict those outcomes beforehand. Yet, it is just a bizarre argument. Humans can believe things that can’t be proven because humans operate on confidence levels. If you see enough examples to be reasonably confident the next will follow the same pattern, you can believe it. This is just called induction, and nothing is preventing you from putting it into a computer.

    According to Penrose, when this system collapses into either 0 or 1, a flicker of conscious experience is created, described by a single classical bit.

    Penrose, like most philosophers, never convincingly justifies the claim that experience is “conscious.”

    However, per Penrose’s proposal, qubits participating in an entangled state share a conscious experience. When one of them assumes a definite state, we could use this to establish a communication channel capable of transmitting information faster than the speed of light, a violation of special relativity.

    Here he completely goes off the rails and proposes something that goes against the scientific consensus for no clear reason. Why does his “theory” even need faster-than-light communication? How does proposing superluminal signaling help explain “consciousness”? All it does is make the theory trivially false since it cannot reproduce the predictions of experiments.

    In our view, the entanglement of hundreds of qubits, if not thousands or more, is essential to adequately describe the phenomenal richness of any one subjective experience: the colors, motions, textures, smells, sounds, bodily sensations, emotions, thoughts, shards of memories and so on that constitute the feeling of life itself.

    Now the authors themselves are claiming experience is “subjective” yet do not justify it; like all sophists on this topic, they just begin from the premise that we do not perceive reality as it is but rather some subjective illusion, and rarely try to even justify it. That aside, they are also abusing terminology. Colors, motions, textures, smells, etc., are not experiences but abstract categories. We can talk about the experience of the color red, but we can also talk about the experience of a rainbow, or an amusement park. Are amusement parks “subjective experiences”? No, an amusement park is an abstract category.

    Abstract categories are normative constructs used to identify something within an experience, but they are not experiences themselves. You have an experience, and then you interpret that experience to be something. This process of interpretation and identification is not the same as the experience itself. Reality just is what it is. It is not blue or red, it is not a rainbow or an amusement park, it just is. These are socially constructed labels we apply to it.

    Sophists love to demarcate the objects of “qualia,” like red or green or whatever, as somehow “special” over any other category of objects, such as trees, rocks, rainbows, amusement parks, atoms, Higgs bosons, etc. Yet, they can never tell you why. They just insist they are special… somehow. All abstract categories are socially constructed norms used to identify aspects of reality. They are all shared concepts precisely because they are socially constructed: we are all taught to identify them in the same way. We are all shown something red and told “this is red.” Two people may be physically different, and thus this “red” may have different impacts on them, but no matter how different it is, they both learn to associate their real experience with the same word, and thus it becomes shared.

    This is true for everything. Red, dogs, trees, cats, atoms, etc. There is no demarcation between them.

    In an article published in the open-access journal Entropy, we and our colleagues turned the Penrose hypothesis on its head, suggesting that an experience is created whenever a system goes into a quantum superposition rather than when it collapses. According to our proposal, any system entering a state with one or more entangled superimposed qubits will experience a moment of consciousness.

    This is what passes for “science” these days. Metaphysical realism has really poisoned people’s minds.

    The definitiveness of any conscious experience naturally arises within the many-worlds interpretation of quantum mechanics.

    Another piece of sophistry that originates from some physicists simply disliking the Born rule, declaring it mathematically ugly, so they try to invent some underlying story from which it can be derived that would be more mathematically beautiful. However, this underlying story is not derived from anything we can observe, so there is no possible way to agree upon what it even is. There are dozens of proposals and no way to choose between them. There simply is not “the” many-worlds interpretation. There are many many-worlds interpretations.

    To make these esoteric ideas concrete, we propose three experiments that would increasingly shape our thinking on these matters.

    All the experiments proposed deal with observing the behavior of living organisms, which is irrelevant to the topic at hand.


  • Reading books on natural philosophy. By that I mean, not the mathematics of the physics itself, but what the mathematics actually tells us about the natural world, how to interpret it and think about it on a more philosophical level. Not a topic I really talk to many people about irl, because most people don’t even know what the philosophical problems around this topic are. I mean, I’d need a whole whiteboard just to walk someone through Bell’s theorem to even explain why it is interesting in the first place. There is too much of a barrier to entry for casual conversation.

    You would think, since natural philosophy involves physics, that it would not be niche, because there are a lot of physicists, but most don’t care about the topic either. If you can plug in the numbers and get the right predictions, then surely that’s sufficient, right? Who cares about what the mathematics actually means? It’s a fair mindset to have, perfectly understandable and valid, but not part of my niche interests, so I just read tons and tons of books and papers regarding a topic which hardly anyone cares about. It is very interesting to read, say, the Einstein-Bohr debates, or Schrödinger trying to salvage continuity by viewing a loss of continuity as a breakdown in the classical notion of causality, or some of the contemporary discussions on the subject, such as Carlo Rovelli’s relational quantum mechanics or Francois-Igor Pris’ contextual realist interpretation. Things like that.

    It doesn’t even seem to be that popular of a topic among philosophers, because most don’t want to take the time to learn the math behind something like Bell’s theorem (it’s honestly not that hard, just a bit of linear algebra). So as a topic it’s pretty niche but I have a weird autistic obsession over it for some reason. Reading books and papers on these debates contributes nothing at all practically beneficial to my life and there isn’t a single person I know outside of online contacts who even knows wtf I’m talking about but I still find it fascinating for some reason.
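
    Since I said the math behind Bell’s theorem is just a bit of linear algebra, here is roughly what I mean, sketched in plain Python (my own toy setup: the singlet state with spin measurements in the x-z plane, evaluating the CHSH combination at the angles that maximize the quantum value):

```python
import math

# Pauli matrices as nested lists
sx = [[0, 1], [1, 0]]
sz = [[1, 0], [0, -1]]

def spin(theta):
    """Spin measurement along angle theta in the x-z plane."""
    return [[math.cos(theta) * sz[i][j] + math.sin(theta) * sx[i][j]
             for j in range(2)] for i in range(2)]

def kron(A, B):
    """Kronecker product of two 2x2 matrices -> 4x4."""
    return [[A[i // 2][j // 2] * B[i % 2][j % 2]
             for j in range(4)] for i in range(4)]

def expect(M, psi):
    """<psi| M |psi> for a 4x4 matrix and a length-4 state vector."""
    Mpsi = [sum(M[i][j] * psi[j] for j in range(4)) for i in range(4)]
    return sum(psi[i].conjugate() * Mpsi[i] for i in range(4)).real

# Singlet state (|01> - |10>)/sqrt(2)
s = 1 / math.sqrt(2)
psi = [0, s, -s, 0]

def E(a, b):
    """Correlation of spin measurements at angles a and b."""
    return expect(kron(spin(a), spin(b)), psi)

# CHSH combination; any local hidden variable theory is bounded by 2.
d = math.pi / 4  # 45 degrees
S = E(0, d) - E(0, 3 * d) + E(2 * d, d) + E(2 * d, 3 * d)
print(abs(S))  # ~2.828, i.e. 2*sqrt(2), above the classical bound of 2
```

    That’s the whole “mystery” computationally: four expectation values of 4x4 matrices, and the result exceeds the bound any local hidden variable account would have to respect.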


  • You should look into contextual realism. You might find it interesting. It is a philosophical school from the philosopher Jocelyn Benoist that basically argues that the best way to solve most of the major philosophical problems and paradoxes (i.e. mind-body problem) is to presume the natural world is context variant all the way down, i.e. there simply is no reality independent of specifying some sort of context under which it is described (kind of like a reference frame).

    The physicist Francois-Igor Pris points out that if you apply this thinking to quantum mechanics, the confusion around interpreting it entirely disappears: the wave function just becomes a way of accounting for the context under which an observer is observing a system, and value definiteness is just a context-variant property, i.e. two people occupying two different contexts will not always describe the system as having the same definite values; one may describe as indefinite what the other describes as definite.

    “Observation” is just an interaction, and by interacting with a system you are by definition changing your context, and thus you have to change your accounting for your context (i.e. the wave function) in order to make future predictions. Updating the wave function then just becomes like taring a scale, that is to say, it is like re-centering or “zeroing” your coordinate system, and isn’t “collapsing” anything physical. There is no observer-dependence in the sense that observers are somehow fundamental to nature, only that systems depend upon context and so naturally as an observer describing a system you have to take this into account.


  • Quantum mechanics is incompatible with general relativity; it is perfectly compatible with special relativity, however. I mean, that is literally what quantum field theory is: the unification of special relativity and quantum mechanics into a single framework. You can indeed integrate all aspects of relativity into quantum mechanics just fine except for gravity. It’s more that quantum mechanics is incompatible with gravity and less that it is incompatible with relativity, as all the other aspects we associate with relativity are still part of quantum field theory, like the passage of time being relative, relativity of simultaneity, length contraction, etc.




  • The traditional notion of cause and effect is not something all philosophers even agree upon. Many materialist philosophers have largely rejected the notion of simple cause-and-effect chains going back to a “first cause” since the 1800s, and that idea is still pretty popular in some eastern countries.

    For example, in China they teach “dialectical materialist” philosophy as part of the required “common core” in universities for any degree, and that philosophical school sees cause and effect as in a sense dependent upon point of view: describing an effect as having a particular cause is just a way of looking at things, and the same relationship under a different point of view may in fact reverse what is considered the cause and the effect, viewing the effect as the cause and vice-versa. Other points of view may even ascribe entirely different things as the cause.

    It has a very holistic view of the material world so there really is no single cause to any effect, so what you choose to identify as the cause is more of a label placed by an individual based on causes that are relevant to them and not necessarily because those are truly the only causes. In a more holistic view of nature, Laplacian-style determinism doesn’t even make sense because it implies nature is reducible down to separable causes which can all be isolated from the rest and their properties can then be fully accounted for, allowing one to predict the future with certainty.

    However, in a more holistic view of nature, it makes no sense to speak of the universe being reducible to separable causes as, again, what we label as causes are human constructs and the universe is not actually separable. In fact, the physicist Dmitry Blokhintsev wrote a paper in response to one by Albert Einstein, criticizing Einstein’s distaste for quantum mechanics as rooted in his adherence to the notion of separability, which stems from Newtonian and Kantian philosophy, something which dialectical materialists, as Blokhintsev self-identified, had rejected on philosophical grounds.

    He wrote this paper many years prior to the publication of Bell’s theorem, which showed that giving up on separability (and by extension absolute determinism) really is a necessity in quantum mechanics. Blokhintsev would then go on to write a whole book called The Philosophy of Quantum Mechanics, in which he argues that separability in nature is an illusion and that, under a more holistic picture, absolute determinism makes no sense, again purely on materialist grounds.

    The point I’m making is ultimately just that people ascribe a lot of properties to “materialists” or “naturalists” and then try to show quantum mechanics contradicts those properties, forgetting that these are large umbrella philosophies with many different sects, and that there have been materialist philosophers criticizing absolute determinism as even being a meaningful concept since at least the 1800s.