doclabidouille
B 137: QUANTUM AREAS AND VOLUMES
Le 24/11/2017
We’re back to B135 and we’re now going to talk about quantum areas and volumes. We start in dimension 1. We first need to specify what kind of physical dimension we are working in: if it’s classical dimension 1 then, indeed, there’s a single one; if it’s quantum dimension 1, there are 2 classical ones (remember we need to double everything). The “wavy” dimension, also referred to as the “P2-projection”, is assumed to be located “above” the “corpuscular” one.
What does it mean, “above”?
We’ll get a much better picture if we now embed ourselves in our much more familiar 3 dimensions of space. Classically, we feel we’re able to move anywhere in the three classical dimensions of space: length, width, height. Still classically, we can understand that we extend this to wavelengths when considering anisotropic waves: as those waves do not propagate the same way in all 3 directions, we can give them 3 independent wavelengths, one along each direction. The conceptual difficulty arises when we attempt to “glue” these 3 “wavy” dimensions to our 3 “corpuscular” ones, as we generally do not perceive these “extra 3” in everyday life: we can’t move along them, can we? So, a 6-dimensional world doesn’t really speak to us, does it? Even classically. And claiming it’s to be made equivalent to a quantum 3-dimensional one does not clear the situation up at all… :) So, by pure convention, we usually assume that these 3 wavy dimensions are located “above” our 3 familiar ones, which is not true: in reality, all 6 stand on an equal footing and we are only limited in our perceptions of the space around us (as usual). We cannot visualize non-solid waves, but we still can feel their effects, so that we remain conscious that waves exist, but we still cannot link them to anything “dimensional”. The picture we have of them is that they “undulate through classical 3-space”. But this is only good for classical waves. It does not represent the quantum reality. The quantum reality is 6-dimensional (space) or 8-dimensional (space-time).
This is one thing: additional dimensions. Then, we have the question of areas and volumes.
A classically-perceived plane is a 2-dimensional space. If we consider a square inside that plane, with side x(0), then its area s(0) = [x(0)]² will always be a non-negative quantity. Negative areas cannot exist in classical space geometry, where they would be interpreted as areas “smaller than a point”, which is an object of null size, and this would lead to an absurdity.
Things are different in a geometry like that of classical space-time or, now, in the quantum. A quantum plane is schematized as a 2-dimensional plane delimited by that “horizontal corpuscular axis” and that “vertical wavy axis”: they’re similar, but not of the same physical nature at all (as the time dimension in special relativity was similar to any of the 3 space dimensions, but not of the same nature at all). If x(ksi) is now the size of a quantum square inside our quantum plane, then its quantum area is to be calculated as:
(1) s(sigma) = [s(0) , sigma] = [x(ksi)]² = {[x(0)]² , 2ksi}
so that,
(2) s(0) = [x(0)]²
remains a non-negative quantity, as the “pure area” of our quantum square, while
(3) sigma = 2ksi
gives the quantum state our quantum area is found in when its side is found in the state ksi.
If an experimenter wants to measure the “corpuscular amount” of s(sigma), he/she’ll measure its P1-projection:
(4) s1(sigma) = [x(0)]²cos(2ksi) = [x1(ksi)]² – [x2(ksi)]²
If he/she wants to measure the “wavy amount”, he/she’ll measure the P2-projection:
(5) s2(sigma) = [x(0)]²sin(2ksi) = 2x1(ksi)x2(ksi)
According to the sector the quantum state sigma (that of the object we’re studying) is in, both projections will be positively-counted, zero or negatively-counted (see the short numerical sketch after the list below).
If 0 < sigma < pi/2 (sector I), 0 < ksi < pi/4 (45°), then both s1(sigma) and s2(sigma) will be measured positive.
If pi/2 < sigma < pi (sector II), pi/4 < ksi < pi/2, then s1(sigma) will be measured negative while s2(sigma) will remain positive.
If pi < sigma < 3pi/2 (sector III), pi/2 < ksi < 3pi/4, then both s1(sigma) and s2(sigma) will be measured negative.
And, if 3pi/2 < sigma < 2pi (sector IV), 3pi/4 < ksi < pi, then s1(sigma) will be measured positive while s2(sigma) will be negative.
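For readers who like to see the numbers, here is a minimal sketch of (1)-(5) and the sector rules above, in Python (the function name and the sample inputs are mine, purely illustrative):

```python
import math

def quantum_square_area(x0, ksi):
    """Projections of the quantum area s(sigma) = [x0², 2*ksi], eqs (1)-(5)."""
    sigma = 2.0 * ksi                     # eq. (3): quantum state of the area
    s1 = x0**2 * math.cos(sigma)          # eq. (4): "corpuscular amount"
    s2 = x0**2 * math.sin(sigma)          # eq. (5): "wavy amount"
    # cross-check against the entangled forms given in (4) and (5)
    x1, x2 = x0 * math.cos(ksi), x0 * math.sin(ksi)
    assert math.isclose(s1, x1**2 - x2**2) and math.isclose(s2, 2 * x1 * x2)
    return s1, s2

# ksi = 1 rad lies between pi/4 and pi/2, so sigma falls in sector II:
print(quantum_square_area(1.0, 1.0))   # s1 < 0, s2 > 0
```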
You’ll have noticed that, unlike “classical” multiplication, “quantum” multiplication entangles the corpuscular and the wavy projections of the quantum side x(ksi). Despite this, s1(sigma) remains the “corpuscular” projection of the quantum square and s2(sigma), the “wavy” one. The planar representation could therefore easily lead to confusion. The polar representation is much clearer, as it singles out no projection; it instead gives the amplitude and the quantum state.
Let’s start from ksi = 0, that’s sigma = 0. Then, s1(0) = [x(0)]² is obviously maximal and corresponds to the value given by classical geometry, while s2(0) = 0, confirming that, from a strictly classical viewpoint, the square is entirely “corpuscular”, since its “wavy side” is reduced to a point. As ksi increases, we go deeper inside the quantum plane, s1(sigma) decreases while s2(sigma) increases: our quantum area acquires more and more “wavy content” and loses more and more “substantial content”. When ksi reaches 45°, sigma = 90°, we stand on the P2-axis, s1(sigma) = 0 and s2(sigma) = [x(0)]² is now maximal: a classical P2-observer would come to the same conclusion as our previous P1-observer.
Let’s keep on increasing ksi. Then, sigma becomes greater than 90°, we change sector on the quantum plane, the “wavy content” of x(ksi) becomes greater than its “corpuscular” one, forcing s1(sigma) to drop below zero and turn negative. However, the physical context is very different from the one found in space-time relativity. There, the “absolute area” s² = c²t² - x² = c²t²(1 – vmoy²/c²), where vmoy = x/t stood for the absolute value of the mean velocity of a moving body, couldn’t turn negative without leaving the observation scope. This was because the body would then move faster than the signal it produces, always arriving before it. Now, physical bodies are observed through the signal they emit. If they arrive before it, they’re non-observable… Here, nothing of this happens. What happens instead is that we’re in a space with an open geometry (technically, “of hyperbolic type” – archetype: the horse saddle), like space-time, but without any specific restriction. In comparison, classical space had a closed geometry (“of elliptic type” – archetype: the rugby ball). Such a geometry allows areas only one sign, the positive one.
The same holds for s2(sigma). We can always transform it by noticing that:
2x1(ksi)x2(ksi) = ½ {[x1(ksi) + x2(ksi)]² – [x1(ksi) – x2(ksi)]²}
which exhibits the same structure as s1(sigma). And it keeps changing sign as quantum states go round the unit-radius circle. There’s no conceptual objection to finding negative areas in the quantum context, even as projections, because there’s no definite sign in either s1(sigma) or s2(sigma): it’s now only a question of which behavior predominates over the other in the side x(ksi) of the quantum square.
You can straightforwardly generalize this to quantum rectangles. Taking two quantum sides x(ksi) and y(psi), the area of the quantum rectangle will be:
(6) s(sigma) = x(ksi)y(psi) = [x(0)y(0) , ksi + psi]
Then, you examine sigma sector by sector: results will be the same. It’s just a bit more complicated because you now deal with two quantum states ksi and psi instead of a single one. You find more combinations for a given sigma, namely:
(7) sigma = ksi + psi
instead of (3). So, instead of finding a single value ksi = sigma/2 as for the square, you find a continuous infinity of possibilities psi = sigma – ksi for each value of sigma.
Quantum volumes proceed the same way. In place of (1), you find the quantum volume of the quantum cube:
(8) v(stigma) = [x(ksi)]³ = [v(0) , stigma]
(9) v(0) = [x(0)]³
(10) stigma = 3ksi
As x(0) is never negative, neither is v(0); but, according to the sector stigma is found in, the projections v1(stigma) and v2(stigma) will be positively or negatively counted, or even zero.
For a quantum parallelepiped:
(11) v(stigma) = x(ksi)y(psi)z(zeta) = [v(0) , stigma]
(12) v(0) = x(0)y(0)z(0)
(13) stigma = ksi + psi + zeta
which, for each given value of stigma, draws a plane, not in a “2D quantum state” space anymore, but in a “3D” one. That’s a double continuous infinity of possibilities for zeta = stigma – ksi – psi.
In comparison, (9) or (12) show you again that classical volumes can only be found positive or zero.
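In the same spirit, a small hedged sketch of (11)-(13) for the quantum parallelepiped (Python again; the inputs are arbitrary):

```python
import math

def quantum_parallelepiped(x0, y0, z0, ksi, psi, zeta):
    """Quantum volume per eqs (11)-(13): the pure volume v0 is never negative,
    but the projections v1, v2 take either sign depending on the sector of stigma."""
    v0 = x0 * y0 * z0             # eq. (12): pure volume
    stigma = ksi + psi + zeta     # eq. (13): quantum state of the volume
    v1 = v0 * math.cos(stigma)    # "corpuscular" projection
    v2 = v0 * math.sin(stigma)    # "wavy" projection
    return v0, stigma, v1, v2

print(quantum_parallelepiped(1.0, 2.0, 0.5, 0.8, 0.9, 1.0))  # stigma = 2.7: sector II
```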
B 136: ON A NEW MODEL FOR MIND
Le 19/11/2017
This bidouille, again, is aimed at a large public (or, at least, at the public who’s NOT AT LARGE… or not yet… :) ).
For what follows, I’m basing myself on what neurobiologists tell us. Let’s sum it up again.
Each neuron cell taken individually inside a highly sophisticated system such as the mammalian brain receives an average of 10,000 connections from other neurons, not necessarily close to it. Changeux claims this endows the soma of the cell with “a combinatorics of signals”. I disagree. Completely. Why? Because this addition of signals is then compared to a threshold, generating a “trigger effect”, and the output, at the end of the axon, will ultimately be binary (“0”: silent, “1”: active). You find exactly the same kind of dynamics inside inert media like silicon, silicon/manganese, etc. that are used in the computer industry; it’s known as the “transistor effect” and it leads to no arithmetic function at all in the device…
So, if you look for possible arithmetic functions inside the neuron, you may be disappointed…
Following this, Changeux points out, and this is very important, that, in most of the neurons composing the central nervous system (CNS – the brain), the synaptic cleft between two neurons is so small that only one packet of neurotransmitter can go through and, the crucial point is here: not systematically, even when the emitting neuron is active.
So, again, better not to rely on the neuron itself to forward information… Things are better for much larger synaptic clefts such as, he explains, the one between an axon of the motor system and a muscle, where some 300 packets of neurotransmitter can be released at the same time, guaranteeing 100% transmission.
Conclusion: the more packets released, the higher the chance of carrying information from an emitting neuron to a receiving one or, equivalently, the closer to 1 the “transmission coefficient T” of the cleft (to use an analogy with optics).
Unfortunately, this conclusion leaves the question of cerebral neurons wide open…
Basically, the isolated neuron can work 3 ways. It can be stimulated from the outside and, if that stimulation is higher than the trigger threshold value, the neuron responds. It can self-stimulate, thanks to its calcium channels. Or it can remain silent. But, whatever its reaction, the release of neurotransmitter is not 100% guaranteed in most situations.
Clearly, the neuron by itself (and I insist on this) cannot be counted on as a “serious enough partner” for signal propagation, still less considered an “arithmetic unit”.
Clearly, there must be another mechanism that improves signal transfer and “reinforces synapses”. Neurobiologists have now known the complete dynamics of the neuron, from the soma receptors down to the very ends of the axon, for 30-40 years, and they still stumble on how to link it with the production of “mental objects” (percepts, memory images and concepts). I pointed out many times that the way the cell works is by no means causal. So, two main hypotheses have been proposed in order to work around this “little inconvenience”. The first one is to model the functioning of the brain by giving inter-neuron transmission a “probability of occurrence” and arguing the machinery would be Bayesian (from Bayes, who established rules on probabilities for connected events). However, as I said, this would be equivalent to allowing “pieces of signals”, whereas a signal is transmitted as a whole or isn’t. One more difference between mathematics and physics. The second hypothesis was Edelman’s “neural groups”, where populations of interconnected neurons of various sizes would collectively respond to an excitation of any single member of the group, through a global mechanism of “consistent resonance”. Changeux wasn’t convinced, as data also showed that there isn’t any static organization of any kind inside the CNS; instead, everything is dynamical and configurations change all the time. Besides, Edelman agrees that a given instruction can be forwarded through different networks, as long as it’s forwarded and leads to the same result(s). Changeux sees this ability, that “plasticity” (or adaptation faculty) of the brain, as the result of a “jungle” of connections rather than of specified or dedicated units as in computers. Edelman too is strongly against comparing the animal brain to any Turing-Von Neumann sequential machine; they both say it doesn’t match observational data at all. The difference between them is that Edelman bases himself on the existence of “neural groups” to define his “noetic” machines and even build prototypes. The thing is: even the first 2 prototypes already reasoned closer to the animal brain than to a T-VN machine!
It’s therefore very hard to decide which way, which representation, is the best one and the closest to reality, as they all show their drawbacks but also their advantages…
I’m a very basic guy, probably one of the most basic you’ll find, so I always end up going back to the very basics.
And the basics are: we have two neurons and, between them, a certain type of neurotransmitter. One of the neurons is the emitter, the other one is the receiver.
Questions: which ones are the sources and which ones are the mediators?
Answers: sources are neuron cells, mediators are neurotransmitters.
Question: what could mind be made of, then? Neurons? No: neurons make the substrate.
Conclusion?
Mind would be made of a field of neurotransmitters.
Indeed, what makes the mental process? Is it the biological substrate, which produces the signal with no certainty and no causality at all? Or isn’t it rather the transmission of information from one unit of that substrate to another?
Everywhere in Nature, you have “supports” of information and transmission. Saying mind would be made of neurons would be equivalent to saying that electrons make the electro-magnetic interaction or that masses make the gravitational one, quarks the strong nuclear one, etc. It would be confusing the sources with the vehicles.
That mind is a biochemical process is now beyond all doubt. But, if we search for an understanding in the internal mechanics of the neuron, we just find nothing consistent enough to be able to build mental objects.
The vehicle of information in computers is the electric current. It’s made of moving electrons. Does the internal dynamics of transistors play any role in this? No: what’s relevant is what we have as inputs and what we get as an output, period. When we create programs, in order to run machines, we don’t care about what goes on inside transistors; we take input and output bits and we combine them to first make basic instructions, then instructions, then programs. We use the vehicles of information, not the substrate. The substrate is there to produce information. Now, mind is information and this kind of information is only chemical, molecular. Just as between any non-neural cells of a living organism: two living cells communicate by exchanging molecules.
Now, if we base ourselves on the “neural jungle”, there’s potentially no way to build consistent patterns. In order to do so, we need structure, consistency and stability. We need an organization, even if only an ephemeral one, changing with time. Stability becomes a necessity for memorization, especially long-term.
Well, in all physical systems, such properties are only accessible through non-linearities and feedbacks.
So, maybe we’d rather look at the feedbacks of the field of neurotransmitters onto the output of neurons, because only there is transmission occurring.
What would be the basic requirement for mental processes to perform?
That information be suitably forwarded from one neuron to the other.
According to what we saw, this requires an optimality criterion, namely, that the transmission factor T between two given neurons reaches 1. That’s 100% chance of transmission. If we reach it, that “path” is “secured”. If we want to change path, we favor another transmission factor, somewhere else, and decrease the previous one.
This is nothing but a regulating process and, as Changeux defined it, consciousness is that process which regulates mind.
So, what we get here is mind, now realized as a molecular field of neurotransmitters of various types, and a regulation process answering an optimality criterion, which helps dynamically structure mind and which we call consciousness. Patterns change with time, except for memory images, which remain stable much longer. Such “long-living” patterns correspond to fixed points in dynamical systems: the state they were in “in the last round” remains unchanged “in the new one”. And this, for a certain duration, which can last a lifetime.
Let’s sum it up once more.
A neuron produces a type of neurotransmitter with only a probability T. That packet of neurotransmitters is then received by another neuron (up to possible leaks): information is transferred. The neurotransmitter is then destroyed. During molecular transfer, a “quantum” of “mental information” is produced. This is local. Globally now, or “less locally”, there’s a set of such “quanta”, of various types, making up mind at a given time t. That structure, in turn, acts upon the synapses to reinforce their biological reactivity and, therefore, locally increase the transmission coefficient T. The next round, the same neuron will produce its neurotransmitters with a higher T. Again, mind will retro-act on its synapses until T reaches 1. However, that process is spatial: it concerns a synapse located at some point “x” of the brain. There still remains the possibility of a change in time. Changing neurons, we change network configurations (while neurons, of course, don’t move). Patterns can change shape.
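To make the loop concrete, here is a toy simulation of that reinforcement process; this is my own illustrative sketch, not the author’s model, and the release rule and reinforcement rate are arbitrary assumptions:

```python
import random

def reinforce_synapse(T=0.3, rate=0.2, rounds=50, seed=1):
    """Toy loop: each round, the emitting neuron releases a packet with
    probability T; every successful transmission feeds back on the synapse
    and nudges T towards the fixed point T = 1 (the 'secured path')."""
    random.seed(seed)
    for _ in range(rounds):
        if random.random() < T:          # stochastic release of one packet
            T += rate * (1.0 - T)        # feedback: reinforcement of the synapse
    return T

print(reinforce_synapse())   # T has drifted close to 1 after repeated successes
```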
What gave me this idea is, first, the arguments I exposed above and, second, the analogy I noticed with 20th-century so-called “semi-classical phenomenological models” of interacting particle physics. Typically, you have a source field and a field for the interaction. The source field is made of particles which produce “quanta” of interaction. What happens is that the source and the interaction it produces couple strongly. If you have a system of electric charges carried by electrons, for instance, and these electrons produce an electromagnetic field between them, this field, as long as it stays inside the system of charges (inside the electronic source field), then retro-acts onto these charges, modifying their dynamics. And so on, until an equilibrium is found. It can be mechanical or thermodynamical. When it cannot be found, we’re in a situation of chaos. And there too, there are very interesting patterns.
I think we’d rather explore this road than the one consisting in believing that the neuron cell, because of its axon, would serve as a “wire” to transmit information. Nothing consistent (I don’t even say “logical” or “rational”, I only say “consistent”) can come out of this. What may fool us is that the transistor, which is an inert object, has a deterministic functioning: according to its internal structure, the minerals used and the inputs, it will deliver “0” (blocked) or “1” (saturated). So, maybe the partisans of Bayesian logic (or any other fuzzy logic) think the neuron “transmits the nervous signal with a probability T”. It doesn’t seem to do so. It rather seems to get inputs and deliver an output in a non-deterministic way, because it’s a living cell… :)
B 135: BACK TO THE SOURCES OF THE QUANTUM
Le 06/11/2017
One might object that the unexpected result in B134 could always be re-established as normal by using the modulo-2pi cyclic property. However, the result would still remain unclear, as it would be equivalent to multiplying by 1… :(
Here’s an article that is made for the largest public possible. Specialists will find unavoidable repetitions in it, but non-technicians are anything but familiar with the highly sophisticated technical developments which led to the present 21st-century approach of quantum physics.
From the very first discoveries of atomic processes by the end of the 19th century to the most synthetic models of quantum theory proposed in the late 20th century, the fundamental idea that quantum processes were genuinely wavy, i.e. oscillating, drove all the developments for more than 100 years. It culminated in the early 1970s with “supersymmetric” models. These models were aimed at trying to unify matter and radiation at the level of “elementary” (i.e. non-composite) particles, but the principles upon which they were built weren’t at all restricted to the sub-nuclear level of description. I then decided to extend them, not only to much larger bodies with a much more complicated structure, but to everything, starting with that “wave-corpuscle” duality. This is nothing else but Schrödinger’s “doctrine”, which says that absolutely all physical systems in Nature are to be endowed with a natural quantum structure. Supersymmetry only showed that the Schrödinger representation of the world was actually equivalent to doubling everything the classical approach described earlier. So, I’m inventing nothing, introducing nothing “revolutionary”, I’m just following the masters.
Basically, all these supersymmetric models which, I’d like to insist on that point, didn’t come “out of theoreticians’ imagination” but were, instead, a direct consequence of an accumulation of observational evidence in particle accelerators all along the 20th century, all these models were built on the assumption that the deep physical reality is oscillating: everything in Nature naturally oscillates, down to space and time themselves, so that today’s question is no longer “what are the physical mechanisms that damp these oscillations at scales higher than the sub-nuclear one?”, but “why don’t we directly observe and feel these oscillations in our everyday life?”. Surely, consciousness has something to do with this and we’re sent back to that central idea in quantum theory of an “interaction between the human observer and his surrounding environment”. But this is not the subject of the present article.
The subject of the present article is to start from the conclusion that, in order to oscillate, all physical objects, events and phenomena in Nature need to be doubled. It just cannot work otherwise. If we refuse to double things, then we conflict with observational evidence: it’s as simple as that. We don’t do this because it “suits” us, but because experimental facts impose it on us. Science is anything but speculation; it’s, on the contrary, perpetual deduction. So, let’s explain how it works. We’ll have no choice but to do a little bit of elementary math, but everything will be explained step by step in detail.
Let’s begin with a practical example that will also set a bit of terminology.
Let x(0) designate a “pure distance” between two objects or between the observer we are and an object we want to observe. “Pure” means “absolute”, that is, “unsigned”. By convention, an unsigned quantity is always positive or zero. There’s nothing absurd at all in demanding this of x(0): don’t we usually measure 1 meter and not -1 meter?
Let’s now associate an angle ksi with x(0). ksi is a quantity that stands between 0 and 2pi radians (or 360°, that is, a complete turn around the unit-radius circle), so that, every time we add 2pi, we make a complete turn and we retrieve our original ksi (same aperture). Because of this cyclic property of angles, they can be given any value in the continuum; that value can always be brought back to the interval [0,2pi] “up to a certain discrete number of turns”.
We therefore start with the pair [x(0),ksi] and we call it a quantum distance in polar representation. We then call on trigonometric functions, which are built as continuous functions on the unit-radius circle, and we define the “first (or “horizontal”) projection”:
(1) x1(ksi) = x(0)cos(ksi) (cos = cosine)
and the “second (or “vertical”) projection”,
(2) x2(ksi) = x(0)sin(ksi) (sin = sine)
Both obviously depend on the angle ksi. In order to understand what may happen, we have to be very methodical. We call:
- ksi, the quantum state in which our quantum distance x(ksi) = [x(0),ksi] is found;
- x(ksi) = [x1(ksi),x2(ksi)], the very same quantum distance, but now in planar (or “Cartesian”) representation;
- x1(ksi), the “corpuscular-like” distance (or simply “distance”);
- and x2(ksi), the “wavy-like” distance or wavelength.
Why this terminology? When the experimenter is going to evaluate that quantum distance x(ksi), he/she’s going to run two (series of) experiments. The first one is aimed at revealing the corpuscular behavior of that distance. Namely, the experimenter wants to shed light on the “little hard balls” that would serve as “solid vehicles” of space. He/she wants to exhibit the “granular structure of space”. A measurement of x1(ksi) will give him/her this information. The second (series of) experiment(s) is aimed at revealing the wavy nature of space. This time, he/she sees space as a continuum or as a “signal” and x2(ksi) will give him/her this second piece of information. From these two complementary pieces of information, he/she’ll be able to deduce the quantum distance he/she is searching for:
(3) [x(0)]² = [x1(ksi)]² + [x2(ksi)]² (Pythagoras’ triangle)
will give the pure distance x(0), while
(4) ksi = Arctan[x2(ksi)/x1(ksi)] (Arctan = arc tangent)
will give the (main determination of) quantum state. You’ll notice x(0) no longer depends on ksi. This is because of the fundamental trigonometric relation cos² + sin² = 1.
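A minimal sketch of (1)-(4) in Python (the function names are placeholders of mine; atan2 is used instead of a bare arc tangent so the sector is not lost):

```python
import math

def planar(x0, ksi):
    """Planar (Cartesian) representation of the quantum distance [x0, ksi]."""
    x1 = x0 * math.cos(ksi)    # eq. (1): "corpuscular-like" projection
    x2 = x0 * math.sin(ksi)    # eq. (2): "wavy-like" projection
    return x1, x2

def polar(x1, x2):
    """Recover the polar representation from the two measured projections."""
    x0 = math.hypot(x1, x2)    # eq. (3): Pythagoras' triangle
    ksi = math.atan2(x2, x1)   # eq. (4), with the sector kept
    return x0, ksi

x1, x2 = planar(2.0, math.pi / 3)
print(polar(x1, x2))   # recovers (2.0, pi/3) up to rounding
```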
So, what our experimenter actually does when he/she makes his/her measurements is: he/she projects the observed quantum “entity” (here, a distance) onto a “corpuscular axis” and a “wavy axis”. If he/she works in more than 1 dimension, each of these axes becomes a space, if not a space-time (with the same number of physical dimensions).
Yet, we forgot something important. We forgot that our experimenter is actually a human being and, as such, behaves as if he/she were a physical entity of the first projection only. This is because we see ourselves as “mostly if not entirely substantial” and, as “substantial” rhymes with “corpuscular”, we quite naturally place ourselves in the “corpuscular projection”. However, this is not what quantum theory tells us. Quantum theory says we should take into account the existence of a “double”, whom we, as “corpuscular observers”, see as “wavy” and who stands in the second projection. But then, if we follow quantum theory and swap the roles, this “wavy” observer, that “double” of ours, considers him/herself in turn as “substantial” in his/her own space(-time), just as we do in ours, and now sees us as “wavy doubles”. There’s a necessary reciprocity in the way things are interpreted by both observers, because this is merely a question of perception, but the quantum reality isn’t this: the quantum reality says
There exists a single entity; it’s neither “substantial” nor “etheric”, it’s “quantum” and it represents a brand new form of existence with no familiar equivalent.
Here’s what quantum theory says. It only reminds us that performing experiments on the “corpuscular” or the “wavy” behavior of quantum objects is only aimed at reducing a reality we hardly grasp to more familiar behaviors, accessible to our perception.
There’s a widely used quantity to describe waves, called the “wave number” k, defined as the inverse of the wavelength multiplied by 2pi radians. These wave numbers are going to help us better visualize that complementarity between our two observers’ perceptions.
We take our “corpuscular-like P1-observer” first; that’s us in everyday life. As we saw, he/she perceives the distance x1(ksi) as “corpuscular” because it stands in the same space(-time) as he/she does. As he/she perceives x2(ksi) as a “wavelength”, he/she will associate it with a wave number, caution: in his/her space. That’s a k1(kappa): k1, referring to P1! So, he/she’ll write:
(5) k1(kappa) = 2pi/x2(ksi)
Our “wavy-like P2-observer” will react the same in his/her space(-time): he/she’ll now see x2(ksi) as “corpuscular” and x1(ksi) as “wavy”, therefore associating a wave number:
(6) k2(kappa) = 2pi/x1(ksi)
Indeed, the sine function can always be identified with a cosine and (2) can also be written:
(7) x2(ksi) = x(0)cos(ksi – pi/2)
which corresponds to a “corpuscular distance” delayed by a quarter of a turn on the unit-radius circle. Conversely, x1(ksi) can always be identified with a “wavy distance” advanced by a quarter of a turn:
(8) x1(ksi) = x(0)sin(ksi + pi/2)
and this 90° shift precisely corresponds to exchanging P1 and P2… so, you see the two are really complementary to one another and the distinction between “corpuscular” and “wavy” behavior is, in the quantum world, only a matter of perception…
Why introduce a different quantum state for wave numbers? Because:
(9) k(0) = 2pi {1/[x2(ksi)]² + 1/[x1(ksi)]²}^(1/2) = 2pi x(0)/|x1(ksi)x2(ksi)| = 4pi/[x(0)|sin(2ksi)|]
(10) kappa = -ksi
kappa is opposite to ksi.
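A short numerical check of (5), (6) and (9) (Python; x0 and ksi are arbitrary, chosen away from the values where a projection vanishes):

```python
import math

def wave_numbers(x0, ksi):
    """Wave numbers per eqs (5)-(6): each observer divides 2*pi by the
    projection he/she perceives as a wavelength; undefined where it vanishes."""
    x1 = x0 * math.cos(ksi)
    x2 = x0 * math.sin(ksi)
    k1 = 2 * math.pi / x2                 # eq. (5): P1-observer's wave number
    k2 = 2 * math.pi / x1                 # eq. (6): P2-observer's wave number
    k0 = math.hypot(k1, k2)               # amplitude, first form of eq. (9)
    # second form of eq. (9): 4*pi / [x0*|sin(2*ksi)|]
    assert math.isclose(k0, 4 * math.pi / (x0 * abs(math.sin(2 * ksi))))
    return k1, k2, k0

print(wave_numbers(1.0, math.pi / 3))
```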
Let’s now examine some particular values of ksi.
When ksi = 0, x1(0) = x(0) and x2(0) = 0: a P1-observer will perceive x(ksi) as “entirely corpuscular” and “ahead of him/her”. A P2-observer will perceive it as “entirely wavy”. Notice that x(0) is also the maximal distance both projections can reach, as sine and cosine functions stay between -1 and +1.
When ksi = pi/2 (90°), x1(pi/2) = 0 and x2(pi/2) = x(0): roles are permuted; a P1-observer will perceive x(ksi) as “entirely wavy”; a P2-observer, as “entirely corpuscular”.
When ksi = pi (180°), x1(pi) = -x(0) and x2(pi) = 0: same as for ksi = 0, except that x(ksi) is perceived behind these observers.
Finally, when ksi = 3pi/2 (270°), x1(3pi/2) = 0 and x2(3pi/2) = -x(0): same as ksi = pi/2, except for x(ksi) standing behind.
All other values of ksi are quantum, as they mix both projections.
We can proceed the same way for anything other than space. It applies to time, masses, etc.
Quantum states, offering additional degrees of freedom, open onto new physical dimensions.
Basically, we have four “sectors”:
- sector I, ksi is between 0 and pi/2, x1(ksi) and x2(ksi) are both positively-counted (both “ahead of observers”);
- sector II, ksi is between pi/2 and pi, x1(ksi) is negatively-counted (“behind”) while x2(ksi) is still positively-counted (“ahead”);
- sector III, ksi is between pi and 3pi/2, x1(ksi) and x2(ksi) are both negatively-counted (both “behind”);
- and sector IV, ksi is between 3pi/2 and 2pi, x1(ksi) is positively-counted (“ahead”) again while x2(ksi) is still negatively-counted (“behind”).
When applied to something like space, it brings nothing we aren’t familiar with. When applied to time, we first find that concept of a “corpuscular” time made of (still hypothetical) particles we could name “chronons” and that concept of a “wavy time” that still doesn’t speak much to us. Let t(tau) be quantum time. Then:
- in sector I, both t1(tau) and t2(tau) point towards the “future”;
- in sector II, t1(tau) points towards the “past”, while t2(tau) still points toward the “future”;
- in sector III, both t1(tau) and t2(tau) point towards the “past”;
- in sector IV, t1(tau) points towards the “future” again, while t2(tau) still points toward the “past”.
So, we have this “alternation” between “future” and “past”, while “present” corresponds to tau = pi/2 or 3pi/2 from a P1-observer’s perspective [t1(pi/2) = t1(3pi/2) = 0] and to tau = 0 or pi from a P2-observer’s perspective [t2(0) = t2(pi) = 0]. But, as always, these are mere perceptions: in the quantum, there is no such thing as “past”, “present” or “future”. How could there be, if one is free to go “back to the future”?... :)
When applied to mass, things turn really weird for the experimenter. Let m(mu) be a quantum mass. Then:
- sector I predicts m1(mu) and m2(mu) will both be positively-counted. A 20th-century experimenter would have interpreted this as a “particle of matter”;
- sector II predicts m1(mu) negatively-counted while m2(mu) remains positively-counted;
- sector III predicts m1(mu) and m2(mu) will both be negatively-counted. Our 20th-century experimenter would have interpreted this as a “particle of antimatter”;
- finally, sector IV predicts m1(mu) positively-counted again while m2(mu) remains negatively-counted.
Our 20th-century observer, whether he/she “belongs to” P1 or P2, would for sure have been completely disoriented by sectors II and IV, because they just don’t match his/her beliefs. For him/her, a quantum particle had either negative or positive energy at rest (which is equivalent to mass through the Einstein relation E = mc²) but, whatever its sign, it would have concerned both the “corpuscular” and the “wavy” components. Now, to my knowledge, there are no selection rules yet to assert that both projections should have the same sign… and, anyway, this is again a false problem, because the mass of a quantum particle at rest is the “pure mass at rest” m(0), which is always a non-negative quantity. So, the quantum principle applied to mass now tells us nothing else but this:
Contrary to our perceptions of things, there’s no such thing in the quantum world as “antimatter”, i.e. “matter with negative energy at rest”. Instead, there is quantum matter with mass at rest m(0) in a quantum state mu.
And, according to the sector that quantum state is found in, we interpret the mass components as being “signed”. However, there’s no “sign” in the quantum, there’s a position on the circle.
And this is mathematically grounded: if you can attribute a definite sign to a single quantity, how can you do so for a pair of such quantities? For a single number, you have two possible combinations: +x or -x; for a pair (x,y), you have four: (+x,+y), (+x,-y), (-x,+y) and (-x,-y). Only the first and fourth ones can be attributed a definite sign, because that sign is common to both components. But what about the two others? You can’t…
On the contrary, depending on the position (the “angular aperture”) you occupy on the circle, you can immediately generate all four combinations… :)
So, unless selection rules come along to forbid the (+,-) and (-,+) combinations, we can’t exclude them. Now, I have serious doubts about the existence of such rules, because they would “spoil” the very definition of a quantum mass. And why would they apply to mass and not to space, to time and to anything else, then?... :|
I’d suggest a better explanation. PAM Dirac himself wasn’t satisfied with his own introduction of “particles with negative energies”; he preferred to see them as “anti-particles with positive energies”, drawing an analogy with solid-state physics, where “holes” in the filled “energy sea” behave as particles with positive energies. At that time (1920s), it was still assumed that free particles had to have positive energies or, at least, zero. States with negative energies were attributed to bound systems. And, as the wave-corpuscle duality was proposed precisely because one couldn’t separate the corpuscle from the wave anymore, then, quite logically, people assumed that, if one component had a sign, the other one should carry the same. But keep in mind this was in a non-oscillating space and a non-oscillating time. Making everything oscillate changes the entire picture… We don’t need to struggle with “particles” and “anti-particles” anymore; we now understand better why projections are not reality at all, but severe reductions of it… We work in a radically transformed frame… We can see there’s no objection to having a “corpuscle” with positive energy and a “wave” with negative energy: the two do not interact with each other, there’s no “two”, there’s one, and that one is just allowed to carry two signs instead of a single one…
Should this shock the community? I don’t think so. After all, gluons carry two colors… :|
All experiments throughout the past century were designed from calculations in a 4D space-time. Surely, an 8D one should lead to radically different results…
So, maybe we didn’t discover quantum particles in mass sectors II and IV because we didn’t search for them… because our experiments were based on the assumption that quantum particles should only belong to sector I or III… because our theoretical models were all founded on mirror symmetry… and from the moment you require a quantity like energy to remain a real number and not a pair of real numbers… well, you necessarily limit your possibilities…
Next time, I’ll talk about areas and volumes.
B134: A HUGE CONSTRUCTION PB WITH C...
Le 28/10/2017
After being absent from the blog for a VERY long time, though not without actively researching in the meantime, I’m back with a HUGE problem on complex numbers and that’s why I’d like to go back to the old “bidouille” B32 [inside which you’ll be kind enough to replace, in eq (20), T’-+- with -T-+-]. Here’s indeed a reasoning that leads to a stunning contradiction within the very structure of the algebraic field C.
If we combine the cyclic properties of the imaginary unit i:
(1) i^0 = 1 , i^1 = i , i² = -1 , i³ = -i , i^(4+k) = i^4 i^k = i^k
for all non-negative integers k, with the relation coming from the Euler formula,
(2) i = e^(i pi/2)
We’re led to:
(3) pi = 0…
which is obviously absurd. Let’s prove it straight away.
Eqs. (1) show a base-4 cycle on the integer powers of i. (1a) is not only a convention; it’s also confirmed by the behavior of power functions near the value zero of the argument. (1b) is the very definition of i as the “imaginary unit” within the field C. (1c) is Cardan’s original definition. Finally, (1d) is the combination of (1b) and (1c), which also defines the complex conjugate of i.
Eq. (2), again, really looks like a wonderful formula, linking the two most famous transcendental numbers, e and pi, to i. So, let us now use that last relation to compute i^i:
(4) i^i = [e^(i pi/2)]^i = e^(i² pi/2) = e^(-pi/2)
The result is real-valued and, apparently, irrational. The fact that it’s real-valued is already quite surprising, from a conceptual viewpoint. But, anyway. Let’s take the square of (4):
(5) (i^i)² = i^(2i) = (i²)^i = (-1)^i = e^(-pi)
The problem arises with the square of that square:
(6) (i^i)^4 = i^(4i) = (i^4)^i = (+1)^i = e^(-2pi)
Let’s point out that we only used here the algebraic properties of the elevation to a power. Now, (+1) elevated to any power, real or complex-valued, should, by definition, give +1. So, we should have:
(7) (+1)^i = +1
immediately inducing (3) after taking the Napierian logarithm of (6).
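A quick numerical check of the values appearing in (4)-(7), using Python’s complex power (which takes the principal branch of the complex logarithm):

```python
import math

i = 1j
print(i**i)                              # eq. (4): ~0.20788 = e**(-pi/2)
print((i**i)**2, (-1)**i)                # eq. (5): both ~0.04322 = e**(-pi)
print((i**4)**i)                         # (+1)**i: 1 (up to rounding)
print(i**(4*i), math.exp(-2*math.pi))    # eq. (6): ~0.00187 = e**(-2*pi)
```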
Search for the error… I did, and didn’t find any… This totally absurd result is not even due to the cyclic property (1e), or only partially. The real culprit is actually the rule of signs in the multiplication table for real numbers:
(8) (-1)(+1) = i²i^0 = i² = (-1) , (-1)² = i²i² = i^4 = i^0 = +1 , (+1)² = i^0 i^0 = +1
which perfectly works when extended to the imaginary unit.
In any case, the cyclic properties (1), which are formally equivalent to a modulo 4, show more than clearly that i should now be taken as a more fundamental unit than +1 and -1, since both can be generated as integer powers of i. Furthermore, cyclicity offers countably infinitely many ways of obtaining -1 and +1. In R, we only had a modulo-2 cycle:
(9) (-1)^0 = +1 , (-1)^1 = -1
And, because C was built as an algebraic field, it’s both a unital ring and an integral domain, meaning it has a unit element and all its non-zero elements are invertible. Which one should it be?... +1, -1, i or 1 + i? The inverse of i is defined (and obtained) as:
(10) i^(-1) = i* = -i = i³
because of (1e) for k = 0. It’s also confirmed by (2).
Get me right: if there had been any contradiction in the algebraic laws upon which C was built, it would have immediately been detected (and corrected). What leads to the absurd result (3) is actually a consequence of constructing that algebraic field on the choice of a unit number: C was actually built by mixing real numbers with new ones, the squares of which were allowed to be negative, so that C appeared as an algebraic extension of R. But we still need to know which element is to play the role of a unit… :| If it’s i, since it looks more fundamental than +1 and -1, then we should find i² = i and i^(-1) = i, by the very definition of a unit element…
SO… folks… should we REALLY forget about “complex” numbers?... I’m just wondering about this, because we have CONCRETE (and no longer “imaginary”…) CONSTRUCTION PROBLEMS…
And I discovered that one only because I was after a physical significance of i. Before quantum theory, i was used as a mere convenient math tool to ease calculations but, ultimately, classical waves appeared as real-valued trigonometric functions: cos, sin, tan,… Since quantum theory and the de Broglie “wave-corpuscle duality”, it little by little became obvious that we could no longer satisfy ourselves with the “real part” of a math wave, but that we really had to take both the real and the “imaginary” parts into account, and on an equal footing. This, because quantum waves behave like a refraction index: they have a “refraction” component and a “reflection” one, and you just can’t neglect the second one. This was going in the direction of i acquiring a physical content, as a “fundamental quantum unit”, namely [see (2) once again], as a quantum wave with unit amplitude and constant phase (or “quantum state”) pi/2. There was no classical equivalent to that wave, since its real component is zero…
Unfortunately, there are obvious contradictions within the construction of C that do not allow me to go further in that direction.
Maybe I’ll need to go back to B32. Maybe I’ll instead have to change from R to R². After all, this is the same as “doubling R” and it’s even much cleaner than identifying C with R². And use M2(R), where units are correctly defined, as the operator ring acting upon 2-component vectors of R².
However, the distinction between “classical” and “quantum” will need to be reviewed, because it will then be far less obvious than when introducing an “imaginary unit”.
B133: BACK TO THE "POLARITIES" PROBLEM
Le 06/01/2017
I’ve studied many times before what I called the problem of “polarities”, in a different sense than the one usually referred to in particle physics (which relates to the intrinsic rotational momentum of particles – or “spin”). What I mean by “polarities” here refers to the sign of masses or, equivalently, of energies.
Mass is not to be confused with substance: it’s only a property of substance (as are the electric charge or more complicated charges). Besides, we say that a physical body “carries” a mass.
However, the type of substance will set the sign of its mass (unlike all other charges) and, reciprocally, knowing the sign of a mass will determine the kind of substance we’re dealing with. It’s usually assumed (one more convention) that the sign of “substance” (matter or radiation) is positive or zero, while the sign of “anti-substance” (antimatter or anti-radiation) is negative or zero.
Galilean physics (space relativity, universal time) assumes that all masses are strictly positive and, indeed, human-scale or cosmological-scale antimatter has never been observed so far. This does not mean there isn’t any in our observable universe, but it is now assumed to be very unlikely for, if there were, there would unavoidably be interactions between that antimatter and nearby matter, so that huge radiative jets would be detected with today’s equipment.
This “no see” fact is, after all, rather logical. If we’re fair enough, at best only fundamental particles can be considered as keeping a constant mass (outside interactions, of course, in which case there are transformations). All other bodies have a more or less variable mass. Mass varies in time simply because there are no closed systems (apart, maybe, from the whole universe itself), so that physical systems exchange with their surrounding environment and this results, in particular, in substance transfer: you feed, you gain mass (in our approximately constant earth gravity field, this can be seen as equivalent to saying we gain weight); you starve, you lose mass (or weight). According to the direction the substance current points towards, substance is brought to a system (from the outside environment into the system) or leaves that system (from the system into the outside environment).
It’s common sense that, given a system with initial mass m(0) > 0, if this system keeps on losing substance in time, there will be an instant, say tf (“f” for “final”), when there’s no more substance inside the system’s volume: m(tf) = 0. It’s then perfectly logical (and fully observable too) to consider there’s no system anymore... So, it would be difficult to take any more substance out of a substance-free volume... That’s the main reason and justification why mass, in Galilean physics, is set to always be strictly positive, whatever its evolution in time: “you can’t lose more than what you have”.
Space-time relativity (Galileo + relative time) does not change this vision of things, even though the famous energy relation E² = p²c² + E0², with E0 = m0c² the “energy at rest” of a physical system (m0 its mass at rest and c the velocity of light), allows both signs:
(1) E² = p²c² + E0² => E± = ±(p²c² + E0²)^(1/2)
(and diametrically opposite too) so that, if you set the momentum p to zero (in which case, your system is fixed in space), then (1) gives you:
(2) E± = ±(E0²)^(1/2) = ±|E0| = ±|m0|c²
Absolute values of the energy and mass at rest, up to the sign. These are the mathematically allowed solutions to the quadratic (“power 2”) relation (1). Classical spacetime relativity keeps only the (+) solution, still sticking to Galileo’s frame.
Quantum physics allows negative masses, because their justification is now based on a different assumption. The quantization process takes the energy and momentum of a classical system and puts them into the phase of a “wavefunction”, a process known in mathematics as “exponential lifting” (I shall not elaborate here; it’s in all refs online – and extensively on this blog as well!). As quantum physicists use signals and signal couplings (to describe particle interactions), there’s no longer any “physical obstruction from the law of common sense” to changing the sign of the phase: both signs are perfectly observed, and in the same way. As a result, if you keep the same orientation for your space and time, then a change of sign of the phase will change the sign of your momentum (space-related) and energy (time-related).
Let me point out here that we’re actually talking, from the beginning, about free systems.
The problem with the sign of mass (or energy) concerns free (i.e. non-interacting) systems.
In bound systems of bodies, a negative energy of the system is accepted (and observed!) as early as Galileo’s space relativity. It’s even what characterizes bound systems: to set the components free (to “dismantle the system”), you need to bring positive energy to that system from the outside.
The dilemma was about free systems. Classical physics did not observe them. The best spacetime relativity brought was to accept fundamental waves, like electromagnetic or gravitational ones, as massless substances. “Immaterial”. It allowed the possibility that m0 = 0, under the (less than “mathematically correct”) condition that such substances “move” through space at the speed of light. It holds because, so far, no massive bodies have been found to move at c (as Heisenberg said of the Planck constant h and quantum “wave mechanics”, “it holds because that’s what we observe”... – at least, it had the merit of being honest, recognizing the fact that nobody could explain why it was so – which is still the case).
However, all this (Galileo, Einstein, quantum physics) was developed (because observed) in a 4-dimensional real geometrical frame.
It no longer holds in a 4-dimensional complex geometrical frame... even if we assume the “hermitian” hypothesis, which is the (very crude) mathematical translation of “mirror symmetry” (the name comes from the mathematician Hermite). Usually, theoretical physicists continue assuming real-valued masses. But it’s no longer a prerequisite, even under supersymmetry, where the masses of partners are equal.
Let’s review once again the connections between dynamics and geometry. It goes back to the late 18th century – early 19th, when people began to use the geometrical tool to describe dynamics. As geometry was quickly progressing, they found it useful to “code” dynamics into geometrical terms. So, they intensively began developing a multitude of spaces with geometries suitable for the type of dynamics they were studying. The result was called “analytical mechanics” and received most of its contributions from people like Lagrange, Hamilton and Jacobi, all along the 19th century. They based themselves on works from geometers like Gauss, Riemann and Grassmann (whose geometry was only widely exploited much later, in the second half of the 20th century). Jordan, not-the-NBA-but-mathematician-trained superstar turned physicist (shame!... what do I say? HERESY!), brought matrix theory to the quantum formalism in the mid-1920s, following Heisenberg’s 1925 description.
Little by little, they came to the following correspondences:
Riemann’s geometrical axiom <-> commutative geometry [xy = yx, see B131, formula (1) or (2)] <-> spaces with a symmetric metric <-> radiations (Bose-Einstein stat, integer spins);
Grassmann’s geometrical axiom <-> anti-commutative geometry [xy = -yx, B131, formulas (3) or (4)] or “projective” geometry (because related to projections onto planes rather than axes) <-> spaces with a skew-symmetric metric <-> matter (Fermi-Dirac stat, half-integer spins);
Kähler’s geometrical axiom, a synthesis of the preceding two, with “enough good regularity conditions” (“smooth” spacetimes or “continua”) <-> complex geometry <-> spaces with a hermitian metric <-> supersymmetry between matter and radiation.
It’s worth noticing that Grassmann’s geometry was required in connection with the so-called “phase spaces” of analytical mechanics. Typically, these are spaces where the complete “classical” motion of a given body or system of bodies is well described in geometrical terms. To describe such motions, we need to determine, following the “classical” laws of motion, both the position of a system in space (or spacetime) and its “quantity of motion” (momentum, the product of mass and velocity, or energy-momentum in spacetime dynamics). This doubles the number of required dimensions of the “configuration space”, as there are as many momentum components as there are position coordinates, but it does not endow that “enlarged” space with a complex structure for as much. What we get instead is a “symplectic” structure (another vulgar math term) that describes the dynamics in a “space of planes” (planes come to replace axes as “coordinate systems”) and the geometry of such spaces no longer obeys Riemann’s axiom, but rather Grassmann’s. The connection with quantum mechanics was made later, when people realized that the spin of a particle can be used as the classical “rotational momentum”, the vector product [see B132, § beginning with “but let’s now reverse the problem”, before formula (6)] of the position and the momentum of the system, and this despite the fact that the spin is a purely quantum quantity with no classical equivalent. Hence the use of “theta variables” or “coordinates” for spin 1/2 (the most fundamental half-integer spin), with anti-commuting properties, and the connection I made in B131, formula (5), between these anti-commuting variables (measured in m^(1/2)) and the “more familiar” commuting variables xi+4. What enables this are the Pauli “transition” matrices.
I had to explain all this before coming to the subject, or the non-familiar reader would have understood nothing (and still, I hope he grasped something of the brief introduction I made!).
What comes directly out of complex geometry is that objects remain single (it’s very important to keep this in mind) but, when projected onto real sub-spaces, they become double: we find what we call in maths a “real part” which is here the state-(1) component of a super-quantity (in z = x+iy, it’s then x) and an “imaginary part” which is the state-(2) component of that super-quantity (that is, y).
And this is pretty understandable, if we think of it carefully. If the frame itself, that is, space and time, is to mathematically complexify, it physically means it oscillates: complex geometries are the seat of oscillating spaces, times and spacetimes. So, if the frame is the very first one to oscillate, then we can legitimately expect that any object, any event and any process within that frame will oscillate as well. We can have amplifications or dampings, but we will always have, in addition, an oscillatory behaviour that we will never be able to suppress, whatever we do or try: objects become signals and signals become objects.
The supersymmetric frame is essentially fundamental. It is so fundamental, “in essence”, that it actually gives birth to both matter and radiation. It’s a very “primitive” frame, in the sense of “original”. From this 4-dimensional “oscillating” frame emerge, as 4-dimensional “projections” into real sub-spaces, both matter and radiation fields. The supersymmetric association between them then says that what is observed as behaving like “matter” (resp. “radiation”) in one of the two available “sub-worlds” will behave like “radiation” (resp. “matter”) in the other sub-world.
However, there’s more. Much more. Supersymmetry does not “only” unify matter and radiation, giving an unidentifiable, very primitive substance that vaguely resembles “radiating matter” or “material radiation”, to give a rough idea of it (but is actually none of these); it also unifies substance and anti-substance. Yes, dear. And this is thanks to that “new math operation” called complex conjugation, which becomes an inherent property of supersymmetric spacetimes (whereas it desperately remains an “external” operation in real sub-spacetimes). We saw this in formula (7), B132. What complex conjugation does is reverse the sign of the state-(2) component, while keeping that of the state-(1) component unchanged:
(3) theta -> -theta => x -> x , y -> -y
It’s also possible to reverse the sign of the state-(1) component while keeping that of the state-(2) unchanged, but it’s a bit more complicated and requires an additional operation. To go from z = x + iy to -z* = -x + iy, we first need to perform complex conjugation, then reverse the sign of the result (z*). As i² = -1, we can also write our result in the form -z* = i²z*. We then use the remarkable (in every sense) properties of the two best-known transcendental numbers, e = 2.718281828459... and pi = 3.1415926535... which, combined with the imaginary unit i, give this:
(4) e^(i pi) = -1 , e^(i pi/2) = i
(the third fundamental constant, the Euler–Mascheroni constant, has still not been formally proven to be irrational). These formulas are truly remarkable, probably amongst the most remarkable ones in mathematics, as they “close onto each other”. Applying (4a) to our result gives us -z* = i²z* = e^(i pi)z* = e^(i pi) r e^(-i theta) (in polar representation) = r e^[i(pi - theta)] (additive property of the argument of the exponential function), that is, a phase shift of pi - theta from the original angle theta. In conclusion, to reverse the sign of x without touching that of y, we need to reverse the sign of the phase angle theta of our supersymmetric quantity z, then shift it by +pi, i.e. 180°.
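A one-line check of that phase-shift rule (Python; r and theta are arbitrary values of my choosing):

```python
import cmath, math

r, theta = 2.0, 0.7
z = cmath.rect(r, theta)          # z = r*exp(i*theta)
w = -z.conjugate()                # -z*, expected to equal r*exp(i*(pi - theta))
print(abs(w), cmath.phase(w))     # 2.0 and ~2.4416
print(math.pi - theta)            # same angle: pi - 0.7 ~ 2.4416
```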
What does this all mean?
It means that, when it comes to considering a quantity like mass, we now have to deal with an oscillating mass:
(5) M = m1 + i m2 = m exp(i theta) = m cos(theta) + i m sin(theta)
measuring the amount of “substance” contained inside an oscillating “super-object” (with an oscillating volume, by the way), and only the magnitude m of that mass, which is a real-valued quantity, is assumed to always be positive or zero. This is because the sign of the magnitude doesn’t matter anymore: the sign information is now carried by the value of the phase angle theta, so that m can always be set non-negative once and for all (a short numerical sketch follows the lists below):
- if 0 < theta < pi/2 (sector I on the unit-radius circle), both m1 and m2 are > 0; this is interpreted in sub-worlds as “matter and radiation”;
- if pi/2 < theta < pi (sector II), m1 < 0 while m2 > 0: “antimatter and radiation”;
- if pi < theta < 3pi/2 (sector III – diametrically opposed to sector I), both m1 and m2 < 0: “antimatter and anti-radiation”;
- if 3pi/2 < theta < 2pi (sector IV – diametrically opposed to sector II), m1 > 0 while m2 < 0: “matter and anti-radiation”.
Special values (modulo 2π) are:
- θ = 0: m1 = m > 0, m2 = 0 (the state-(2) component is massless);
- θ = π/2: m1 = 0, m2 = m > 0;
- θ = π: m1 = -m < 0, m2 = 0; and
- θ = 3π/2: m1 = 0, m2 = -m < 0.
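Purely as an illustration (and not part of the derivation itself), here is the small Python sketch announced above, classifying a complex mass M = m exp(iθ) into the four sectors and the four boundary values; the function name classify and the tolerance are of course arbitrary choices:

import cmath, math

def classify(M, tol=1e-12):
    # Sector / boundary reading of a complex mass M, following the list above.
    m, theta = abs(M), cmath.phase(M) % (2 * math.pi)   # m >= 0, 0 <= theta < 2*pi
    if m < tol:
        return "M = 0: no substance at all"
    boundaries = {0.0: "boundary IV/I: m2 = 0 (state-(2) massless)",
                  math.pi / 2: "boundary I/II: m1 = 0 (state-(1) massless)",
                  math.pi: "boundary II/III: m2 = 0 (state-(2) massless)",
                  3 * math.pi / 2: "boundary III/IV: m1 = 0 (state-(1) massless)"}
    for angle, label in boundaries.items():
        if abs(theta - angle) < tol:
            return label
    if theta < math.pi / 2:
        return "sector I: matter and radiation (m1 > 0, m2 > 0)"
    if theta < math.pi:
        return "sector II: antimatter and radiation (m1 < 0, m2 > 0)"
    if theta < 3 * math.pi / 2:
        return "sector III: antimatter and anti-radiation (m1 < 0, m2 < 0)"
    return "sector IV: matter and anti-radiation (m1 > 0, m2 < 0)"

print(classify(cmath.rect(1.0, 2.0)))   # m = 1, theta = 2 rad -> sector II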
We can already combine “everything with everything”: substance, anti-substance, massless substance. Which, again, is nothing crafty ;) but perfectly logical: if “super-substance” is to give birth to both matter and radiation, it has to give birth to antimatter and anti-radiation just the same, “in an equal way”, since the sign attributed to a substance is, after all, a mere question of human convention: had we chosen the (-) sign for all the substances we observe “at our scale and beyond it”, we would have counted masses with a common negative sign...
Notice that sectors are well defined and delimited:
- adjacent sectors I and II are separated by θ = π/2 (moving counter-clockwise, i.e. in the trigonometric sense), where state-(1) is filled with massless bodies (such as photons, for instance, the quanta of electromagnetic light);
- adjacent sectors II and III are separated by θ = π, where state-(2) is filled with massless bodies;
- adjacent sectors III and IV are separated by θ = 3π/2, where again state-(1) is filled with massless bodies;
- finally, adjacent sectors IV and I are separated by θ = 2π (or 0, since we’re back to our starting point – we have completed a full counter-clockwise turn), where again state-(2) is filled with massless bodies.
We can be even more precise. As m1 changes sign when “jumping” from sector I to sector II, m1 tends towards 0+ on the I-side (massless substance), while it turns to 0- on the II-side (massless anti-substance): the simple fact of changing sector changes the sign of the mass component concerned and, therefore, the type of substance we’re dealing with.
This is very unlikely to be possible for “ordinary” matter or radiation, for the reason we saw at the beginning of this bidouille: in a real geometrical frame, you can’t withdraw more substance than you have.
What I want to point out here is that, in “super-substance”, there’s no reason why this should still be forbidden. No physical law now opposes it, and this faculty is in full agreement with the fact that the notions of “substance” (m > 0) and “anti-substance” (m < 0) lose all meaning: since the complex mass M is no longer a real-valued quantity, comparisons like M > 0 or M < 0 make no sense. The only relation that makes sense is M = 0, which is an equality. In this case, m1 = m2 = 0, as the magnitude m of M is zero. The magnitude m is the only real-valued mass that can be compared with the “universal reference” zero, indicating the absence of substance, and we saw that, by convention, we can always set m ≥ 0...
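A trivial illustration of that point in Python, where the built-in complex type deliberately defines no ordering, only equality (the values are arbitrary):

M = complex(2.0, -1.0)            # some non-zero complex mass, illustrative values
try:
    M > 0                         # ordering a complex quantity is meaningless...
except TypeError as err:
    print("no order on complex numbers:", err)
print(complex(0.0, 0.0) == 0)     # ...but the equality M = 0 still makes sense: True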
This was all about constant masses. But we said above that, because of substance transfers from and to the surrounding environment, masses, in practice, are not expected to remain constant all the time. Can we extend the time-dependence of mass to the complex frame? Absolutely: taking a complex time T = t1 + it2 = t exp(iα) = t cosα + i t sinα, we even have four ways to decompose our mass function M(T) into components:
(6) M(T) = m1(t1,t2) + im2(t1,t2) = m1(t,α) + im2(t,α) = m(t1,t2)exp[iθ(t1,t2)] = m(t,α)exp[iθ(t,α)]
as we have two possible representations for time (planar or polar) and two others for mass.
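As a sanity check of (6), here is a small Python sketch; the mass law M(T) = M0 exp(iΩT) used in it is completely hypothetical, chosen only so there is something concrete to evaluate, and the point is merely that the planar and polar parameterizations of T (and of M) describe the same complex mass:

import cmath

M0, OMEGA = 5.0, 0.3                  # hypothetical constants, for illustration only

def M_of_T(T):
    # A made-up complex mass law M(T); any analytic choice would do here.
    return M0 * cmath.exp(1j * OMEGA * T)

t1, t2 = 1.2, 0.7                     # planar representation of complex time
T_planar = complex(t1, t2)
t, alpha = abs(T_planar), cmath.phase(T_planar)
T_polar = t * cmath.exp(1j * alpha)   # polar representation of the same time

M = M_of_T(T_planar)
m1, m2 = M.real, M.imag               # planar components of the mass
m, theta = abs(M), cmath.phase(M)     # polar components of the mass

print(abs(M_of_T(T_polar) - M) < 1e-12)                            # same T, same M: True
print(abs(complex(m1, m2) - m * cmath.exp(1j * theta)) < 1e-12)    # same M either way: True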
Let’s look at the last one. The magnitude m(t,α) is now variable. It can vary with t ≥ 0 as well as with the time-orientation angle α. The condition:
(7) m(t,α) ≥ 0 for all t ≥ 0 and all 0 ≤ α < 2π
is not restrictive at all, since the sign of the mass components m1(t,α) and m2(t,α) is still assured by the mass angle θ(t,α) [for physicists, this “mass angle” would be found in the associated isospace-time, time-like component – B131, unitary group SU(3,1)]. This mass angle is now variable as well (it has no reason to remain constant). Variable! Meaning it can change... with time... (and with the time angle, but this is more abstract to us).
Does this mean that each mass component can now change sign?
That’s an interesting question.
Both m1(t,α) and m2(t,α) are oscillating, since:
(8) m1(t,α) = m(t,α)cosθ(t,α) , m2(t,α) = m(t,α)sinθ(t,α)
Let’s fix t = 0 to be the instant we start our observation. At this instant, θ(0,α) = θ0(α) still depends on α. So it’s still variable. We need a stronger condition on α, say α = 0, set to be the time angle at which we begin observing. And let θ0(0) = 0 for simplicity (it all starts at zero-degree angles). Then our initial masses are m1(0,0) = m(0,0) > 0 and m2(0,0) = 0 [a magnitude m(0,0) = 0 would have no interest at all]. Suppose θ(t,α) increases. If its evolution is not bounded between 0 and π/2, then, when θ(t,α) becomes greater than π/2, m1(t,α) will change sign. Okay, we will then go from sector I to sector II, so it may happen that the “conversion” is no longer observable to a human observer.
The same will obviously hold for m2(t,α) when going into sectors III and IV. Anyway, the simple fact of replacing M(T) with its complex conjugate [M(T)]* = M*(T*), which is a function of T*, as an expansion in powers of T* immediately shows, suffices to reverse the sign of m2(t,α).
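To make these sign changes concrete, a minimal sketch, assuming a purely illustrative evolution θ(t) = ωt at fixed time angle α = 0 and constant magnitude m (both values arbitrary):

import math

m, omega = 1.0, 0.5                       # arbitrary magnitude and growth rate of theta

def components(t):
    theta = omega * t                     # assumed unbounded evolution of the mass angle
    return m * math.cos(theta), m * math.sin(theta)

for t in (0.0, 2.0, 4.0):                 # theta = 0, 1 and 2 rad
    m1, m2 = components(t)
    print(f"t = {t}: m1 = {m1:+.3f}, m2 = {m2:+.3f}")
# once theta exceeds pi/2 (here between t = 2 and t = 4), m1 has changed sign

# complex conjugation of M reverses the sign of m2 without touching m1:
m1, m2 = components(4.0)
M_conj = complex(m1, m2).conjugate()
print(M_conj.real == m1, M_conj.imag == -m2)   # True True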
Can we really withdraw more substance than available?...
Yes, IF we count negative masses as positive anti-masses. We reduce the (positive) mass of a substantial body until it completely disappears (m = 0). We can no longer “pump” substance “out”, right? But what we can do is replace the vanished substance with anti-substance. And quantum physics actually says (and shows!) that this process is absolutely equivalent to keeping on pumping substance out of the vacuum! Equivalent, yet not feasible for all that. Hence the introduction of the concept of antimatter by Dirac, to “fill” the (relativistic) holes left by matter in the energy bands. Recall that this was allowed because the wavefunctions were complex-valued quantities, with a magnitude and a phase.
Well, this is exactly what we have here with mass, charges, and everything, including space and time themselves!
Of course, if we “stick” to “our” 4-dimensional “sub-world”, then not only will all phase angles be constant everywhere, they will be set to zero, and we recover m1(t,0) = m(t,0) > 0, m2(t,0) = 0 for all t.
Okay. Let’s turn to biophysics and see the implications of all this.
Supersymmetry not only says, it asserts, again, based on well-observed and reproduced evidence, that the true geometry of Nature is not real but complex. It means that “we have an observation problem”. Or, in other terms, “we’re not completely blind, but nearly”... we see rigid bodies where we should see oscillating bodies. We see substance on one side, anti-substance on the other. We don’t see one transforming into the other. We are subject to observational limits. Even in our particle accelerators, we are limited by the levels of energy our apparatus can deliver: if they’re not powerful enough, we have to wait for the next generation (if it’s not too far off!) to expect to observe something.
You take a biological body: this is “real” substance, it’s the tip of the iceberg.
You take biological fields: these are “real” radiation fields, again, tip of the iceberg.
We take absolutely no phase into account. We don’t see that all this is actually embedded in a wavy world. What we observe and study are limited properties.
I don’t say it, supersymmetry says it!
I never asserted anything, by the way, I always looked at what physics said...
I have an animal body in front of me. I’m a state-(1) observer and so is that body. I assume he’s material. He first has a supersymmetric partner in state-(2) which is radiative, and thus out of my reach, since my observations are “confined” to state-(1). Through an operation that is out of my reach too, they both have an anti-counterpart. And all this actually makes a single “super-body”... all the rest are mere transformations. What’s the true substance? I just can’t define it; it all looks and sounds contradictory to me... To me, if I look at the physical laws I can observe in my real, geometrical, state-(1) “restricted world”, it should all disappear into light. It should all neutralize. Now, that’s not the case, or I wouldn’t be there to observe and the animal in front of me wouldn’t be there either. So, why can it be so? I don’t know. The only thing I can say is that “it’s all-in-one”.
I am in front of an oscillating substance in an oscillating universe, extremely primal, where “all is in one”: matter, radiation, antimatter, anti-radiation. It can transform into any of these 4 forms and yet it’s none of them at the same time. That’s the best description I can give from where I stand. To me, it’s absurd physics...
This animal in front of me has a consciousness, which is an electromagnetic process. This consciousness has a supersymmetric partner in state-(2) which is a material plasma of photinos: a material substance! I have light propagating along neuron cells in state-(1); I have matter particles (photinos) propagating along “virtual neuron cells” in state-(2). My photons (the ones I observe) are massless; my photinos have non-zero mass. If supersymmetry is respected, these photinos should be massless as well, but in another “state of life”.
Assume all these masses remain constant in time. There still remains the time orientation angle... that I did not take into account in my observations... According to (8) above, I can still have a dependence:
(9) m1(α) = m(α)cosθ(α) , m2(α) = m(α)sinθ(α)
I would have to fix my time angle α to get a fixed mass angle θ. That would require very restrictive physical laws... so restrictive, actually, that I would have to justify them... Understand: there’s nothing natural in these restrictions. It’s so unnatural that, should I set α = 0, I would fix my time arrow to t1 = t > 0, t2 = 0 and, should I set α = π, I would fix my time arrow to t1 = -t < 0, t2 = 0: in the first case, I would be unable to define a “past” in state-(1) and, in the second case, a “future” (and since I would no longer have complex conjugation either, I wouldn’t be able to use it to reverse my time arrow...). THAT is weird... :)
Relations (9) should be clear enough by now, and (8) even more so: beginning with m1 and m2 both positive, I can end up in many ways. Nine, to be precise (3²) – see also the little sketch after the list:
(m1 > 0 , m2 > 0) , (m1 > 0 , m2 = 0) , (m1 > 0 , m2 < 0)
(m1 = 0 , m2 > 0) , (m1 = 0 , m2 = 0) , (m1 = 0 , m2 < 0)
(m1 < 0 , m2 > 0) , (m1 < 0 , m2 = 0) , (m1 < 0 , m2 < 0)
Physical interpretations should now be easy...
(m1 = 0 , m2 = 0) in particular = NO SUBSTANCE. NOTHING REMAINING. Mere “quantum super-light”. Or super-substantial vacuum.
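For completeness, the nine configurations can be enumerated mechanically; the labels attached below are only a rough reading, in Python, following the sector interpretations given earlier (state-(1) side / state-(2) side):

sign_symbol = {+1: "> 0", 0: "= 0", -1: "< 0"}
state1 = {+1: "matter", 0: "massless state-(1)", -1: "antimatter"}
state2 = {+1: "radiation", 0: "massless state-(2)", -1: "anti-radiation"}

for s1 in (+1, 0, -1):                       # sign of m1
    for s2 in (+1, 0, -1):                   # sign of m2
        if s1 == 0 and s2 == 0:
            reading = "no substance at all (super-substantial vacuum)"
        else:
            reading = state1[s1] + " / " + state2[s2]
        print(f"(m1 {sign_symbol[s1]} , m2 {sign_symbol[s2]}): {reading}")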
The Bible says that “God made us in His image”. For a long time, it made me blink... However, if our true physical reality is that of supersymmetric entities, then we’re all of the same original nature and it fits with the Bible’s assertion. Recall that, in the Old Testament, nothing is referred to as a “Universal Evil”: God grants and punishes. He’s at the same time “positive for those who build and create” and “negative for those who destroy”. This, again, fits much better with supersymmetric physics. What we call the “Devil” didn’t even exist in St John’s Apocalypse: he only mentioned the “beast”, which actually referred to the emperor Nero. That “dichotomy”, that “split” between the “Essentially Positive” and the “Essentially Negative” was made much later, in the Middle Ages: then came this idea of a “Universal Evil” aimed at destroying and punishing everything and everybody, that notion of a “Creator” on one side and a “Destroyer” or “Annihilator” on the other, who “fell from Heaven”.
This is not very consistent with physics.
What’s consistent with physics is that ability to turn evil or turn good. Change polarities.
What’s also consistent with physics is the “Judgement of Souls”, assuming we consider a whole supersymmetric body as a “soul”. It does not matter if the “biological projection” into state-(1) or state-(2) ceases functioning; we saw in B132 that this actually has no impact whatsoever on the supersymmetric body, because it had assimilated that from the beginning (IF such notions of a “beginning” and an “end” can still be given a meaning). We now see that, in addition, the “biological mass” measuring the amount of biological substance (consciousness included) in an animal body can well fall down to zero (which takes a long time...): it only transforms the supersymmetric body. But it does not change it into something else, since all these transformations are also “coded” inside it! The only thing that could change it is m1 = m2 = 0, that is, M = 0 permanently.
Now, mix all these polarization possibilities with what the monotheistic religions say and draw your own conclusions. The way your conscience drives your acts, etc.