doclabidouille
B 136: ON A NEW MODEL FOR MIND
Le 19/11/2017
This bidouille, again, is for a broad public (or, at least, for the public who’s NOT AT LARGE… or not yet… :) ).
For what follows, I’m relying on what neurobiologists tell us. Let’s sum it up again.
Each neuron taken individually inside a highly sophisticated system such as the mammalian brain receives an average of 10,000 connections from other neurons, not necessarily close to it. Changeux claims this endows the soma of the cell with “a combinatorics of signals”. I disagree. Completely. Why? Because this sum of signals is then compared to a threshold, generating a “trigger effect”, and the output, at the end of the axon, is ultimately binary (“0”: silent, “1”: active). You find exactly the same kind of dynamics inside inert media, like silicon and the other semiconductors used in the computer industry; it’s known as the “transistor effect” and it leads to no arithmetic function at all in the device itself…
So, if you look for possible arithmetic functions inside the neuron, you may be disappointed…
Following this, Changeux points out, and this is very important, that in most of the neurons composing the central nervous system (CNS, the brain), the synaptic cleft between two neurons is so small that only one pack of neurotransmitter can go through and, the crucial point is here: not systematically, even when the emitting neuron is active.
So, again, better not rely on the neuron by itself to forward information… Things improve for much larger synaptic junctions such as, he explains, the one between an axon of the motor system and a muscle, where some 300 packs of neurotransmitter can be released at the same time, guaranteeing 100% transmission.
Conclusion: the more packs released, the higher the chance of carrying information from an emitting neuron to a receiving one or, equivalently, the closer to 1 the “transmission coefficient T” of the cleft (to use an analogy with optics).
Unfortunately, this conclusion leaves the question of cerebral neurons wide open…
Basically, the isolated neuron can work in three ways. It can be stimulated from the outside and, if that stimulation is higher than the trigger threshold, the neuron responds. It can self-stimulate, thanks to its calcium channels. Or it can remain silent. But, whatever its reaction, the release of neurotransmitter is not 100% guaranteed in most situations.
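Since the neuron is reduced here to a threshold device with a binary output, its three working regimes can be sketched in a few lines of Python; the threshold and input values below are purely illustrative, not physiological:

```python
# Toy threshold neuron: sums its inputs, compares them to a trigger
# threshold, outputs a binary "0" (silent) or "1" (active).

THRESHOLD = 1.0  # trigger threshold, arbitrary units

def neuron_output(external_input, self_stimulation=0.0):
    """Return 1 (active) if the summed stimulation crosses the threshold, else 0 (silent)."""
    return 1 if external_input + self_stimulation >= THRESHOLD else 0

print(neuron_output(1.3))       # external stimulation above threshold -> 1
print(neuron_output(0.4))       # sub-threshold stimulation, stays silent -> 0
print(neuron_output(0.0, 1.2))  # self-stimulation (calcium channels) -> 1
```

Note that, exactly as argued above, nothing in this mechanism computes anything arithmetic: it is a pure trigger, like a transistor.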
Clearly, the neuron by itself (and I insist on this) cannot be regarded as a “serious enough partner” for signal propagation, still less as an “arithmetic unit”.
Clearly, there must be another mechanism that improves signal transfer and “reinforces synapses”. Neurobiologists have known the complete dynamics of the neuron, from soma receptors down to the very ends of the axon, for 30-40 years, and they still stumble on how to link it with the production of “mental objects” (percepts, memory images and concepts). I pointed out many times that the way the cell works is by no means causal. So, two main hypotheses have been proposed to work around this “little inconvenience”. The first one is to model the functioning of the brain by giving inter-neuron transmission a “probability of occurrence” and arguing the machinery is Bayesian (from Bayes, who established rules on probabilities for connected events). However, as I said, this would be equivalent to allowing “pieces of signals”, whereas a signal is transmitted as a whole or isn’t. One more difference between mathematics and physics. The second hypothesis was Edelman’s “neural groups”, where populations of interconnected neurons, in various numbers, would collectively respond to an excitation of any single member of the group, through a global mechanism of “consistent resonance”. Changeux wasn’t convinced, as data also showed that there isn’t any static organization of any kind inside the CNS; instead, everything is dynamic and configurations change all the time. Besides, Edelman agrees that a given instruction can be forwarded through different networks, as long as it’s forwarded and leads to the same result(s). Changeux sees this ability, that “plasticity” (or adaptation faculty) of the brain, as the result of a “jungle” of connections rather than specified or dedicated units as in computers. Edelman, too, is strongly against comparing the animal brain to any Turing-von Neumann sequential machine; they both say it doesn’t match observational data at all.
The difference between them is that Edelman relies on the existence of “neural groups” to define his “noetic” machines and even build prototypes. The thing is: even the first two prototypes already reasoned closer to the animal brain than to a T-VN machine!
It’s therefore very hard to decide which way, which representation, is the best one and the closest to reality, as they all show their drawbacks but also their advantages…
I’m a very basic guy, probably one of the most basic you’ll find, so I always end up going back to the very basics.
And the basics are: we have two neurons and, between them, a certain type of neurotransmitter. One of the neurons is the emitter, the other one is the receiver.
Questions: which ones are the sources and which ones are the mediators?
Answers: sources are neuron cells, mediators are neurotransmitters.
Question: what could mind be made of, then? Neurons? No: neurons make the substrate.
Conclusion?
Mind would be made as a field of neurotransmitters.
Indeed, what makes the mental process? Is it the biological substrate, which produces the signal with no certainty and no causality at all? Or isn’t it rather the transmission of information from one unit of that substrate to another?
Everywhere in Nature, you have “supports” of information and transmission. Saying mind would be made of neurons would be equivalent to saying that electrons make the electro-magnetic interaction or that masses make the gravitational one, quarks the strong nuclear one, etc. It would be confusing the sources with the vehicles.
That mind is a biochemical process is now beyond all doubt. But, if we search for an understanding in the internal mechanics of the neuron, we just find nothing consistent enough to build mental objects.
The vehicle of information in computers is the electric current, made of moving electrons. Does the internal dynamics of transistors play any role in this? No: what’s relevant is what we feed in as inputs and what we get as outputs, period. When we create programs to run machines, we don’t care about what goes on inside transistors; we take input and output bits and combine them to make basic instructions first, then instructions, then programs. We use the vehicles of information, not the substrate. The substrate is there to produce information. Now, mind is information, and this kind of information is only chemical, molecular. As between any non-neural cells of a living organism: two living cells communicate by exchanging molecules.
Now, if we rely on the “neural jungle”, there’s potentially no way to build consistent patterns. In order to do so, we need structure, consistency and stability. We need an organization, even an ephemeral one that changes with time. Stability becomes a necessity for memorization, especially long-term.
Well, in all physical systems, such properties only arise from non-linearities and feedback.
So, maybe we’d rather look at the feedback of the field of neurotransmitters onto the output of neurons, because that’s the only place where transmission occurs.
What would be the basic requirement for mental processes to perform?
That information be suitably forwarded from one neuron to the other.
According to what we saw, this requires an optimality criterion, namely, that the transmission factor T between two given neurons reaches 1. That’s 100% chance of transmission. If we reach it, that “path” is “secured”. If we want to change path, we favor another transmission factor, somewhere else, and decrease the previous one.
This is nothing but a regulating process and, as Changeux defined it, consciousness is that process which regulates mind.
So, what we get here is mind, now realized as a molecular field of neurotransmitters of various types, and a regulating process answering an optimality criterion, which helps dynamically structure mind and which we call consciousness. Patterns change with time, except for memory images, which remain stable much longer. Such “long-living” patterns correspond to fixed points in dynamical systems: the state they were in “in the last round” remains unchanged “in the new one”. And this, for a certain duration, which can last a lifetime.
Let’s sum it up once more.
A neuron releases a pack of a given type of neurotransmitter with only the probability T. That pack is then received by another neuron (up to possible leaks): information is transferred. The neurotransmitter is then destroyed. During molecular transfer, a “quantum” of “mental information” is produced. This is local. Globally now, or “less locally”, there’s a set of such “quanta”, of various types, making mind at a given time t. That structure, in turn, acts upon the synapses to reinforce their biological reactivity and, therefore, locally increase the transmission coefficient T. On the next round, the same neuron will release its neurotransmitters with a higher T. Again, mind will retro-act on its synapses until T reaches 1. However, that process is spatial: it concerns a synapse located at some point “x” of the brain. There still remains the possibility of a change in time. Changing neurons, we change network configurations (while the neurons themselves, of course, don’t move). Patterns can change shape.
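The retro-action loop just summed up can be pictured with a toy simulation. The update rule and its gain below are my own illustrative choices, not a claim about the actual biochemistry; the only point is to show T being pushed toward 1 by the feedback:

```python
import random

# Each round, a pack of neurotransmitter crosses the synapse with
# probability T; every successful transmission retro-acts on the synapse
# and nudges T toward 1 (the "reinforcement" described in the text).

def reinforce(T, rounds, gain=0.2, seed=0):
    rng = random.Random(seed)
    for _ in range(rounds):
        if rng.random() < T:       # stochastic release, probability T
            T += gain * (1.0 - T)  # feedback: move T closer to 1
    return T

for n in (0, 10, 50):
    print(n, round(reinforce(0.3, n), 3))  # T creeps up toward 1
```

With the feedback switched on, T only moves upward, so the “path” ends up “secured”; decreasing the gain elsewhere would play the role of favoring another path.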
What gave me this idea is, first, the arguments I exposed above and, second, the analogy I checked with 20th-century so-called “semi-classical phenomenological models” of interacting particle physics. Typically, you have a source field and a field for the interaction. The source field is made of particles which produce “quanta” of interaction. What happens is that the source and the interaction it produces couple strongly. If you have a system of electric charges carried by electrons, for instance, and these electrons produce an electromagnetic field between them, this field, as long as it stays inside the system of charges (inside the electronic source field), then retro-acts onto these charges, modifying their dynamics. And so on, until an equilibrium is found. It can be mechanical or thermodynamical. When it cannot be found, we’re in a situation of chaos. And there too, there are very interesting patterns.
I think we’d rather explore this road instead of the one consisting in believing that the neuron cell, because of its axon, would serve as a “wire” to transmit information. Nothing consistent (I don’t even say “logical” or “rational”, I only say “consistent”) can come out of this. What may fool us is that the transistor, which is an inert object, has a deterministic functioning: according to its internal structure, the minerals used and the inputs, it will deliver “0” (blocked) or “1” (saturated). So, maybe the partisans of Bayesian logic (or any other fuzzy logic) would think the neuron “transmits the nervous signal with a probability T”. It doesn’t seem to do so. It rather seems to take inputs and deliver an output in a non-deterministic way, because it’s a living cell… :)
B 135: BACK TO THE SOURCES OF THE QUANTUM
Le 06/11/2017
One might object that the unexpected result in B134 could always be brought back to normal by using the modulo-2pi cyclic property. However, the result would still remain unclear, as it would be equivalent to multiplying by 1… :(
Here’s an article meant for the largest public possible. Specialists will find unavoidable repetitions in it, but non-technicians are anything but familiar with the highly sophisticated technical developments which led to the present 21st-century approach of quantum physics.
From the very first discoveries of atomic processes at the end of the 19th century to the most synthetic models of quantum theory proposed in the late 20th century, the fundamental idea that quantum processes are genuinely wavy, i.e. oscillating, drove all developments for more than 100 years. It culminated in the early 1970s with “supersymmetric” models. These models were aimed at unifying matter and radiation at the level of “elementary” (i.e. non-composite) particles, but the principles upon which they were built were not at all restricted to the sub-nuclear level of description. I then decided to extend them, not only to much larger bodies with a much more complicated structure, but to everything, starting with the “wave-corpuscle” duality. This is nothing else but Schrödinger’s “doctrine”, which says that absolutely all physical systems in Nature are to be endowed with a natural quantum structure. Supersymmetry only showed that the Schrödinger representation of the world was actually equivalent to doubling everything the classical approach described earlier. So, I’m inventing nothing, introducing nothing “revolutionary”: I’m just following the masters.
Basically, all these supersymmetric models (which, I’d like to insist on that point, didn’t come “out of theoreticians’ imagination” but, instead, were a direct consequence of an accumulation of observational evidence in particle accelerators all along the 20th century) were built on the assumption that the deep physical reality is oscillating: everything in Nature naturally oscillates, down to space and time themselves. So today’s question is no longer “what are the physical mechanisms that damp these oscillations at scales higher than the sub-nuclear one?”, but “why don’t we directly observe and feel these oscillations in our daily life?”. Surely, consciousness has something to do with this, and we’re sent back to that central idea in quantum theory of the “interaction between the human observer and his surrounding environment”. But this is not the subject of the present article.
The subject of the present article is to start from the conclusion that, in order to oscillate, all physical objects, events and phenomena in Nature need to be doubled. It just cannot work otherwise. If we refuse to double things, then we conflict with observational evidence: it’s as simple as that. We don’t do this because it “suits” us, but because experimental facts impose it on us. Science is anything but speculation; it is, on the contrary, perpetual deduction. So, let’s explain how it works. We’ll have no choice but to do a little bit of elementary math, but everything will be explained step by step, in detail.
Let’s begin with a practical example that will also set a bit of terminology.
Let x(0) designate a “pure distance” between two objects or between the observer we are and an object we want to observe. “Pure” means “absolute”, that is, “unsigned”. By convention, an unsigned quantity is always positive or zero. There’s nothing absurd at all in demanding this of x(0): don’t we usually measure 1 meter and not -1 meter?
Let’s now associate an angle ksi with x(0). ksi is a quantity that stands between 0 and 2pi radians (or 360°, a complete tour around the unit-radius circle), so that, every time we add 2pi, we make a complete tour and retrieve our original ksi (same aperture). Because of this cyclic property of angles, they can be given any value in the continuum: that value can always be brought back to the interval [0,2pi] “up to a certain discrete number of tours”.
We therefore start with this pair [x(0),ksi] and we call it a quantum distance in polar representation. We then call on trigonometric functions, which are built as continuous functions on the unit-radius circle, and we define the “first (or “horizontal”) projection”:
(1) x1(ksi) = x(0)cos(ksi) (cos = cosine)
and the “second (or “vertical”) projection”,
(2) x2(ksi) = x(0)sin(ksi) (sin = sine)
Both obviously depend on the angle ksi. In order to understand what may happen, we have to be very methodical. We call:
- ksi, the quantum state in which our quantum distance x(ksi) = [x(0),ksi] is found;
- x(ksi) = [x1(ksi),x2(ksi)], the very same quantum distance, but now in planar (or “Cartesian”) representation;
- x1(ksi), the “corpuscular-like” distance (or simply “distance”);
- and x2(ksi), the “wavy-like” distance or wavelength.
Why this terminology? When the experimenter evaluates that quantum distance x(ksi), he/she runs two (series of) experiments. The first one is aimed at revealing the corpuscular behavior of that distance. Namely, the experimenter wants to shed light on the “little hard balls” that would serve as “solid vehicles” of space. He/she wants to exhibit the “granular structure of space”. A measurement of x1(ksi) will give him/her this information. The second (series of) experiment(s) is aimed at revealing the wavy nature of space. This time, he/she sees space as a continuum or as a “signal”, and x2(ksi) will give him/her this second piece of information. From these two complementary pieces of information, he/she will be able to deduce the quantum distance he/she is searching for:
(3) [x(0)]² = [x1(ksi)]² + [x2(ksi)]² (Pythagoras’ theorem)
will give the pure distance x(0), while
(4) ksi = Arctan[x2(ksi)/x1(ksi)] (Arctan = arctangent)
will give the (main determination of) quantum state. You’ll notice x(0) no longer depends on ksi. This is because of the fundamental trigonometric relation cos² + sin² = 1.
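Relations (1) to (4) are easy to check numerically. Here is a minimal sketch (I use atan2 rather than a bare arctangent, so that the right quadrant of the circle is recovered; the example values are arbitrary):

```python
import math

# Project a pure distance x0 in quantum state ksi onto the two axes,
# then recover [x0, ksi] from the projections. Names follow the text.

x0, ksi = 2.5, 0.7            # arbitrary example values
x1 = x0 * math.cos(ksi)       # (1) corpuscular-like projection
x2 = x0 * math.sin(ksi)       # (2) wavy-like projection

x0_back  = math.hypot(x1, x2)    # (3) Pythagoras
ksi_back = math.atan2(x2, x1)    # (4) main determination of the state

print(round(x0_back, 12), round(ksi_back, 12))  # the original pair comes back
```

As stated in the text, x0_back does not depend on ksi at all: hypot squares the cosine and the sine, and cos² + sin² = 1 wipes the angle out.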
So, what our experimenter actually does when making measurements is: he/she projects the observed quantum “entity” (here, a distance) onto a “corpuscular axis” and a “wavy axis”. If he/she works in more than 1 dimension, each of these axes becomes a space, if not a space-time (with the same number of physical dimensions).
Yet, we forgot something important. We forgot that our experimenter is actually a human being and, as such, behaves as if he/she were a physical entity of the first projection only. This is because we see ourselves as “mostly if not entirely substantial” and, as “substantial” rhymes with “corpuscular”, we quite naturally place ourselves in the “corpuscular projection”. However, this is not what quantum theory tells us. Quantum theory says we should take into account the existence of a “double”, whom we, as “corpuscular observers”, see as “wavy” and who stands in the second projection. But then, if we follow quantum theory and swap the roles, this “wavy” observer, that “double” of ours, considers him/herself in turn as “substantial” in his/her own space(-time), just as we consider ourselves in ours, and now sees us as “wavy doubles”. There’s a necessary reciprocity in the way things are interpreted by both observers, because this is merely a question of perception. But the quantum reality isn’t this; the quantum reality says:
There exists a single entity; it’s neither “substantial” nor “etheric”, it’s “quantum” and it represents a brand new form of existence with no familiar equivalent.
Here’s what quantum theory says. It only reminds us that performing experiments on the “corpuscular” or the “wavy” behavior of quantum objects is merely aimed at reducing a reality we hardly grasp to more familiar behaviors, accessible to our perception.
There’s a widely used quantity to describe waves: the “wave number” k, defined as 2pi radians divided by the wavelength. These wave numbers are going to help us better visualize the complementarity between our two observers’ perceptions.
We take our “corpuscular-like P1-observer” first; that’s us in everyday life. As we saw, he/she perceives the distance x1(ksi) as “corpuscular” because it stands in the same space(-time) as he/she does. As he/she perceives x2(ksi) as a “wavelength”, he/she will associate it with a wave number, caution: in his/her space. That’s k1(kappa): k1, referring to P1! So, he/she’ll write:
(5) k1(kappa) = 2pi/x2(ksi)
Our “wavy-like P2-observer” will do the same in his/her space(-time): he/she now sees x2(ksi) as “corpuscular” and x1(ksi) as “wavy”, and therefore associates the wave number:
(6) k2(kappa) = 2pi/x1(ksi)
Indeed, the sine function can always be identified with a cosine, and (2) also reads:
(7) x2(ksi) = x(0)cos(ksi – pi/2)
which corresponds to a “corpuscular distance” delayed by a quarter of a tour on the unit-radius circle. Conversely, x1(ksi) can always be identified with a “wavy distance” advanced by a quarter of a tour:
(8) x1(ksi) = x(0)sin(ksi + pi/2)
and this 90° shift precisely corresponds to exchanging P1 and P2… So, you see, the two are really complementary to one another, and the distinction between “corpuscular” and “wavy” behavior is, in the quantum world, only a matter of perception…
Why introduce a different quantum state for wave numbers? Because:
(9) k(0) = 2pi{1/[x2(ksi)]² + 1/[x1(ksi)]²}^(1/2) = 2pi x(0)/|x1(ksi)x2(ksi)|
= 4pi/[x(0)|sin(2ksi)|]
(10) kappa = -ksi
kappa is opposite to ksi.
Let’s now examine some particular values of ksi.
When ksi = 0, x1(0) = x(0) and x2(0) = 0: a P1-observer will perceive x(ksi) as “entirely corpuscular” and “ahead of him/her”. A P2-observer will perceive it as “entirely wavy”. Notice that x(0) is also the maximal distance both projections can reach, as sine and cosine functions stay between -1 and +1.
When ksi = pi/2 (90°), x1(pi/2) = 0 and x2(pi/2) = x(0): roles are permuted; a P1-observer will perceive x(ksi) as “entirely wavy”; a P2-observer, as “entirely corpuscular”.
When ksi = pi (180°), x1(pi) = -x(0) and x2(pi) = 0: same as for ksi = 0, except that x(ksi) is perceived behind these observers.
Finally, when ksi = 3pi/2 (270°), x1(3pi/2) = 0 and x2(3pi/2) = -x(0): same as ksi = pi/2, except for x(ksi) standing behind.
All other values of ksi are quantum, as they mix both projections.
We can proceed the same way for anything other than space. It applies to time, masses, etc.
Quantum states, offering additional degrees of freedom, open onto new physical dimensions.
Basically, we have four “sectors”:
- sector I, ksi is between 0 and pi/2, x1(ksi) and x2(ksi) are both positively-counted (both “ahead of observers”);
- sector II, ksi is between pi/2 and pi, x1(ksi) is negatively-counted (“behind”) while x2(ksi) is still positively-counted (“ahead”);
- sector III, ksi is between pi and 3pi/2, x1(ksi) and x2(ksi) are both negatively-counted (both “behind”);
- and sector IV, ksi is between 3pi/2 and 2pi, x1(ksi) is positively-counted (“ahead”) again while x2(ksi) is still negatively-counted (“behind”).
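The four sectors can be turned into a small bookkeeping function; ksi is first reduced modulo 2pi, and the boundaries follow the list above:

```python
import math

def sector(ksi):
    """Sector of the quantum state ksi, reduced modulo 2*pi."""
    ksi = ksi % (2 * math.pi)
    if ksi < math.pi / 2:
        return "I"       # x1 and x2 both "ahead"
    if ksi < math.pi:
        return "II"      # x1 "behind", x2 "ahead"
    if ksi < 3 * math.pi / 2:
        return "III"     # both "behind"
    return "IV"          # x1 "ahead", x2 "behind"

for k in (0.3, 2.0, 3.5, 5.0):
    print(k, sector(k))  # one example angle per sector
```

The same function serves unchanged for quantum time t(tau) or quantum mass m(mu): only the reading of the signs changes.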
When applied to something like space, this brings nothing we aren’t familiar with. When applied to time, we first find the concept of a “corpuscular” time made of (still hypothetical) particles we could name “chronons”, and the concept of a “wavy time” that still doesn’t speak much to us. Let t(tau) be quantum time. Then:
- in sector I, both t1(tau) and t2(tau) point towards the “future”;
- in sector II, t1(tau) points towards the “past”, while t2(tau) still points toward the “future”;
- in sector III, both t1(tau) and t2(tau) point towards the “past”;
- in sector IV, t1(tau) points towards the “future” again, while t2(tau) still points toward the “past”.
So, we have this alternation between “future” and “past”, while “present” corresponds to tau = pi/2 or 3pi/2 from a P1-observer’s perspective [t1(pi/2) = t1(3pi/2) = 0] and to tau = 0 or pi from a P2-observer’s perspective [t2(0) = t2(pi) = 0]. But, as always, these are mere perceptions: in the quantum, there is no such thing as “past”, “present” or “future”. How could there be, if one is free to go “back to the future”?... :)
When applied to mass, things turn really weird for the experimenter. Let m(mu) be a quantum mass. Then:
- sector I predicts m1(mu) and m2(mu) will both be positively-counted. A 20th-century experimenter would have interpreted this as a “particle of matter”;
- sector II predicts m1(mu) negatively-counted while m2(mu) remains positively-counted;
- sector III predicts m1(mu) and m2(mu) will both be negatively-counted. Our 20th-century experimenter would have interpreted this as a “particle of antimatter”;
- finally, sector IV predicts m1(mu) positively-counted again while m2(mu) remains negatively-counted.
Our 20th-century observer, whether he/she “belongs to” P1 or P2, would for sure have been completely disoriented by sectors II and IV, because they just didn’t match his/her beliefs. For him/her, a quantum particle had either negative or positive energy at rest (which is equivalent to mass through the Einstein relation E = mc²) but, whatever its sign, it would have concerned both the “corpuscular” and the “wavy” components. Now, to my knowledge, there are no selection rules yet to assert that both projections should have the same sign… and, anyway, this is again a false problem, because the mass of a quantum particle at rest is the “pure mass at rest” m(0), which is always a non-negative quantity. So, the quantum principle applied to mass now tells us nothing else but this:
Contrary to our perception of things, there’s nothing in the quantum world like “antimatter”, i.e. “matter with negative energy at rest”. Instead, there is quantum matter with mass at rest m(0) in a quantum state mu.
And, according to the sector that quantum state is found in, we interpret the mass components as being “signed”. However, there’s no “sign” in the quantum; there’s a position on the circle.
And this is easy to check: if you can attribute a definite sign to a single quantity, how can you do so for a pair of such quantities? For a number, you have two possible combinations: +x or -x; for a pair (x,y), you have four: (+x,+y), (+x,-y), (-x,+y) and (-x,-y). Only the first and fourth ones can be attributed a definite sign, because that sign is common to both components. But what about the two others? You can’t…
On the contrary, depending on the position (the “angular aperture”) you occupy on the circle, you can immediately generate all four combinations… :)
So, unless selection rules come along to forbid the (+,-) and (-,+) combinations, we can’t exclude them. Now, I have serious doubts about the existence of such rules, because they would “spoil” the very definition of a quantum mass. And why would they apply to mass and not to space, to time and to anything else, then?... :|
I’d suggest a better explanation. P.A.M. Dirac himself wasn’t satisfied with his own introduction of “particles with negative energies”; he preferred to see them as “anti-particles with positive energies”, drawing an analogy with solid-state physics, where “holes” in the filled “energy sea” behave as particles with positive energies. At that time (the 1920s), it was still assumed that free particles had to have positive energies or, at least, zero; states with negative energies were attributed to bound systems. And, as the wave-corpuscle duality was proposed precisely because one couldn’t separate the corpuscle from the wave anymore, people quite logically assumed that, if one component carried a sign, the other one should carry the same. But keep in mind this was in a non-oscillating space and a non-oscillating time. Making everything oscillate changes the entire picture… We don’t need to struggle with “particles” and “anti-particles” anymore; we now understand better why the projections are not reality at all, but severe reductions of it… We work in a radically transformed frame… We can see there’s no objection to having a “corpuscle” with positive energy and a “wave” with negative energy: there are no “two” entities interacting with each other; there’s one, and that one is just allowed to carry two signs instead of a single one…
Should this shock the community? I don’t think so. After all, gluons carry two colors… :|
All experiments throughout the past century were interpreted through calculations made in a 4D space-time. Surely, an 8D one should lead to radically different results…
So, maybe we didn’t discover quantum particles in mass sectors II and IV because we didn’t search for them… because our experiments were based on the assumption that quantum particles should only belong to sector I or III… because our theoretical models were all founded on mirror symmetry… And from the moment you force a quantity like energy to remain a real number and not a pair of real numbers… well, you necessarily limit your possibilities…
Next time, I’ll talk about areas and volumes.
B134: A HUGE CONSTRUCTION PB WITH C...
Le 28/10/2017
After being absent from the blog for a VERY long time, though not without actively researching in the meantime, I’m back with a HUGE problem on complex numbers, and that’s why I’d like to go back to the old “bidouille” B32 [inside which you’ll be kind enough to replace, in eq (20), T’-+- with -T-+-]. Here’s indeed a reasoning that leads to a stunning contradiction within the very structure of the algebraic field C.
If we combine the cyclic properties of the imaginary unit i:
(1) i^0 = 1 , i^1 = i , i^2 = -1 , i^3 = -i , i^(4+k) = i^4 i^k = i^k
for all non-negative integers k, with the relation out of the de Moivre formula,
(2) i = e^(i pi/2)
we’re led to:
(3) pi = 0…
which is obviously absurd. Let’s prove it straight away.
Eq. (1) shows a base-4 cycle on the integer powers of i. (1a) is not only a convention; it’s also supported by the behavior of power functions near a zero argument. (1b) is the very definition of i as the “imaginary unit” within the field C. (1c) is Cardano’s original definition. Finally, (1d) is the combination of (1b) and (1c), which also defines the complex conjugate of i.
Eq. (2), again, really looks like a wonderful formula, linking the two famous transcendental constants e and pi to i. So, let us now use that relation to compute i^i:
(4) i^i = (e^(i pi/2))^i = e^(i² pi/2) = e^(-pi/2)
The result is real-valued and, apparently, irrational. The fact that it’s real-valued is already quite surprising, from a conceptual viewpoint. But, anyway. Let’s take the square of (4):
(5) (i^i)² = i^(2i) = (i²)^i = (-1)^i = e^(-pi)
The problem arises with the square of that square:
(6) (i^i)^4 = i^(4i) = (i^4)^i = (+1)^i = e^(-2pi)
Let’s point out that we only used the algebraic properties of exponentiation here. Now, (+1) raised to any power, real or complex-valued, should, by definition, give +1. So, we should have:
(7) (+1)^i = +1
immediately inducing (3) after taking the natural logarithm of (6).
Search for the error… I did, and didn’t find any… This totally absurd result is not due to the cyclic property (1e), or only partially. The true culprit is actually the rule of signs in the multiplication table for real numbers:
(8) (-1)(+1) = i² i^0 = i² = (-1) , (-1)² = i² i² = i^4 = i^0 = +1 , (+1)² = i^0 i^0 = +1
which perfectly works when extended to the imaginary unit.
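As a quick numerical cross-check (nothing more): Python’s built-in complex power works on the principal branch of the complex logarithm, and reproducing the chain of (6) there shows exactly where the two sides part ways:

```python
import math

# (i^4)^i computed step by step gives (+1)^i = 1, while i^(4i) gives
# e^(-2*pi): on a single branch of the logarithm, the power rule
# (a**b)**c == a**(b*c) used to chain (6) does not hold.

i = 1j
lhs = (i**4) ** i        # (+1)^i -> 1 on the principal branch
rhs = i ** (4 * i)       # i^(4i) -> e^(-2*pi)

print(lhs)
print(abs(rhs - math.exp(-2 * math.pi)))  # ~0: rhs is indeed e^(-2*pi)
```

Taken literally, the two members of (6) differ by a factor e^(2pi); whatever interpretation one favors, the chain of equalities in (6) is anything but innocent.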
In any case, the cyclic properties (1), which are formally equivalent to a modulo 4, show more than clearly that i should now be taken as a more fundamental unit than +1 and -1, since both can be generated as integral powers of i. Furthermore, cyclicity offers countably many ways of obtaining -1 and +1. In R, we only had a modulo-2 cycle:
(9) (-1)^0 = +1 , (-1)^1 = -1
And, because C was built as an algebraic field, it’s in particular a unital ring and an integral domain, meaning it has a unit element and all its non-zero elements are invertible. Which one should the unit be?... +1, -1, i or 1 + i? The inverse of i is defined (and obtained) as:
(10) i^(-1) = i* = -i = i^3
because of (1e) for k = 0. It’s also confirmed by (2).
Get me right: if there had been any contradiction in the algebraic laws upon which C was built, it would have immediately been detected (and corrected). What leads to the absurd result (3) is actually a consequence of constructing that algebraic field upon the choice of a unit number: C was actually built by mixing real numbers with new ones whose squares were allowed to be negative, so that C appeared as an algebraic extension of R. But we still need to know which element is to play the role of a unit… :| If it’s i, since it looks more fundamental than +1 and -1, then we should find i² = i and i^(-1) = i, by the very definition of a unit element…
SO… folks… should we REALLY forget about “complex” numbers?... I’m just wondering about this, because we have CONCRETE (and no longer “imaginary”…) CONSTRUCTION PROBLEMS…
And I discovered that one only because I was after a physical significance of i. Before quantum theory, i was used as a mere convenient math tool to ease calculations but, ultimately, classical waves appeared as real-valued trigonometric functions: cos, sin, tan,… Since quantum theory and de Broglie’s “wave-corpuscle duality”, it little by little became obvious that we could no longer satisfy ourselves with the “real part” of a mathematical wave, but that we really had to take both the real and the “imaginary” parts into account, on an equal footing. This is because quantum waves behave like refraction indices: they have a “refraction” component and a “reflection” one, and you just can’t neglect the second. This was going in the direction of i acquiring a physical content, as a “fundamental quantum unit”, namely [see (2) once again], as a quantum wave with unit amplitude and constant phase (or “quantum state”) π/2. There was no classical equivalent to that wave, since its real component is zero…
Unfortunately, there are obvious contradictions within the construction of C that do not allow me to go further in that direction.
Maybe I’ll need to go back to B32. Maybe I’ll instead have to change from R to R². After all, this is the same as “doubling R” and it’s even much cleaner than identifying C with R²: use M2(R), where units are correctly defined, as the operator ring acting upon 2-component vectors of R².
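To fix ideas, here is a minimal sketch of that M2(R) alternative, assuming the standard embedding of a + bi as a 2×2 real matrix (the helper names are mine). The point is that the ring unit is then unambiguously the identity matrix, not the “i” matrix:

```python
# Represent a + bi as the 2x2 real matrix ((a, -b), (b, a)), an element of M2(R)
def as_matrix(a, b):
    return ((a, -b), (b, a))

def mul(m, n):
    # plain 2x2 matrix product
    return tuple(tuple(sum(m[r][k] * n[k][c] for k in range(2)) for c in range(2))
                 for r in range(2))

one = as_matrix(1, 0)   # the ring unit: the identity matrix, not "i"
i   = as_matrix(0, 1)   # the candidate "imaginary unit"

assert mul(i, i) == as_matrix(-1, 0)    # i^2 = -1
assert mul(i, as_matrix(0, -1)) == one  # i^-1 = -i = i^3
```

In this representation there is no ambiguity left about “which element plays the role of the unit”.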
However, the distinction between “classical” and “quantum” will need to be reviewed, because it will then be far less obvious than when introducing an “imaginary unit”.
B133: BACK TO THE "POLARITIES" PROBLEM
Le 06/01/2017
I’ve studied many times before what I called the problem of “polarities”, in a different sense than the one usually referred to in particle physics (there, related to the intrinsic rotational momentum of particles – their “spin”). What I mean by “polarities” here refers to the sign of masses or, equivalently, of energies.
Mass is not to be confused with substance: it’s only a property of substance (as is the electric charge or more complicated charges). Besides, we say that a physical body “carries” a mass.
However, the type of substance will set the sign of its mass (opposite to all other charges) and, reciprocally, knowing the sign of a mass will determine the kind of substance we’re dealing with. It’s usually assumed (one more convention) that the sign of “substance” (matter or radiation) is positive or zero, while the sign of “anti-substance” (antimatter or anti-radiation) is negative or zero.
Galilean physics (space relativity, universal time) assumes that all masses are strictly positive and, indeed, human-scale or cosmological-scale antimatter has never been observed so far. This does not mean there is none in our observable universe, but it is now assumed to be very unlikely for, if there were, there would unavoidably be interactions between that antimatter and nearby matter, so that huge radiative jets would be detected with today’s equipment.
This “no see” fact is, after all, rather logical. If we’re fair, at best only fundamental particles can be considered as keeping a constant mass (outside interactions, of course, in which case there are transformations). All other bodies have a more or less variable mass. Mass varies in time simply because there are no closed systems (apart, maybe, from the whole universe itself), so that physical systems exchange with their surrounding environment and this results, in particular, in substance transfer: you feed, you gain mass (in our approximately constant earth gravity field, this can be seen as equivalent to saying we gain weight); you starve, you lose mass (or weight). According to the direction the substance current points towards, substance is brought to a system (from the outside environment into the system) or leaves that system (from the system into the outside environment).
It’s common sense that, given a system with initial mass m(0) > 0, if this system keeps on losing substance in time, there will be an instant, say tf (“f” for “final”), when there’s no more substance inside the system’s volume: m(tf) = 0. It’s then perfectly logical (and fully observable too) to consider there’s no system anymore... So, it would be difficult to take any more substance out of a substance-free volume... That’s the main reason and justification why mass, in Galilean physics, is set to always be strictly positive, whatever its evolution in time: “you can’t lose more than you have”.
Space-time relativity (Galileo + relative time) does not change this vision of things, even though the famous energy relation E² = p²c² + E0², with E0 = m0c² the “energy at rest” of a physical system (m0 its mass at rest and c the velocity of light), enables both signs:
(1) E² = p²c² + E0² => E± = ±(p²c² + E0²)^(1/2)
(and diametrically opposite too) so that, if you set the momentum p to zero (in which case, your system is fixed in space), then (1) gives you:
(2) E± = ±(E0²)^(1/2) = ±|E0| = ±|m0|c²
Absolute values of the energy and mass at rest, up to the sign. These are the mathematically allowed solutions to the quadratic (“power 2”) relation (1). Classical spacetime relativity keeps only the (+) solution, still sticking to Galileo’s frame.
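The two branches in (1)-(2) are easy to compute directly; a minimal sketch (the function name and the convention c = 1 are my choices):

```python
import math

def energy_branches(p, m0, c=1.0):
    """Both mathematically allowed roots of E^2 = p^2 c^2 + E0^2."""
    e0 = m0 * c**2
    e = math.sqrt((p * c)**2 + e0**2)
    return +e, -e

# at rest (p = 0), the two branches reduce to +/- |m0| c^2, as in (2)
print(energy_branches(0.0, 2.0))  # (2.0, -2.0)
```

Classical spacetime relativity simply discards the second element of that pair.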
Quantum physics allows negative masses, because their justification is now based on a different assumption. The quantization process takes the energy and momentum of a classical system and puts them into the phase of a “wavefunction”, a process known in mathematics as “exponential lifting” (I shall not elaborate here, it’s in all refs online – and extensively on this blog as well - !). As quantum physicists use signals and signal couplings (to describe particle interactions), there’s no more “physical obstruction to the law of common sense” in changing the sign of the phase: both signs are perfectly observed and the same way. As a result, if you keep the same orientation for your space and time, then a change of sign on the phase will change the sign of your momentum (space related) and energy (time related).
Should I specify here that we’re actually talking, from the beginning, about free systems?
The problem on the sign of mass (or energy) concerns free (i.e. non interacting) systems.
In linked systems of bodies, a negative energy of the system is accepted (and observed!) as early as Galileo’s space relativity. It’s even what characterizes linked systems: to set the components free (to “dismantle the system”), you need to bring positive energy to that system, from the outside.
The dilemma was about free systems. Classical physics did not observe them. The best spacetime relativity brought was to accept fundamental waves like electromagnetic or gravitational ones as massless substances. “Immaterial”. It allowed the possibility that m0 = 0, under the (less than “mathematically correct”) condition that such substances “move” through space at the speed of light. It holds because, so far, no massive body has been found to move at c (as Heisenberg said of the Planck constant h and quantum “wave mechanics”, “it holds because that’s what we observe”... – at least it had the merit of being honest, recognizing the fact that nobody could explain why it was so – which is still the case).
However, all this (Galileo, Einstein, quantum physics) was developed (because observed) in a 4-dimensional real geometrical frame.
It no longer holds in a 4-dimensional complex geometrical frame... even if we assume the “hermitian” hypothesis, which is the (very crude) mathematical translation of “mirror symmetry” (the name comes from the mathematician Hermite). Usually, theoretical physicists continue assuming real-valued masses. But it’s no longer a pre-requisite, even under supersymmetry, where the masses of partners are equal.
Let’s review once again the connections between dynamics and geometry. It goes back to the late 18th century – early 19th, when people began to use the geometrical tool to describe dynamics. As geometry was quickly progressing, they found it useful to “code” dynamics into geometrical terms. So, they intensively began developing a multitude of spaces with geometries suitable for the type of dynamics they were studying. The result was called “analytical mechanics” and received most of its contributions from people like Lagrange, Hamilton and Jacobi, all along the 19th century. They based themselves on works from geometers like Gauss, Riemann and Grassmann (whose work was only fully exploited much later, in the second half of the 20th century). Jordan, not-the-NBA-but-mathematician-trained superstar turned physicist (shame!... what do I say? HERESIA!), brought matrix theory to the quantum formalism in the late 1920s, following Heisenberg’s 1925-1926 description.
Little by little, they came to the following correspondences:
Riemann’s geometrical axiom <-> commutative geometry [xy = yx, see B131, formula (1) or (2)] <-> spaces with a symmetric metric <-> radiations (Bose-Einstein stat, integer spins);
Grassmann’s geometrical axiom <-> anti-commutative geometry [xy = -yx, B131, formulas (3) or (4)] or “projective” geometry (because related to projections onto planes rather than axes) <-> spaces with a skew-symmetric metric <-> matter (Fermi-Dirac stat, half-integer spins);
Kähler’s geometrical axiom, a synthesis of the preceding two, with “enough good regularity conditions” (“smooth” spacetimes or “continua”) <-> complex geometry <-> spaces with a hermitian metric <-> supersymmetry between matter and radiation.
It’s worth noticing that Grassmann’s geometry was required in connection with the so-called “phase spaces” of analytical mechanics. Typically, these are spaces where the complete “classical” motion of a given body or system of bodies is well described in geometrical terms. To describe such motions, we need to determine, following the “classical” laws of motion, both the position of a system in space (or spacetime) and its “quantity of motion” (momentum, the product of mass and velocity, or energy-momentum in spacetime dynamics). This doubles the number of required dimensions of the “configuration space”, as there are as many momentum components as there are position coordinates, but it does not endow that “enlarged” space with a complex structure for all that. What we get instead is a “symplectic” structure (another vulgar math term) that describes the dynamics in a “space of planes” (planes come to replace axes as “coordinate systems”) and the geometry of such spaces no longer obeys Riemann’s axiom, but rather Grassmann’s. The connection with quantum mechanics was made later, when people realized that the spin of a particle can be used like the classical “rotational momentum”, the vector product [see B132, § beginning with “but let’s now reverse the problem”, before formula (6)] of the position and the momentum of the system, and this despite the fact that spin is a purely quantum quantity with no classical equivalent. Hence the use of “q variables” or “coordinates” for spin 1/2 (the most fundamental half-integer spin), with anti-commuting properties, and the connection I made in B131, formula (5), between these anti-commuting variables (measured in m^(1/2)) and “more familiar” commuting variables xi+4. What enables this are the Pauli “transition” matrices.
I had to explain all this before coming to the subject, or the non-familiar reader would have understood nothing (and still, I hope he grasped something of the brief introduction I made!).
What comes directly out of complex geometry is that objects remain single (it’s very important to keep this in mind) but, when projected onto real sub-spaces, they become double: we find what we call in maths a “real part” which is here the state-(1) component of a super-quantity (in z = x+iy, it’s then x) and an “imaginary part” which is the state-(2) component of that super-quantity (that is, y).
And this is pretty understandable, if we think of it carefully. If the frame itself, that is, space and time, is to mathematically complexify, it physically means it oscillates: complex geometries are the siege of oscillating spaces, times and spacetimes. So, if the frame is the very first one to oscillate, then we can legitimately expect that any object, any event and any process within that frame will oscillate as well. We can have amplifications or dampings, but we will always have, in addition, an oscillatory behaviour that we will never be able to suppress, whatever we do or try: objects become signals and signals become objects.
The supersymmetric frame is “essentially fundamental”. It is so fundamental, “in essence”, that it actually gives birth to both matter and radiation. It’s a very “primitive” frame, in the sense of “original”. From this 4-dimensional “oscillating” frame emerge, as 4-dimensional “projections” into real sub-spaces, both matter and radiation fields. The supersymmetric association between them then says that what is observed as behaving like “matter” (resp. “radiation”) in one of the two available “sub-worlds” will behave like “radiation” (resp. “matter”) in the other sub-world.
However, there’s more. Much more. Supersymmetry does not “only” unify matter and radiation, giving an unidentifiable, very primitive substance that vaguely resembles “radiating matter” or “material radiation” (to give a rough idea of it, but it is actually none of these), it also unifies substance and anti-substance. Yes, dear. And this is thanks to that “new math operation” called complex conjugation, which becomes an inherent property of supersymmetric spacetimes (whereas it desperately remains an “external” operation in real sub-spacetimes). We saw this in formula (7), B132. What complex conjugation does is reverse the sign of the state-(2) component, while keeping that of the state-(1) component unchanged:
(3) θ -> -θ => x -> x , y -> -y
It’s also possible to reverse the sign of the state-(1) component while keeping that of the state-(2) unchanged, but it’s a bit more complicated and requires some additional operation. To go from z = x+iy to -z* = -x+iy, we first need to perform complex conjugation, then reverse the sign of the result (z*). As i² = -1, we can also write our result in the form -z* = i²z*. We then use the remarkable (in all senses) properties of the two famous transcendental numbers, e = 2.718281828459... and π = 3.1415926535... which, combined with the imaginary unit i, give this:
(4) e^(iπ) = -1 , e^(iπ/2) = i
(the third famous fundamental number, the Euler–Mascheroni constant γ, has still not been formally proven to be irrational). These formulas are truly remarkable, probably amongst the most remarkable in mathematics, as they “close onto each other”. Applying (4a) to our result gives us -z* = i²z* = e^(iπ)z* = e^(iπ)re^(-iθ) (in polar representation) = re^(i(π-θ)) (additive property of the argument of the exponential function), that is, a phase shift of π-θ from the original angle θ. As a conclusion, to reverse the sign of x without touching that of y, we need to reverse the sign of the phase angle θ of our supersymmetric quantity z, then shift it by +π, or 180°.
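Both identities (4), and the π-θ phase shift produced by z -> -z*, can be checked numerically in a few lines; a small sketch using Python’s cmath (the sample values r = 2, θ = 0.7 are arbitrary):

```python
import cmath

# the two identities in (4)
assert abs(cmath.exp(1j * cmath.pi) - (-1)) < 1e-12
assert abs(cmath.exp(1j * cmath.pi / 2) - 1j) < 1e-12

# -z* = r e^{i(pi - theta)}: x changes sign, y is kept
z = cmath.rect(2.0, 0.7)     # z = r e^{i theta} with r = 2, theta = 0.7
w = -z.conjugate()
r, phi = cmath.polar(w)
assert abs(r - 2.0) < 1e-12              # same amplitude
assert abs(phi - (cmath.pi - 0.7)) < 1e-12  # phase shifted to pi - theta
```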
What does this all mean?
It means that, when it comes to considering a quantity like mass, we now have to deal with an oscillating mass:
(5) M = m1 + im2 = m exp(iθ) = m cosθ + i m sinθ
measuring the amount of “substance” contained inside an oscillating “super-object” (with an oscillating volume, by the way). Only the magnitude m of that mass, which is a real-valued quantity, is assumed to always be positive or zero. This is because the sign doesn’t matter anymore: it’s now carried by the value of the phase angle θ, so that m can always be set non-negative once and for all:
- if 0 < θ < π/2 (sector I on the unit-radius circle), both m1 and m2 are > 0; this is interpreted in sub-worlds as “matter and radiation”;
- if π/2 < θ < π (sector II), m1 < 0 while m2 > 0: “antimatter and radiation”;
- if π < θ < 3π/2 (sector III – diametrically opposed to sector I), both m1 and m2 < 0: “antimatter and anti-radiation”;
- if 3π/2 < θ < 2π (sector IV – diametrically opposed to sector II), m1 > 0 while m2 < 0: “matter and anti-radiation”.
Special values are (up to 2π):
- θ = 0, m1 = m > 0, m2 = 0 (massless substance);
- θ = π/2, m1 = 0, m2 = m > 0;
- θ = π, m1 = -m < 0, m2 = 0, and
- θ = 3π/2, m1 = 0, m2 = -m < 0.
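The sector reading above can be sketched as a tiny classifier (the labels and the tolerance eps used for the boundary cases are my own choices):

```python
import math

def classify(theta, eps=1e-9):
    """Sector reading of a complex mass M = m e^{i theta}, with m > 0."""
    theta = theta % (2 * math.pi)
    m1, m2 = math.cos(theta), math.sin(theta)
    if abs(m1) < eps or abs(m2) < eps:
        return "boundary: one massless component"
    if m1 > 0 and m2 > 0:
        return "I: matter + radiation"
    if m1 < 0 and m2 > 0:
        return "II: antimatter + radiation"
    if m1 < 0 and m2 < 0:
        return "III: antimatter + anti-radiation"
    return "IV: matter + anti-radiation"

print(classify(math.pi / 4))   # I: matter + radiation
print(classify(math.pi))       # boundary: one massless component
```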
We can already combine “everything with everything”: substance, anti-substance, massless substance, which, again, is nothing crafty ;) but perfectly logical: if “super-substance” is to give birth to both matter and radiation, it has to give birth to antimatter and anti-radiation just the same, “in an equal way”, since the sign attributed to a substance is, after all, a mere question of human convention: had we chosen the (-) sign for all the substances we observe “at our scale and beyond”, we would have counted masses with a common negative sign...
Notice that sectors are well defined and delimited:
- adjacent sectors I and II are separated by θ = π/2 (turning counter-clockwise, i.e. in the trigonometric sense), where state-(1) is filled with massless bodies (such as photons, for instance, the quanta of electromagnetic light);
- adjacent sectors II and III are separated by θ = π, where state-(2) is filled with massless bodies;
- adjacent sectors III and IV are separated by θ = 3π/2, where again state-(1) is filled with massless bodies;
- finally, adjacent sectors IV and I are separated by θ = 2π (or 0, since we’re back to our starting point – we’ve accomplished a complete turn round the “counter-clock”), where again state-(2) is filled with massless bodies.
We can be even more precise. As m1 changes sign when “jumping” from sector I to sector II, on the I-side m1 tends towards 0+ (massless substance) while, on the II-side, it turns to 0- (massless anti-substance): the simple fact of changing sector changes the sign of the concerned mass component and therefore the type of substance we’re dealing with.
This is very unlikely to be possible for “ordinary” matter or radiation, for the reason we saw at the beginning of this bidouille: in a real geometrical frame, you can’t withdraw more substance than you have.
What I want to point out here is that, in “super-substance”, there’s no reason why this should still be forbidden. No physical law now opposes it, and this faculty is in full agreement with the fact that the notions of “substance” (m > 0) and “anti-substance” (m < 0) lose all meaning: the complex mass M being no longer a real-valued quantity, comparisons like M > 0 or M < 0 make no sense. The only relation which makes sense is M = 0, which is an equality. In this case, m1 = m2 = 0, as the magnitude m of M is zero. The magnitude m is the only real-valued mass that can be submitted to a comparison with the “universal reference” zero, indicating the absence of substance, and we saw that, by convention, we can always set m ≥ 0...
This was all about constant masses. But we said above that, because of substance transfers from and to the surrounding environment, masses, in practice, are not expected to remain constant all the time. Can we extend the time-dependence of mass to the complex frame? Absolutely: taking a complex time T = t1 + it2 = t e^(iα) = t cosα + i t sinα, we even have four ways to decompose our mass function M(T) in components:
(6) M(T) = m1(t1,t2) + im2(t1,t2) = m1(t,α) + im2(t,α) = m(t1,t2)exp[iθ(t1,t2)] =
= m(t,α)exp[iθ(t,α)]
as we have two possible representations for time (planar or polar) and two others for mass.
Let’s look at the last one. The magnitude m(t,α) is now variable. It can vary with t ≥ 0 as well as with the time orientation angle α. The condition:
(7) m(t,α) ≥ 0 for all t ≥ 0 and all 0 ≤ α < 2π
is not restrictive at all, since the sign of the mass components m1(t,α) and m2(t,α) is still carried by the mass angle θ(t,α) [for physicists now, this “mass angle” would be found in the associated isospace-time, time-like component – B131, unitary group SU(3,1)]. This mass angle is now variable as well (it has no reason to remain constant). Variable! Meaning it can change... with time... (and with the time angle, but this is more abstract to us).
Does this mean that each mass component can now change sign?
That’s an interesting question.
Both m1(t,α) and m2(t,α) are oscillating, since:
(8) m1(t,α) = m(t,α)cosθ(t,α) , m2(t,α) = m(t,α)sinθ(t,α)
Let’s fix t = 0 to be the instant we start our observation. At this instant, θ(0,α) = θ0(α) still depends on α. So, it’s still variable. We need a stronger condition on α, say α = 0, set to be the time angle at which we begin observing. And let θ0(0) = 0 for simplicity (it all starts at zero-degree angles). Then, our initial masses are m1(0,0) = m(0,0) > 0 and m2(0,0) = 0 [an m(0,0) = 0 magnitude would have no interest at all]. Suppose θ(t,α) increases. If its evolution is not bounded between 0 and π/2 then, once θ(t,α) becomes greater than π/2, m1(t,α) will change sign. Okay, we will then go from sector I to sector II, so it may happen that the “conversion” is no longer observable to a human observer.
The same will obviously hold for m2(t,α), when going into sectors III and IV. Anyway, the simple fact of substituting M(T) for its complex conjugate [M(T)]* = M*(T*), which is a function of T*, as a development in powers of T* immediately shows, suffices to reverse the sign of m2(t,α).
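As a toy illustration of such a sign flip (the linear phase law θ(t) = 0.1 t at fixed α = 0 and the constant magnitude m = 1 are pure assumptions of mine, just to make the point):

```python
import math

# toy law (assumption): m(t, 0) = 1 and theta(t, 0) = 0.1 * t
def m1(t):
    # (8) with unit magnitude: m1 = m * cos(theta)
    return math.cos(0.1 * t)

print(m1(0.0))    # 1.0: sector I, ordinary substance
print(m1(20.0))   # cos(2.0) < 0: theta has crossed pi/2, m1 changed sign
```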
Can we really withdraw more substance than available?...
Yes, IF we count negative masses as positive anti-masses. We reduce the (positive) mass of a substantial body down to making it completely disappear (m = 0). We can no longer “pump” substance “out”, right? But what we can do is replace the vanished substance with anti-substance. And quantum physics actually says (and shows!) that this process is absolutely equivalent to keeping on pumping substance out of the vacuum! Equivalent, yes, but that doesn’t make it feasible. Hence the introduction of the concept of antimatter by Dirac, to “fill” the (relativistic) holes left by matter in energy bands. Recall that this was allowed because the wavefunctions were complex-valued quantities with magnitude and phase.
Well, this is exactly what we have here with mass, charges, and everything, including space and time themselves!
Of course, if we “stick” to “our” 4-dimensional “sub-world”, then not only will all phase angles be everywhere constant, they will be set to zero, and we’ll recover m1(t,0) = m(t,0) > 0, m2(t,0) = 0 for all t.
Okay. Let’s turn to biophysics and see the implications of all this.
Supersymmetry not only says, it asserts, again, based on well-observed and reproduced evidence, that the true geometry of Nature is not real but complex. It means that “we have an observation problem”. Or, in other terms, “we’re not completely blind, but nearly”... we see rigid bodies where we should see oscillating bodies. We see substance on one side, anti-substance on the other. We don’t see one transforming into the other. We are subject to observational limits. Even in our particle accelerators, we are limited by the levels of energy our apparatus can deliver: if they’re not powerful enough, we have to wait for the next generation (if it’s not too far off!) to expect to observe something.
You take a biological body: this is “real” substance, it’s the tip of the iceberg.
You take biological fields: these are “real” radiation fields, again, tip of the iceberg.
We take absolutely no phase into account. We don’t see that this is actually embedded in a wavy world. What we observe and study are limited properties.
I don’t say it, supersymmetry says it!
I never asserted anything, by the way, I always looked at what physics said...
I have an animal body in front of me. I’m a state-(1) observer and so is that body. I assume it’s material. It first has a supersymmetric partner in state-(2) which is radiative, that is, out of my reach, since my observations are “confined” to state-(1). According to an operation that is out of my reach too, they both have an anti-counterpart. And all this actually makes a single “super-body”... all the rest are mere transformations. What’s the true substance? I just can’t define it, it all looks and sounds contradictory to me... If I look at the physical laws I can observe in my real geometrical state-(1) “restricted world”, it should all disappear into light. It should all neutralize. Now, it’s not the case, or I wouldn’t be there to observe and the animal in front of me wouldn’t be there either. So, why can it be so? I don’t know. The only thing I can say is that “it’s all-in-one”.
I am in front of an oscillating substance in an oscillating universe, extremely primal, where “all is in one”: matter, radiation, antimatter, anti-radiation. It all transforms under one of these 4 forms and it’s none of them at the same time. That’s the best I can describe from where I stand. To me, it’s absurd physics...
This animal in front of me has a consciousness, which is an electromagnetic process. This consciousness has a supersymmetric partner in state-(2) which is a material plasma of photinos: a material substance! I have light propagating along neuron cells in state-(1); I have matter particles (photinos) propagating along “virtual neuron cells” in state-(2). My photons (the ones I observe) are massless, my photinos have non-zero mass. If supersymmetry is respected, these photinos should be massless as well, but in another “state of life”.
Assume all these masses remain constant in time. There still remains the time orientation angle... which I did not take into account in my observations... According to (8) above, I can still have a dependence:
(9) m1(α) = m(α)cosθ(α) , m2(α) = m(α)sinθ(α)
I would have to fix my time angle α to get a fixed mass angle θ. That would require very restrictive physical laws... so restrictive, actually, that I would have to justify them... understand: there’s nothing natural in these restrictions. It’s so unnatural that, should I set α = 0, I would fix my time arrow to t1 = t > 0, t2 = 0 and, if I set α = π, I would fix my time arrow to t1 = -t < 0, t2 = 0: in the first case, I would be unable to define a “past” in state-(1) and, in the second case, a “future” (and since I would no longer have complex conjugation, I wouldn’t be able to use it to reverse my time arrow...). THAT is weird... :)
Relations (9) should be clear enough by now and (8) even more: beginning with m1 and m2 both positive, I can end in many ways. Nine, to be precise (3²):
(m1 > 0 , m2 > 0) , (m1 > 0 , m2 = 0) , (m1 > 0 , m2 < 0)
(m1 = 0 , m2 > 0) , (m1 = 0 , m2 = 0) , (m1 = 0 , m2 < 0)
(m1 < 0 , m2 > 0) , (m1 < 0 , m2 = 0) , (m1 < 0 , m2 < 0)
Physical interpretations should now be easy...
(m1 = 0 , m2 = 0) in particular = NO SUBSTANCE. NOTHING REMAINING. Mere “quantum super-light”. Or super-substantial vacuum.
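The nine cases are just the Cartesian product of the three possible signs of each component; trivially:

```python
from itertools import product

signs = (+1, 0, -1)                       # possible sign of a mass component
combos = list(product(signs, repeat=2))   # all (sign m1, sign m2) pairs
print(len(combos))  # 9
```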
The Bible says that “God made us in His image”. For long, it made me blink... However, if our true physical reality is that of supersymmetric entities, then we’re all of the same original nature and it fits with the Bible’s assertion. Recall that, in the Old Testament, nothing is referred to as a “Universal Evil”: God grants and punishes. He’s at the same time “positive for those who build and create”, “negative for those who destroy”. This, again, fits much better with supersymmetric physics. What we call the “Devil” didn’t even exist in St John’s Apocalypse: he only mentioned the “beast”, which actually referred to the emperor Nero. That “dichotomy”, that “split” between the “Essentially Positive” and the “Essentially Negative” was made much later, in the Middle Ages: then came this idea of a “Universal Evil” aimed at destroying and punishing everything and everybody. That notion of a “Creator” on one side and a “Destroyer” or “Annihilator” on the other, who “fell from Heaven”.
This is not very consistent with physics.
What’s consistent with physics is that ability to turn evil or turn good. Change polarities.
What’s also consistent with physics is the “Judgement of Souls” – assuming we consider as “souls” a whole supersymmetric body. It does not matter if the “biological projection” into state-(1) or state-(2) ceases functioning; we saw in B132 that this actually has no incidence whatsoever on the supersymmetric body, because it had assimilated that from the beginning (IF such notions of a “beginning” and an “end” can still be given meaning). We now see that, in addition, the “biological mass” measuring the amount of biological substance (consciousness included) in an animal body can well fall down to zero (which takes a long time...): it only transforms the supersymmetric body. But it does not change it into something else, since all these transformations are also “coded” inside it! The only thing that can change it is m1 = m2 = 0, that is, M = 0 permanently.
Now, mix all these polarization possibilities with what monotheist religions say and make your own deductions. The way your consciousness drives your acts, etc.
B132 : SOLVING THE OBSERVABILITY PROBLEM (AND MANY MORE...)
Le 02/01/2017
FINALLY SOLVING THAT PAINSTAKING OBSERVABILITY PROBLEM
In a complex-valued geometry, you can basically describe things in two ways: either using “planar components” or “polar” ones. Both obviously make use of the (excellent!) mathematical properties of the so-called “imaginary unit” i, the square of which is negative, equal to -1: i² = -1. Let’s make this explicit in real dimension 2, that is, complex dimension 1 (and you’ll see why right now). We thus have a single object, say z, the physical significance of which has no importance for the time being. z is a complex-valued quantity. In planar components, it decomposes as z = x + iy, which is the complex transcript of “the pair of real-valued quantities” (x,y). In polar components, it decomposes as modulus (or amplitude) and argument (or phase angle): z = r exp(iθ). Since both writings are absolutely equivalent (these are mere representations of the same object), using Euler’s formula:
(1) exp(iθ) = cosθ + i sinθ
we find a one-to-one correspondence between these two possible representations, namely:
(2) x = r cosθ , y = r sinθ
or, conversely,
(3) r² = x² + y² , tanθ = y/x
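The correspondence (1)-(3) is exactly what Python’s cmath.polar / cmath.rect pair implements; a minimal check with the arbitrary point x = 3, y = 4:

```python
import cmath
import math

z = complex(3.0, 4.0)        # planar components: x + iy
r, theta = cmath.polar(z)    # polar components: r e^{i theta}

assert math.isclose(r * r, z.real**2 + z.imag**2)      # (3): r^2 = x^2 + y^2
assert math.isclose(math.tan(theta), z.imag / z.real)  # (3): tan theta = y/x
assert cmath.isclose(cmath.rect(r, theta), z)          # (2) goes back to x, y
```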
What this shows in practice is that, in the structure of space as in that of time itself, there appear orientation angles that do not exist in real geometry. This is essentially due to the fact that, for each physical dimension, we no longer work along a corresponding line, but inside a whole corresponding plane: mathematically, the real plane (real dimension 2) can be made formally equivalent to the complex line (complex dimension 1, see above), assuming the properties of the imaginary unit. So, we’re back to the same notions, lines associated to each dimension, but in the frame of complex geometry: when we go from real to complex, we double the number of dimensions; when we go from complex to real, we divide the number of dimensions by two. We therefore find a time plane in place of the former time line and, similarly, three space planes in place of the former three space lines. Or, equivalently, a single complex time line and three complex space lines.
I’d like to be as clear as possible on this general aspect of things, as it’s essential: before doing anything else or going any further, you have to make your mind to the fact that, in complex geometry, there are no such things as “amplitudes” and “phase angles”, or “first and second projections” (in planar components), these are all real-valued quantities, therefore referring to real geometry. In complex geometry, the only thing we have is the complex-valued object z above. Full stop. Planar or polar representations of it appear when we report this object to real quantities.
This is absolutely crucial or you won’t understand the world of supersymmetry. The object z is a completely new object. And the notion of complex dimension also has to be viewed as a completely new notion.
Complex objects, events and dimensions have no equivalent in the physical world. At best, they admit real representations, once projected into half as many dimensions.
Let’s take the example of time. Real time, we all understand what it is. We can feel it progressing.
Complex time?... Can anybody tell me what it is?... I can’t. All I can figure out about it is a “time plane”, or a “two-component time”, or a “time with an amplitude and an orientation”, when I report that concept to my knowledge and experience of real time. I have no idea at all of what “complex time” really is, so I merely picture two real times, “orthogonal” to each other, even though I’m unable to explain what “orthogonality” means for time (nor for space, by the way), since I don’t feel it in my daily life...
I find it easier to come to terms with the concept of a single “complex time”, for it looks more like the single real time I’m familiar with...
And look: as soon as you report that complex time to real ones, you find... “absurdities”; you find an “arrow of time” that can be shortened, even if you keep going in the same direction, and this, because of the presence of a “time orientation angle”... Look back at expressions (2): both cosine and sine are basic trigonometric functions bounded by -1 and +1:
(4) -1 ≤ cos θ , sin θ ≤ +1
whatever the involved angle. So, here I now find myself with two projections of my initial complex time, namely x = r cos θ and y = r sin θ, both bounded by ±r:
(5) -r ≤ x , y ≤ +r
Question #1: where is the time I’m supposed to measure in state (1) (“my biological state”)? Is it r or is it x? It should be x, since it’s the projection of z in state (1). But x is generally shorter than expected! So, according to the orientation I would have in the (real) time plane, the measured time in both states would “oscillate” between -r in the past and +r in the future... WHAT THE HELL DOES IT MEAN???... Assume I’m following the conventional “arrow of time” in any of these two real states, that is, starting at present, time flows towards the future. For θ = 0, my time interval is +r (> 0); for θ = π/4 (that’s 45°), it’s already +r/√2 ≈ 0.707 r, for the very same distance through space.
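The shrinking of the measured projection with the orientation angle is easy to tabulate (a plain numerical illustration with r = 1; the three angles are the ones quoted above):

```python
import math

r = 1.0  # amplitude of the complex time interval
for theta in (0.0, math.pi / 4, math.pi / 2):
    x = r * math.cos(theta)  # state-(1) projection of the complex time
    print(f"theta = {math.degrees(theta):5.1f} deg -> measured time x = {x:.3f}")
# successive values of x: 1.000, 0.707, 0.000
```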
What the hell does it mean?... is it some “new kind of time relativity”? Not even...
To me, as a “biological observer of state (1)”, just as to any hypothetical “non-biological observer of state (2)”, it sounds absurd, or I would have perceived these “time and space oscillations” long ago...
What does supersymmetry tell me? It tells me that, at the microscopic level, these orientation angles come from an initial spin structure. It also explains why I no longer feel this spin-structure effect at “my” scales: because of spin combinations in composite matter. Spin effects even tend to neutralize, as they distribute randomly in matter under “nominal” (i.e. non-extreme) conditions (which is the case, in particular, of all biological matter). Even ferromagnetism is a large-scale display of collective spins, and it reveals itself only under external influence, leading to magnets, when atomic spins collectively orientate in the direction of a magnetic field. So, at “my” scale, if nothing external to “my” surrounding medium comes to collectively favour a spin direction, I will not perceive anything of it, it will all be drowned in statistical fluctuations, and I will feel still less any incidence on space, and even less on time. In other words, to my perception, the orientation angles of space and time will all be dramatically close to zero, up to an integer multiple of π (180°). The best I’ll be able to perceive, once an origin is taken, will be “future” (θ = 0, cos 0 = +1, positive orientation of my time arrow) and “past” (θ = π, cos π = -1, negative orientation of my time arrow). That’s all...
The very same is expected to happen in state (2), for θ = π/2, sin(π/2) = +1 (“future”) and θ = 3π/2, sin(3π/2) = -1 (“past”).
As you can see, there are many more implications to take into account in physics than in mathematics. Mathematics would let you believe “great! I found a brand new property!”; physics immediately warns you: “your brand new property is observed, at best, at the level of particle physics...”, where effects are no longer drowned by statistical fluctuations or damped by any other collective or combination process.
To sum up: at the particle scale of supersymmetric theories, we do have a “brand new property”; at all higher scales, including the atomic one, we rather have a “brand new frame” occupied by “brand new objects”. Contrary to what mathematics seems to show, projecting this new frame and these new objects onto either of the two possible 4-dimensional “sub-worlds” does not generate perceptible effects for as much... It’s a bit more complicated than that...
But let’s now reverse the problem. Assume I want to detect anything “complex” from my state (1). I CAN’T! How would I do it? It would require me to embed myself in a higher-dimensional world and, still, it wouldn’t be enough! Why? Because an 8-dimensional real geometry is not a 4-dimensional complex one!
Consider our decompositions again. We had a mathematical correspondence between a pair of real-valued quantities (x,y) and the complex quantity z = x + iy. But, wait a minute... if I reason within the frame of real geometry, then the product of two pairs (x,y) and (x’,y’) will be... what? (x,y) actually makes a two-component vector. So, I first have to specify what kind of product I want to perform: if it’s the scalar product, I’ll find (x,y) · (x’,y’) = xx’ + yy’, which I can identify with (xx’+yy’,0); if it’s the vector product, I’ll find (x,y) × (x’,y’) = xy’ - x’y, which I can identify with (xy’-x’y,0). Let’s compare with the product of two complex numbers z and z’:
(6) zz’ = (x+ iy)(x’ + iy’) = xx’ – yy’ + i(xy’ + x’y)
since i² = -1. The two products do not correspond at all... However, I have a “magic new property” in complex geometry, which is complex conjugation:
(7) z* = x - iy = r exp(-iθ)
Again, an operation that does not exist in real geometry. Let’s calculate zz’*:
(8) zz’* = (x + iy)(x’ – iy’) = xx’ + yy’ – i(xy’ – x’y)
Now, I find both my scalar product and my vector one at the same time, in a single expression, using a single (complex) product.
What this reveals should be crystal-clear: if I try to extend anything from real geometry to complex geometry, I’ll fail for sure, whereas, if I use the suitable quantities in complex geometry, I’ll land for sure on the correct real ones...
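Relation (8) can be verified numerically: the real part of zz’* is the scalar product, minus its imaginary part is the vector product (the two sample values below are arbitrary):

```python
# Real 2D products recovered from the complex product z * conj(z')
z, zp = 3 + 4j, 1 - 2j

scalar = z.real * zp.real + z.imag * zp.imag   # xx' + yy'
vector = z.real * zp.imag - zp.real * z.imag   # xy' - x'y

w = z * zp.conjugate()   # relation (8): zz'* = xx' + yy' - i(xy' - x'y)
assert abs(w.real - scalar) < 1e-12
assert abs(-w.imag - vector) < 1e-12

# With z' = z, relation (8) reduces to zz* = x² + y² = r² >= 0
assert (z * z.conjugate()).real == z.real**2 + z.imag**2
```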
This is therefore not only about doubling the number of dimensions: the imaginary unit i was introduced (by Cardano and Bombelli) precisely because no one could find explicit solutions of the cubic equation x³ + 3px + 2q = 0 in the frame of real numbers... and, indeed, i has no equivalent in real numbers. Only its square, i², has, since it’s defined as being -1, a negative real number. Negative squares were actually required to solve the cubic above.
What is real-valued is zz*, equal, according to (8) with z’ = z, to x² + y² = r². This quantity is always real and non negative.
In conclusion, it’s better to reason within the frame of complex geometry, with complex objects, events and processes, then to project the results onto each real state, expecting these results to be different from those we usually measure in our state, because of the presence of orientation angles, or spin structures.
If we do this, then we do find “new possibilities”, such as the vanishing of only one component out of the two. And, when this component [the “state-(1) component”, to be explicit] is the 4-dimensional momentum, what it implies is that the momentum (“quantity of motion through space”) and the energy (“quantity of motion through time”) of a supersymmetric object turn out to be zero, simultaneously, in state (1), making this object mechanically undetectable in this state [whereas it has a priori no reason to be so in state (2)]. Conversely, if the “state-(2) component” of this object identically vanishes, it will be undetectable in state (2), but will have no reason to be so in state (1).
Usually, this is obtained by fixing the value of the orientation angle: for θ = kπ, k integer, cos θ = (-1)^k, while sin θ = 0 and consequently x = (-1)^k r, while y = 0; for θ = (2k+1)π/2, cos θ = 0, while sin θ = (-1)^k and then x = 0, while y = (-1)^k r. This is the most general situation where the amplitude of a given complex quantity is not required to identically vanish, which would make it vanish in the whole “super space” and, as a result, in both states (practically, in physics, no amplitude = no existence: even the vacuum has amplitude...).
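Fixing the orientation angle as above indeed kills one projection while leaving the amplitude intact; a quick check (the amplitude r = 2 is arbitrary, and the tolerances just absorb floating-point noise):

```python
import cmath
import math

r = 2.0  # arbitrary non-zero amplitude
for k in range(4):
    # theta = k*pi: the y (state-(2)) projection vanishes, x = ±r survives
    z = cmath.rect(r, k * math.pi)
    assert abs(z.imag) < 1e-12
    assert abs(abs(z.real) - r) < 1e-12

    # theta = (2k+1)*pi/2: the x (state-(1)) projection vanishes, y = ±r survives
    z = cmath.rect(r, (2 * k + 1) * math.pi / 2)
    assert abs(z.real) < 1e-12
    assert abs(abs(z.imag) - r) < 1e-12
```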
Complex geometry, supersymmetry and that concept of a “super spacetime” actually solve numerous problems encountered in the frame of parapsychology. Just to mention:
- non-observability or non-reproducibility of “paranormal events” [zero quantities in state (1)];
- space, time and space-time distortions [orientation angles];
- thermodynamical reversibility [complex temperature and entropy...];
- tunnelling structures: complex analysis shows that, because of orientation angles, there’s a natural link between functions of complex variables (super fields over super spacetime) and cylindrical symmetries (axial symmetries).
In other words, “tunnels” can form in a complex world as rather common dynamical structures, whereas this is far from obvious in a real world, where axial symmetry is usually seen as a lower symmetry than the most isotropic one, the spherical symmetry. For instance, evolved animals (particularly mammals) show axial symmetry as a result of the development of the embryo along the nervous system. The helicoidal symmetry of the double DNA macro-molecule is also a product of evolution. None of these structures is a fundamental property of state (1) (“our observable 4D universe”). The only such symmetry observed so far at the fundamental level is attached to the neutrino.
Let’s come back a bit more to this question of thermodynamical reversibility, as it’s one of the most important ones, since it determines whether a super system can or cannot be altered in any way. We have no other choice than to be a little technical here, as we have to perform some calculations to justify our conclusions. I’ll try to be as clear as possible.
Boltzmann’s “H” theorem established that the entropy s of a given system made of many “particles” (typically, molecules) is a function of time, s(t), and can only increase with time: ds(t)/dt ≥ 0; the instantaneous variation of entropy with respect to time cannot be negative, indicating that a system always downgrades. This result was obviously obtained within the frame of real geometry.
If we try to simply “replicate” it in the frame of complex geometry, we then have to consider a complex time T and a complex entropy S. S remains a function of T, S(T) and we have to calculate its instantaneous variation. With T = t1 + it2, we have:
(9) d/dT = ½ (∂/∂t1 - i ∂/∂t2)
giving:
(10) dS(T)/dT = ½ (∂/∂t1 - i ∂/∂t2)[s1(t1,t2) + i s2(t1,t2)] = ½ [∂s1(t1,t2)/∂t1 + ∂s2(t1,t2)/∂t2] +
+ ½ i [∂s2(t1,t2)/∂t1 - ∂s1(t1,t2)/∂t2]
[the calculation is similar to the product of complexes we performed in (6) above]
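Definition (9) is the standard complex derivative written in components; as a sanity check, applying it by finite differences to a hypothetical holomorphic “entropy” S(T) = T² (my choice, purely for illustration) recovers the analytic derivative 2T:

```python
def S(T):
    # Hypothetical complex entropy, chosen only to test formula (9)
    return T * T

def dS_dT(T, h=1e-6):
    # Relation (9): d/dT = 1/2 (d/dt1 - i d/dt2), via central finite differences
    d_dt1 = (S(T + h) - S(T - h)) / (2 * h)          # partial wrt t1 (real part)
    d_dt2 = (S(T + 1j * h) - S(T - 1j * h)) / (2 * h)  # partial wrt t2 (imaginary part)
    return 0.5 * (d_dt1 - 1j * d_dt2)

T = 1.0 + 2.0j
assert abs(dS_dT(T) - 2 * T) < 1e-5  # analytic derivative of T² is 2T
```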
First, both s1(t1,t2) and s2(t1,t2) are real-valued entropies, now functions of both real-valued times. For s1, which reduced to s1(t1) in Boltzmann’s former real-valued context, this is already an extension, and not a small one.
The quantity dS(T)/dT being complex-valued, it’s now meaningless to set dS(T)/dT ≥ 0: such a comparison can only concern real-valued quantities (as zero is a real number...). As for the idea of requiring the amplitude of dS(T)/dT to be non-negative, it’s pointless, because trivially true: an amplitude is non-negative by construction. So, this wouldn’t teach us anything more.
We have to examine each component of dS(T)/dT. And what do:
(11) ∂s1(t1,t2)/∂t1 + ∂s2(t1,t2)/∂t2 ≥ 0
or
(12) ∂s2(t1,t2)/∂t1 - ∂s1(t1,t2)/∂t2 ≥ 0
now teach us? Nothing... AB-SO-LU-TE-LY NOTHING.
Boltzmann’s H theorem, ruling ALL physically allowed processes in state (1) as in state (2) becomes totally useless and meaningless in super space...
So, mechanical reversibility is not only due to the “time plane”; it is also, and above all, due to the entropies in each state depending on both times.
Of course, as soon as we fix the time angle to kπ, t2 is fixed at zero and, in state (1), s1 becomes a function of t1 alone: s1(t1,0) = s1,0(t1). We recover Boltzmann.
When we fix the angle to (2k+1)π/2, t1 is fixed at zero (or “frozen at present”, if you choose it as a time origin of measurement) and s2 becomes a function of t2 alone: s2(0,t2) = s2,0(t2), and we equivalently recover Boltzmann in state (2).
Otherwise... peanuts. You can’t talk about “downgrading” or “upgrading”, it makes no sense. On the contrary: let s1 downgrade in state (1), as in a biological system for instance. That’s ∂s1(t1,t2)/∂t1 ≥ 0, okay? Assume requirement (11) holds. What will you deduce from both? That, if s2 downgrades as well in state (2), then ∂s2(t1,t2)/∂t2 ≥ 0 too and (11) is automatically fulfilled. Otherwise, we only need ∂s1(t1,t2)/∂t1 ≥ -∂s2(t1,t2)/∂t2 ≥ 0, allowing s2 to upgrade in state (2)...: you grow older, he grows younger...
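A toy example of this trade-off (the two entropy functions are entirely hypothetical, chosen only to exhibit it): s1 grows with t1, so state (1) downgrades, while s2 decreases with t2, so state (2) “upgrades”, and requirement (11) nevertheless holds:

```python
def s1(t1, t2):
    # Hypothetical state-(1) entropy: grows with t1 (downgrading)
    return t1

def s2(t1, t2):
    # Hypothetical state-(2) entropy: decreases with t2 (upgrading)
    return -0.5 * t2

h = 1e-6
t1, t2 = 0.3, 0.7  # arbitrary sample point in the time plane

ds1_dt1 = (s1(t1 + h, t2) - s1(t1 - h, t2)) / (2 * h)
ds2_dt2 = (s2(t1, t2 + h) - s2(t1, t2 - h)) / (2 * h)

assert ds1_dt1 >= 0                    # state (1) downgrades
assert ds2_dt2 < 0                     # state (2) upgrades
assert ds1_dt1 + ds2_dt2 >= -1e-9      # requirement (11) is still satisfied
```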
What shall we rather deduce from supersymmetric objects?
That they are self-regenerative. In essence.
They neither “downgrade” nor “upgrade”, even though they can carry entropy. And this helps distinguish them from quantum media, for quantum media usually carry no entropy. What happens here is fundamentally different from the wavy component of fluids or solids: there can be entropy, and therefore ordered phases and disordered ones, but its time evolution is subject to no particular constraint. Supersymmetric systems can evolve the way they want, without contradicting physical laws or leading to paradoxes.
And THIS is essential. Because THIS is actually the key to “everlasting systems”.
The “biological component” may well downgrade, it has no impact whatsoever on the “super body”, because this possibility is integrated within the larger system. As is the possibility of a downgrading of the “super partner”.
It has nothing to do with consciousness, nothing even to do with “survival after biological death”; it’s an inherent property of “timeless” super systems: everything can “loop”, everything can “go back in time”, then “into the future again”, everything can reverse, refresh. This would systematically lead to paradoxes in real geometry; it does not in complex geometry. Quite the contrary: it’s now a fundamental property of the physical frame itself.
No “beginning”, no “end”: these notions are linked to an “arrow of time” and lose all sense in super spacetime.
I’M DEAD, SO WHAT?
I’m a product of natural evolution. I’m not artificially made. I begin a new cycle, that’s all.
(as always, i hope everybody will be able to properly read the math symbols introduced in the text, as the view seems to depend on the processor used)