# doclabidouille

## B 148: TIME, ENERGY AND SIGNS

*13/09/2018*

Before continuing with anything else, I’d like to examine that question of time in the quantum, because it’s a central point that extends to energy, since the two are dual quantities.

Classically, the distinction between “space” and “time” has been known since Einstein’s relativity to be linked to the SIGN of the diagonal components of the metrical 2-tensor of the physical frame. In flat Minkowski space-time, we have g_{aa} = -1 (a = 1,2,3) and g_{44} = +1 for a “time-like interval” and g_{aa} = +1, g_{44} = -1 for a “space-like interval”. Physically, this is required for the velocity of light in the vacuum to be guaranteed “invariant”, i.e. to have the same value in all coordinate systems. It appears that, along the four main directions of space-time, the required “surface element” must be of the form s² = (x^{4})² - S_{a=1}^{3} (x^{a})² for the time-like formulation, or s² = S_{a=1}^{3} (x^{a})² - (x^{4})² for the space-like one.

This all changes when we go to the quantum. In place of the four classical coordinates x^{a} (a = 1,2,3,4), we find for quantum coordinates:

(1) x^{a}(ksi^{a}) = x^{a}exp(iksi^{a}) (a = 1,2,3,4)

that’s four external x^{a}s together with four internal ksi^{a}s. So, even if the topology of the quantum space remains Riemannian because we need the physical vacuum to remain boson-like, the classical g_{ab} turns into the quantum:

(2) g_{ab}(gam_{ab}) = g_{ab}exp(igam_{ab}) (a,b = 1,2,3,4)

and the symmetry of the quantized 2-tensor,

(3) g_{ba}(gam_{ba}) = g_{ab}(gam_{ab})

imposes that,

(4) g_{ba} = g_{ab} , gam_{ba} = gam_{ab}

so that BOTH the external AND the internal topologies are Riemannian.

The great advantage of (2) over the classical metrical tensor is that component signs are now determined by the values taken by the INTERNAL metrical tensor, so that, in EACH direction a, g_{aa}(gam_{aa}) can give projections with either a positive or a negative sign. As a direct consequence, we no longer require that the external tensor have an ALTERNATED signature, as in Minkowski space-time. Quite the contrary: we have a physical interest in having it FULLY EUCLIDEAN. So, let’s place ourselves in flat 4-space and set g_{ab} to the Kronecker delta:

(5) g_{aa} = +1 , g_{ab} = 0 (a,b = 1,2,3,4; a <> b)

The off-diagonal components of gam_{ab} are out of the game, so we can restrict ourselves to the four gam_{aa}. This gives:

(6) g_{aa}(gam_{aa}) = exp(igam_{aa}) (a = 1,2,3,4)

and this is enough to attribute various signs to the projection spaces. In particular:

(7) gam_{aa} = pi (a = 1,2,3) , gam_{44} = 0 -> time-like Minkowski metric

(8) gam_{aa} = 0 (a = 1,2,3) , gam_{44} = pi -> space-like Minkowski metric

and many other possibilities, since we have a QUADRUPLE CONTINUOUS INFINITY OF CHOICES. So, when the metrical tensor explicitly depends on the coordinates, as in curved quantum space, we can even witness a CHANGE IN THE SIGN OF THE DIAGONAL COMPONENTS FROM ONE POINT TO ANOTHER. It follows that:

The notion of “time” LOSES ALL PHYSICAL SIGNIFICANCE IN THE QUANTUM.

And, with it, the dual notion of ENERGY.

In quantum space, we can only talk about space and momentum.
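The sign flips of eqs. (7) and (8) can be checked numerically. The following is a minimal Python sketch (the function and variable names are mine, purely illustrative): the internal angles gam_aa alone decide the sign of each diagonal component of the fully Euclidean external metric (5).

```python
import cmath

# Eq. (6): quantum diagonal metric component g_aa(gam_aa) = exp(i*gam_aa),
# the external part being the Euclidean delta of eq. (5).
def metric_diag(gam):
    """gam: four internal angles gam_aa -> the four diagonal components."""
    return [cmath.exp(1j * g) for g in gam]

pi = cmath.pi
timelike  = metric_diag([pi, pi, pi, 0.0])    # eq. (7)
spacelike = metric_diag([0.0, 0.0, 0.0, pi])  # eq. (8)

print([round(g.real) for g in timelike])   # [-1, -1, -1, 1]: time-like Minkowski
print([round(g.real) for g in spacelike])  # [1, 1, 1, -1]: space-like Minkowski
```

Any other choice of the four gam_aa gives another signature, which is the “quadruple continuous infinity of choices” mentioned below.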

This has interesting consequences for the behavior of SIGNALS. In the classical, we had a delay of order ct, due to the FINITE velocity at which a signal propagated. In the quantum, that term turns into c(khi)t(tau) = ctexp[i(khi + tau)]. c and t are always non-negative quantities, but khi and tau are SIGNED ones. So, when khi + tau = 2npi, n in **Z**, we find a delay of order ct; but when khi + tau = (2n+1)pi, we find a delay -ct, that is, an ADVANCE of order ct. And this is perfectly consistent. It has many possible interpretations: either signal propagation is reversed (c -> -c), or time is (t -> -t), or many others, as long as khi and tau satisfy khi + tau = (2n+1)pi.

Remember that, in the classical, only the x - ct spatial dependence was kept, because x + ct represented an INCIDENT signal hitting a given system of bodies. In the quantum, the interpretation is radically different. It all becomes a question of STATES. In x^{4}(ksi^{4}) = c(khi)t(tau), the INTERNAL position is ksi^{4} = khi + tau. It also defines the STATE associated with x^{4} = ct >= 0. An “advance” then becomes OPPOSITE IN PHASE TO A DELAY. There’s no “space or time reversal” any longer: these are all EXTERNAL PERCEPTIONS. In the classical solution to the 1-dimensional wave equation:

f(x - ct) + f(x + ct)

x, c and t were SIGNED quantities, and f(x + ct) had to be REJECTED because it didn’t represent a signal PRODUCED by the system. Since signals propagate in the vacuum at a finite velocity c, there was NECESSARILY a delay between their emission and their observation, and that delay was modeled by x - ct. So, “advanced signals” were rejected.

In the quantum,

f(phi)[x(ksi) - c(khi)t(tau)] + f(phi)[x(ksi) + c(khi)t(tau)]

nearly makes a “REDUNDANCY”, so that we can keep f(phi)[x(ksi) - c(khi)t(tau)] alone and get the same results, since signs depend on the INTERNAL. Thus, for:

(9) (phi, ksi, khi, tau) = (0,0,0,0) => f(x - ct)

we retrieve the classical delayed signal, but with

(10) (phi, ksi, khi + tau) = (0,0,pi) => f(x + ct)

we find the ADVANCED signal. Again, WITHOUT PERFORMING ANY EXTERNAL REVERSAL. This last signal is still PRODUCED by the quantum system, but externally observed, it LOOKS LIKE an “incident wave” coming from outside to hit the system… :)
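The delay/advance mechanism above is easy to check numerically. A minimal Python sketch (names are mine): c and t stay non-negative throughout, and the internal sum khi + tau alone flips the externally perceived sign of the delay term.

```python
import cmath, math

# Quantum delay term c(khi)t(tau) = c*t*exp[i*(khi + tau)]:
# the external amplitudes c and t are non-negative, the internal sum carries the sign.
def delay_term(c, t, khi, tau):
    return c * t * cmath.exp(1j * (khi + tau))

c, t = 3.0e8, 2.0
delayed  = delay_term(c, t, 0.0, 0.0)      # khi + tau = 0  -> +ct (delay)
advanced = delay_term(c, t, math.pi, 0.0)  # khi + tau = pi -> -ct (advance)
print(delayed.real, advanced.real)
```

The same -ct would be obtained with (khi, tau) = (0, pi), (pi/2, pi/2), etc.: only the sum khi + tau matters, which is why the “reversal” admits several readings.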

The great lesson of complexification is to teach us that:

“POLARITIES” OR THE NOTION OF SIGNS IS AN INTERNAL MATTER.

It’s INTERNAL, it does NOT belong to the external…

Want a blatant example? The Newtonian electrostatic potential between two electric charges q and q’ is:

(11) U(r) = qq’/(4pie_{0}r)

where e_{0} (actually, epsilon_{0}) is the electrical “permittivity” of the “classical vacuum” (i.e. the medium OUTSIDE the system of charges). Quantize q and q’ through complexification and write them in the planar representation:

(12) q(theta)q’(theta’) = q_{1}(theta)q’_{1}(theta’) - q_{2}(theta)q’_{2}(theta’) + i[q_{1}(theta)q’_{2}(theta’) + q_{2}(theta)q’_{1}(theta’)]

Now, set:

(13) q(theta) = q + i(4pie_{0}k)^{1/2}m

where q and m are CLASSICAL charge and mass, respectively. What do you get?

(14) tan(theta) = (4pie_{0}k)^{1/2}m/q

valid for ANY value of q and m… And what do you get in (12)? THE MINUS SIGN OF GRAVITATION IN THE REAL COMPONENT…:

(15) q(theta)q’(theta’) = qq’ - (4pie_{0}k)mm’ + i(4pie_{0}k)^{1/2}(mq’ + m’q)

plus two charge-mass couplings in the imaginary part. Theta is zero OR PI for m = 0, and pi/2 OR 3PI/2 for q = 0… So, the minus sign in (12) DOES arise from i² = -1, which is a PURELY QUANTUM AND THEREFORE INTERNAL PROCESS… hence the presence of pi in the INTERNAL potential of gravity in the previous article.
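Eq. (15) can be verified term by term with a two-line computation. A Python sketch (the coupling (4pie_{0}k)^{1/2} is set to 1 here, an illustrative simplification of mine, since only the sign structure matters):

```python
# Numerical check of eqs. (13) and (15).
G = 1.0   # stands in for (4*pi*e0*k)^(1/2); illustrative value, not the physical one

def quantized_charge(q, m):
    """Eq. (13): q(theta) = q + i*(4*pi*e0*k)^(1/2)*m."""
    return complex(q, G * m)

q, m, qp, mp = 2.0, 0.5, -1.0, 3.0
prod = quantized_charge(q, m) * quantized_charge(qp, mp)

# Eq. (15): the real part q*q' - G^2*m*m' carries the gravitational minus sign,
# the imaginary part G*(m*q' + m'*q) holds the two charge-mass couplings.
print(prod.real)  # -3.5 = 2*(-1) - 0.5*3
print(prod.imag)  #  5.5 = 0.5*(-1) + 3*2
```

The minus sign in front of mm’ comes out of the ordinary rule i² = -1, exactly as claimed above.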

If you stick to the classical, you find NO CLEAR EXPLANATION to the reversal of the sign between electrostatic and gravitostatic interactions.

If, instead, you quantize the electrostatic field IN SUCH A WAY that it unites the concept of charge with that of mass, that sign reversal appears NATURALLY…

The physical consequence is dramatically different: while you classically cannot find any accumulation of charges, you do find an accumulation of masses with the same sign. But if you shift theta by pi/2, then (13) becomes iq - (4pie_{0}k)^{1/2}m and you find an accumulation of CHARGES and no accumulation of masses… This is simply because electric charges and masses are now treated on an equal footing.

I noticed a rather astonishing property of complex-valued quantities. Let’s consider again two quantum masses m(mu) and m’(mu’). Coupling them gives a resulting mass:

(16) [m”(mu”)]² = m”²exp(2imu”) = m(mu)m’(mu’) = mm’exp[i(mu + mu’)]

(17) m”² = mm’

(18) mu” = ½ (mu + mu’)

the external mass m” is the geometric average of m and m’, while the internal mass is the arithmetic average of mu and mu’. This is already well-known. What’s new in terms of physical interpretation is this:

(19) m(mu)m’(mu’) = m(mu’)m’(mu)

COUPLING QUANTUM QUANTITIES OF THE SAME KIND ENABLES THE EXCHANGE, WHETHER OF THEIR EXTERNAL COMPONENTS OR OF THEIR INTERNAL ONES (which amounts to the same).

We have no classical equivalent, because we lack internal variables. Coupled quantities need to be of the same kind: coupling, say, a mass with a velocity and exchanging their internal components is meaningless, as you can’t associate an external mass with an internal velocity and an external velocity with an internal mass.
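Eqs. (16)-(19) can be checked in a few lines. A Python sketch (names are mine): the external mass of the coupling is the geometric mean, the internal one the arithmetic mean, and swapping the internal components leaves the coupling unchanged.

```python
import cmath

# Sketch of eqs. (16)-(19): coupling two quantum masses m(mu) and m'(mu').
def qmass(m, mu):
    """m(mu) = m*exp(i*mu), with external mass m >= 0."""
    return m * cmath.exp(1j * mu)

m, mu, mp, mup = 4.0, 0.3, 9.0, 0.7
prod = qmass(m, mu) * qmass(mp, mup)

m_sec  = abs(prod) ** 0.5          # eq. (17): geometric average of m and m'
mu_sec = cmath.phase(prod) / 2.0   # eq. (18): arithmetic average of mu and mu'
print(m_sec, mu_sec)               # 6.0 0.5

# Eq. (19): exchanging the internal (or, equivalently, the external) components
# of the two masses changes nothing in the coupling.
swapped = qmass(m, mup) * qmass(mp, mu)
print(cmath.isclose(prod, swapped))  # True
```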

This result easily generalizes to n quantum quantities of the same kind. Let’s take, for instance, two quantum lengths x^{1}(ksi^{1}) and x^{2}(ksi^{2}). Their coupling gives:

(20) x^{1}(ksi^{1})x^{2}(ksi^{2}) = x^{1}(ksi^{2})x^{2}(ksi^{1}) = x^{1}x^{2}exp[i(ksi^{1} + ksi^{2})] = s²exp(2isig)

This time, exchange of internal variables is possible, because they are two internal lengths. Externally,

(21) s² = x^{1}x^{2}

is an area. Internally,

(22) 2sig = ksi^{1} + ksi^{2}

is HALF A PERIMETER (logarithmic correspondence as always). Take three x^{a}(ksi^{a}):

(23) x^{1}(ksi^{1})x^{2}(ksi^{2})x^{3}(ksi^{3}) = x^{1}x^{2}x^{3}exp[i(ksi^{1} + ksi^{2} + ksi^{3})] = v^{3}exp(3isti)

Externally,

(24) v^{3} = x^{1}x^{2}x^{3}

is a volume. Internally,

(25) 3sti = ksi^{1} + ksi^{2} + ksi^{3}

is again half the perimeter of a 3D internal volume (take a parallelepiped with sizes ksi^{a} and check). There are exactly 3! = 6 ways of exchanging the three ksi^{a}s. By recurrence, we immediately see that:

There are n! ways of exchanging the ksi^{a}s (a = 1,…,n) in the coupling:

(26) x^{1}(ksi^{1})…x^{n}(ksi^{n}) = v_{n}^{n}exp(insti_{n})

There’s only one n-dimensional external volume and it’s always NON-NEGATIVE:

(27) v_{n}^{n} = x^{1}…x^{n} >= 0

and there’s an n-dimensional HALF-PERIMETER:

(28) nsti_{n} = ksi^{1} +…+ ksi^{n}

The quantity v_{n} schematically represents the size of an n-dimensional hypercube with a hyper-volume equivalent to x^{1}…x^{n}. The quantity sti_{n} schematically represents half the perimeter of an n-dimensional parallelepiped, divided by the total number of its sides.
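The n-fold coupling (26)-(28) generalizes directly. A Python sketch (names are mine), here for n = 3: the external product collapses to the equivalent hypercube size v_n, the internal sum to the shared angle sti_n.

```python
import cmath
from functools import reduce

# Sketch of eqs. (26)-(28): coupling n quantum lengths x^a(ksi^a).
def couple(lengths):
    """lengths: list of (x, ksi) pairs -> (v_n, sti_n) of eqs. (27)-(28)."""
    prod = reduce(lambda p, xk: p * xk[0] * cmath.exp(1j * xk[1]), lengths, 1 + 0j)
    n = len(lengths)
    v_n   = abs(prod) ** (1.0 / n)  # eq. (27): edge of the equivalent hypercube
    sti_n = cmath.phase(prod) / n   # eq. (28): internal sum shared over n
    return v_n, sti_n

v3, sti3 = couple([(1.0, 0.1), (2.0, 0.2), (4.0, 0.3)])
print(v3, sti3)   # 2.0 0.2  (since 1*2*4 = 2**3 and 0.1 + 0.2 + 0.3 = 3*0.2)
```

Permuting the (x, ksi) pairs in the list, in any of the n! ways, leaves (v_n, sti_n) unchanged.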

What becomes properly amazing, once given a physical content, is that quantum substances of the same kind can exchange their external components, and equally their internal ones:

External or internal substances can be TRANSFERRED to another physical object of the same kind.

Now, whereas external amounts are never negative, internal ones can be positive, null or negative. So, when they’re transferred, they are transferred WITH THEIR SIGNS. When it comes to quantum fields, we have couplings like F_{1}(PHI_{1})…F_{n}(PHI_{n}), each field depending on the four space variables x^{a}(ksi^{a}). Independently of those variables, the n fields can exchange their external or internal components. This is normal, after all, since a coupling means an INTERACTION between the fields… and, when two or more physical objects interact, they EXCHANGE THEIR INFORMATION, i.e. their CONTENTS.


## B 147: TWO QUANTUM MASSES COUPLING

*09/09/2018*

The reader will have understood it: after several attempts to get rid of i, I finally reintroduced it, because I found a physical content for it.

In this article, we’re going to talk about MASS COUPLING, because it has important consequences.

As any other physical quantity, a QUANTUM MASS is a complex-valued quantity:

(1) m(mu) = mexp(imu)

It measures the quantity of quantum substance within a delimited quantum volume. As an amplitude, the external mass m is ALWAYS a non-negative quantity. This means that, in the quantum, we will ALWAYS deal with SUBSTANCE:

THERE’S NO “ANTI-SUBSTANCE” IN THE QUANTUM.

Wow… that will immediately hurt the quantum physicist… :) Not that much, actually. We now know that the SIGN of a quantity entirely depends on the INTERNAL factor. The notion of “anti-substance with a positive energy” arose from the classical vision, where signs are arbitrary. People preferred to talk about this rather than about “substance with negative energy (or mass)”, because such substance was not observed at levels immediately higher than that of “elementary” particles (today, it’s possible to create “anti-atoms”, but they remain highly unstable and must be kept inside magnetic fields so as to avoid interacting with any atoms).

The internal mass mu of a quantum body is perceived as a “mass state” by a CLASSICAL observer. In other words, in some 1-dimensional ISO-space, we have that mass representation made of a pair (m,mu) of masses, where m is classically perceived as being “the” mass of a quantum object, while mu is classically perceived as representing the STATE in which m is. As a result, when mu = 0, m(0) = m > 0 appears perfectly logical, whereas mu = pi gives m(pi) = -m < 0, or “anti-substance”. What we actually have is some substance (m > 0), but in a phase mu = pi OPPOSITE to the phase mu = 0.

We also find much more: m(pi/2) = im and m(3pi/2) = -im are no longer “abstract”, but represent PURELY QUANTUM MASSES, opposite in phase. All other values of mu are matters of PROJECTIONS. We have a first projection m_{1}(mu) = mcos(mu): that’s what was previously assumed to be “classical”. We have a second projection m_{2}(mu) = msin(mu) that was REJECTED in the classical, but is INCLUDED in the quantum. Thus, m_{1}(mu) is the quantity a classical observer will perceive, while im_{2}(mu) is the quantity a PURELY QUANTUM OBSERVER will perceive. The presence of i is important here: again, as a REAL-valued quantity, m_{2}(mu) is CLASSICAL, whereas im_{2}(mu) is purely quantum, because i is.

Consequently, as mu changes, so do m_{1}(mu) and m_{2}(mu), which are SIGNED quantities, and this is absolutely normal, since m(mu) CHANGES STATE. The important thing is that m does NOT change, so that:

EXTERNALLY, we keep THE SAME AMOUNT OF SUBSTANCE.

What is likely to change is the INTERNAL amount of substance.

It would be perfectly possible to attribute an object to EACH pair (m,mu), but this would lead to a plethora of particles. We can drastically reduce that number by considering that a given object has a CONSTANT external mass in VARIABLE STATES. This also has the advantage of unifying the former concepts of “substance” and “anti-substance”: it’s well known indeed that a particle and its “anti-partner” have the SAME MASS AT REST. Yes, indeed: the same EXTERNAL mass at rest… :) but DIFFERENT INTERNAL MASSES. And that’s what enables us to distinguish them, at least on the mass level.
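The mass states discussed above fit in a few lines of Python (a sketch; names are mine): a single non-negative external mass m, whose classically perceived projection m_1 and purely quantum projection m_2 follow from the internal mass mu.

```python
import cmath, math

# Eq. (1) and its projections: m(mu) = m*exp(i*mu),
# m_1(mu) = m*cos(mu) (classically perceived), m_2(mu) = m*sin(mu) (purely quantum).
def projections(m, mu):
    z = m * cmath.exp(1j * mu)
    return z.real, z.imag   # (m_1, m_2)

m = 1.0                              # external mass, ALWAYS >= 0
print(projections(m, 0.0))           # ( 1, 0): "substance"
print(projections(m, math.pi))       # (-1, ~0): formerly read as "anti-substance"
print(projections(m, math.pi / 2))   # (~0, 1): purely quantum mass im
```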

Before quantizing, we now have to ask ourselves why the gravitational interaction that couples masses has a classical potential OPPOSITE IN SIGN to the electromagnetic interaction that couples electric charges. Potentials are SCALAR quantities, so we cannot invoke any space orientation. There’s another reason. Of course, this was set so because it was observed that two electric charges with the same sign REPEL, whereas two masses with the same sign ATTRACT. We now need to understand why that reversal occurs. The Newtonian G-potential between a mass m’, acting as the source, and an incident mass m, is given by:

(2) U(r) = -km’m/r

where r is the distance between the two masses. Such a potential immediately eliminates all possibility of self-interaction, since it diverges near r = 0. Now, r is assumed to be a NON-NEGATIVE quantity, since it represents a RADIAL distance. However, Penrose admitted the possibility of NEGATIVE values of r, synonymous with a repulsion “beyond a central singularity” (crazy how ARBITRARILY defining a sign can lead to multiple interpretation attempts…). It occurs that we can rewrite U(r) this way:

(3) U(r,pi) = (km’m/r)exp(ipi)

We didn’t change anything. But we brought to light a SPATIAL STATE rho = pi, while the AMPLITUDE of the potential becomes:

(4) U(r,0) = km’m/r > 0

a NON-NEGATIVE QUANTITY… In other words, what we just did was to REFORMULATE the CLASSICAL observation (2) in a QUANTUM way. Instead of saying “gravity is attractive between two masses with the same sign”, we say:

EXTERNAL gravity is ALWAYS REPULSIVE and, ACCORDING TO THE STATE THE G-POTENTIAL IS IN, we’ll have attractions or repulsions.

The question of mass signs is solved by complexifying (2). U(r) turns into:

(5) U(UPS)[r(rho)] = -k(kap)m’(mu’)m(mu)/r(rho)

giving an external potential,

(6) U(r) = km’m/r

just like (4), but with NOTHING BUT NON-NEGATIVE QUANTITIES, and an internal potential

(7) UPS(rho) = kap + mu’ + mu + pi - rho

The minus sign in (5) BELONGS TO THE INTERNAL. It’s a pi-shift. kap is an internal parameter, required because nothing allows us to assert that the constants of physics we, as classical observers, measure in the vacuum are truly “universal”, i.e. the same in all quantum states. CLASSICALLY universal, they are; QUANTUM universal is not granted at all. Asserting it would be pure speculation for the time being.

Look at the form of the internal G-field: it’s GLOBAL (independent of r) and LINEAR in rho.

Externally, the Newtonian static G-field is a typically DECONFINED FIELD.

Internally, it is a CONFINING FIELD.

At the critical distance:

(8) rho_{c} = kap + mu’ + mu + pi => UPS(rho_{c}) = 0

the internal field just vanishes. At all other internal distances, its absolute value grows with the separation from rho_{c}.

(9) rho = 0 => UPS(0) = rho_{c}

So, even in the worst case where we would fix kap, mu’ and mu to zero, we would still find the non-zero value UPS(0) = pi: this is precisely where the minus sign comes from, in the classical model.

The external potential (6) is clearly a potential WALL, that is, schematically, a potential barrier with unlimited height at r = 0. This now represents the shortest external distance one can find. It can no longer be prolonged to negative values.

Unlike U(r), which is now strictly positive and only asymptotically zero, UPS(rho) can be positive, negative or zero. We’ll have:

(10) rho < rho_{c} => UPS(rho) > 0 => INTERNAL REPULSION

(11) rho = rho_{c} => UPS(rho) = 0 => LIBRATION POINT

(12) rho > rho_{c} => UPS(rho) < 0 => INTERNAL ATTRACTION
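The three regimes (10)-(12) can be sketched numerically (a Python illustration; function names are mine), using the linear internal potential (7) and the critical distance (8):

```python
import math

# Sketch of eqs. (7)-(12): internal Newtonian G-potential,
# UPS(rho) = kap + mu' + mu + pi - rho, linear and confining.
def ups(rho, kap=0.0, mu_p=0.0, mu=0.0):
    return kap + mu_p + mu + math.pi - rho

rho_c = math.pi   # eq. (8) with kap = mu' = mu = 0

def regime(rho, eps=1e-12):
    u = ups(rho)
    if u > eps:  return "internal repulsion"    # eq. (10)
    if u < -eps: return "internal attraction"   # eq. (12)
    return "libration point"                    # eq. (11)

for rho in (1.0, rho_c, 5.0):
    print(rho, regime(rho))
```

Setting kap = mu’ = mu = 0 as in the text, UPS(0) = pi is indeed non-zero: the classical minus sign, recovered at the internal origin.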

What’s interesting in the Newtonian static potential is that, even once quantized, the external field only depends on external quantities and the internal field, on internal ones. This is obviously far from being that simple for other field distributions. That kind of field thus gives us a quite good idea of the mechanisms at play. We can see that the functioning of the internal G-field is RADICALLY different from that of the external one, because the correspondence between the two is LOGARITHMIC. So, what were external products turn into internal sums and internal products turn into external POWERS (exponentiation).

Let’s again set kap, mu’ and mu to zero, as in the classical. We’ll find internal attraction at internal distances rho GREATER than (2n+1)pi, where n is a non-negative integer. It means that, for rho = 0, we have NO CHANCE of finding assemblies of internal substance in this model. At the libration points rho = (2n+1)pi, we have those assemblies of external substances. These are the only situations where substantial assemblies are possible. It shows that, even in such a simple model, substantial assemblies are anything but granted. This is because the notions of “attraction” and “repulsion” relate to a SIGN and become meaningless in the quantum, where complex-valued quantities have NO DEFINITE SIGN.

Aside from the potential energy, let’s now introduce the kinetic one, K(KAP). The total energy of the system is:

(13) H(ETA) = K(KAP) + U(UPS)

Externally:

(14) H² = K² + U² + 2KUcos(KAP - UPS)

Because of interferences, external energies are generally NOT additive quantities.

They turn out to be so only for KAP - UPS = 2npi, n in **Z**, in which case H = K + U, and for KAP - UPS = (2n+1)pi, in which case H = |K - U|. Everywhere else:

(15) 0 <= |K - U| <= H <= |K + U|

and, as K and U are >= 0, we have K + U = 0 only for K = U = 0. When K = U, H can reach zero. But:

When K is different from U, the LOWEST accessible energy threshold is |K - U| > 0.

Compare with the classical, where H = K + U, with K and U SIGNED quantities…
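Eqs. (14)-(15) are easy to verify numerically. A Python sketch (names are mine): the external total energy interpolates between |K - U| and K + U as the internal phase difference KAP - UPS sweeps from pi to 0.

```python
import cmath, math

# Sketch of eqs. (13)-(15): external total energy of H(ETA) = K(KAP) + U(UPS).
def external_H(K, KAP, U, UPS):
    return abs(K * cmath.exp(1j * KAP) + U * cmath.exp(1j * UPS))

K, U = 3.0, 4.0
for diff in (0.0, math.pi / 3.0, math.pi):
    H = external_H(K, 0.0, U, diff)
    # eq. (15): |K - U| <= H <= K + U, with equality at diff = 0 or pi
    print(diff, H)
```

With K = 3 and U = 4, H runs from 7 (phases aligned) down to 1 (phases opposed): the additivity of classical energies is only the special case of zero phase difference.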

The total INTERNAL energy now:

(16) ETA = Arctan{[Ksin(KAP) + Usin(UPS)]/[Kcos(KAP) + Ucos(UPS)]} (mod pi)

It will vanish along:

(17) Ksin(KAP) + Usin(UPS) = 0

while reaching pi/2 for:

(18) Kcos(KAP) + Ucos(UPS) = 0

These are two Fresnel-like equations. Look at (17): it doesn’t require any of the four variables involved to be zero for ETA to reach zero. On the contrary, if, say KAP = 0 (mod pi), then we need U = 0 or UPS = 0 (mod pi); if K = 0, same. And similar results for (18). Conclusion:

One can have a QUANTUM kinetic energy and a QUANTUM potential energy and still find a CLASSICAL total energy or a PURELY QUANTUM one… :|

In space relativity, K is classically defined as K = ½ mv², with v the RESULTING velocity. In the quantum,

(19) K(KAP) = ½ m(mu)[v(sti)]²

(20) K = ½ mv² >= 0

(21) KAP = mu + 2sti

(do NOT confuse with the kap of Newton’s gravitational constant!)

We have a LINEAR progression internally vs a PARABOLIC progression externally. The other significant difference is that KAP can turn negative. If we have KAP = UPS, we immediately find ETA = KAP = UPS, but if we have K = U, we only find:

exp(iKAP) + exp(iUPS) = 2exp[i(KAP + UPS)/2]cos[(KAP - UPS)/2]

leading to,

(22) H = 2K|cos[½ (KAP - UPS)]| , ETA = ½ (KAP + UPS) (K = U)

And if we have K = -U (which NO LONGER corresponds to the mechanical equilibrium in the external, because the additive property of energy is lost), from:

exp(iKAP) - exp(iUPS) = 2exp[i(KAP + UPS + pi)/2]sin[(KAP - UPS)/2]

we find,

(23) H = 2K|sin[½ (KAP - UPS)]| , ETA = ½ (KAP + UPS + pi) (K = -U)

Thus, in the two cases K = U and K = -U, the total internal energy is a LINEAR SUPERPOSITION of the internal kinetic energy and the internal potential energy. In all other cases, it’s not.
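The two superposition identities behind (22) and (23) are standard trigonometric facts, and can be confirmed numerically (a Python sketch; variable names are mine):

```python
import cmath

# Numerical check of the identities behind eqs. (22) and (23).
KAP, UPS = 0.8, 2.1

sum_lhs = cmath.exp(1j * KAP) + cmath.exp(1j * UPS)
sum_rhs = 2.0 * cmath.exp(1j * (KAP + UPS) / 2.0) * cmath.cos((KAP - UPS) / 2.0)

dif_lhs = cmath.exp(1j * KAP) - cmath.exp(1j * UPS)
dif_rhs = 2.0 * cmath.exp(1j * (KAP + UPS + cmath.pi) / 2.0) * cmath.sin((KAP - UPS) / 2.0)

print(cmath.isclose(sum_lhs, sum_rhs), cmath.isclose(dif_lhs, dif_rhs))  # True True
```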


## B 146: EXTERNAL VS INTERNAL MOTIONS

*02/09/2018*

From the very end of the 19^{th} century to, say, the early 1920s, the quick development of spectrometric techniques enabled physicists to test both the corpuscular nature and the wavy nature of atomic matter, as well as of the electromagnetic interaction, which was the only one accessible in laboratories at that time. One series of experiments brought to light the corpuscular behavior and another series, the wavy behavior. As the two series were about the very same particles (mostly electrons and photons), the conclusion was unavoidable: for some kind of reason that didn’t clearly show at the macroscopic level, particles behaved BOTH as corpuscles AND as waves. This was mathematically formalized by Louis de Broglie in 1924 under the name of “wave-corpuscle duality”. Since then, ALL observations have confirmed that duality, which became observable at large scales once lasers and other condensed states were better mastered. Today, every astrophysicist learns in college that “dead” stars are made of condensed matter and show quantum properties.

The question is still open for space-time itself, not really because of the complexity of Einstein’s model for gravity, but because gravity is an extremely weak force. Progress has been made in recent years. Still, we’re not yet allowed to ASSERT that space-time also has that quantum duality, because no experiment has revealed it so far. However, what we call “space” around us is that “emptiness of substance”. In physics, it’s nothing but a VACUUM STATE. Classically, that vacuum is zero, implying that space should be flat. Active discussions are still ongoing about this, because Einstein’s CURVED space(-time) also shows a vacuum state (outside sources) and this vacuum should no longer be zero. Anyway, we KNOW that the QUANTUM vacuum CANNOT be zero, because this has been observed numerous times. In fact, the quantum vacuum is even COMPRESSIBLE… Now, from the viewpoint of quantum statistics (the distributions of particles), space-time is expected to follow the Bose-Einstein statistics, simply expressing the fact that it’s a fundamentally NON-substantial vacuum. If this weren’t the case, the whole universe surrounding us would be full of matter. And what we observe is exactly the OPPOSITE: the universe is full of VACUUM. We can even explain why: because the vacuum precisely enables THE STABILITY OF MATERIAL ASSEMBLIES… Our solar system, for instance, is stable BECAUSE the sun and planets are separated by large vacuum areas. Most atoms are stable BECAUSE the nucleus is separated from the first layer of electrons by a large vacuum (at that scale). Etc. Vacuum states play a fundamental role, not only because they represent the lowest energy levels, but also because they stabilize material assemblies.

So, space-time being after all nothing else but a physical vacuum, its fundamentally non-substantial nature relates it to bosons, not fermions. If it’s to be quantized like all the rest, then its mathematical description must go from real to complex-valued quantities. Real quantities are for the classical description; complex quantities are for the quantum description. More precisely, in the classical, the use of complex numbers is a mere ABSTRACT tool to make calculations much easier, but what is kept in the end is ALWAYS the real part of the result. In the quantum, on the contrary, we cannot do otherwise than keep the IMAGINARY parts ALL ALONG, because the wavy behavior of objects REQUIRES us to maintain the sine components, or the results would just not match observations. It’s Hamilton’s “optical-mechanical analogy”: the equation for the mechanical (i.e. “corpuscular”) path of a system is in all points analogous to the equation for the OPTICAL path of a signal… That’s what led to the discovery of “matter waves” and to Schrödinger’s “wavefunction”.

Thus, in order to describe “quantum space(-time)” in agreement with the wave-corpuscle duality, we have no other choice but to complexify its coordinate systems. This means replacing the classical POINT x with a QUANTUM CIRCLE x(ksi) = xexp(iksi), where the initial x, stripped of its arbitrary sign, now only makes the AMPLITUDE of the “quantum coordinate position” x(ksi). That amplitude, as always, is a non-negative quantity, so the sign is now COMPLETELY DEFINED by the angle ksi: when ksi = 2npi, with n in **Z**, we find a positive sign; when ksi = (2n+1)pi, a negative sign.

Angles in quantum spaces completely define their orientation.

There’s no arbitrariness anymore. The quantization of x reveals the existence of a SECOND SET OF VARIABLES, the ksi, and ksi has NO PHYSICAL UNIT. This is important, because it makes internal variables UNIVERSAL, which is not the case for classical variables. What now enables us to distinguish them is their AFFILIATION with a dimensioned classical quantity: here, ksi is affiliated with x, measured in meters. So, we now have two kinds of COMPLETELY INDEPENDENT variables: x is an “EXTERNAL” variable, ksi is an “INTERNAL” variable. Together, they make a “QUANTUM” variable. The real dimension is doubled, but the COMPLEX dimension remains the same. It follows that the picture of the world is not that of “additional dimensions” but, instead, that of dimensions TOGETHER WITH THEIR STATES: the pair (x,ksi) means we have one physical dimension IN THE PHYSICAL STATE ksi. Classically, ksi = 0 or pi; these are the only classically-allowed values. In the quantum, ksi need not be a limited variable, because both cos(.) and sin(.) are bounded functions of their argument. So, ksi can take any value along the real line; cos(ksi) and sin(ksi) will ALWAYS remain between -1 and +1. This is very important for what follows. Usually, we take ksi in [0,2pi[, but ksi does not actually need to be bounded.

What happens when we complexify variables, parameters and functions and still want to apply the Newton laws of motion?

First, let’s introduce that “quantum differential” d(delta), which is nothing else but the complexified differential. When applied to a “quantum time” t(tau), it must give the same result as in traditional complex calculus, that is:

(1) d(delta)t(tau) = d[texp(itau)] = exp(itau)(dt + itdtau) = dtexp(ideltatau)

However, the second writing is improper, because d was first introduced in the frame of REAL analysis. The polar expression for the result is:

(2) d(delta)t(tau) = (dt² + t²dtau²)^{1/2}exp{i[tau + Arctan(tdtau/dt)]}

so that,

(3) dt = (dt² + t²dtau²)^{1/2} >= 0

(4) deltatau = tau + Arctan(tdtau/dt)
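Eqs. (1)-(4) can be checked against a small finite increment. A Python sketch (names are mine; note the internal factor t multiplying dtau, which comes from d[texp(itau)] = exp(itau)(dt + itdtau)):

```python
import cmath, math

# Sketch of eqs. (1)-(4): the "quantum differential" of t(tau) = t*exp(i*tau),
# compared with a small finite increment (dt, dtau).
t, tau   = 2.0, 0.4
dt, dtau = 1e-6, 3e-6

exact  = (t + dt) * cmath.exp(1j * (tau + dtau)) - t * cmath.exp(1j * tau)
amp    = math.hypot(dt, t * dtau)          # eq. (3): external amplitude
phase  = tau + math.atan2(t * dtau, dt)    # eq. (4): internal angle
approx = amp * cmath.exp(1j * phase)

print(abs(exact - approx))   # second-order small: the polar form is first-order exact
```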

Then, let’s write Newton’s equations of motion for a constant quantum mass m(mu):

(5) m(mu)a(alpha)[t(tau)] = F(PHI)[t(tau)]

We do not consider a FIELD force F(PHI) for the time being, so as not to complicate the debate from the start. The acceleration of the mass is:

(6) a(alpha)[t(tau)] = [d²(delta)/d(delta)t(tau)²]x(ksi)[t(tau)]

Well, surprisingly enough, it happens to be more convenient to work with the INTEGRAL version of Newton’s law rather than with the usual second-order ODE. The velocity of the mass is:

(7) v(stigma)[t(tau)] = [m(mu)]^{-1}S_{0}^{t(tau)} F(PHI)[t’(tau’)]d(delta)t’(tau’) + const

The position of that mass will therefore be:

(8) x(ksi)[t(tau)] = S_{0}^{t(tau)} v(stigma)[t’(tau’)]d(delta)t’(tau’) + const

= [m(mu)]^{-1}S_{0}^{t(tau)}{S_{0}^{t’(tau’)} F(PHI)[t”(tau”)]d(delta)t”(tau”)}d(delta)t’(tau’) +

U.M.

where “U.M.” stands for Uniform Motion. It’s clear from (2) and:

F(PHI)[t(tau)] = F(t,tau)exp[iPHI(t,tau)]

that, in general, the motion x(t,tau) and the motion ksi(t,tau) will INTIMATELY BE INTRICATED. x(t,tau) expresses the move through EXTERNAL space (variable x), while ksi(t,tau) expresses the move through INTERNAL space (variable ksi), that is, FROM ONE SPACE STATE TO ANOTHER. Both depend on external time t and its state tau, in the general case.

Let’s look at what happens when we set:

(9) t = t_{0} = const , x(t,tau) = x_{0} = const

Then, x(ksi) = x_{0}exp(iksi) and x(ksi)[t(tau)] = x_{0}exp[iksi(t_{0},tau)] = x_{0}exp[iksi_{0}(tau)]:

EXTERNALLY, NOTHING HAPPENS, THE SYSTEM LOOKS STEADY AND TIME IS FROZEN.

Internally, it’s not exactly the same story. Equation (8) gives:

x_{0}exp[iksi_{0}(tau)] = m^{-1}t_{0}²exp(-imu)S_{0}^{t(tau)}{S_{0}^{t’(tau’)} F_{0}(tau”)exp[iPHI_{0}(tau”)]exp(itau”)dtau”}exp(itau’)dtau’

Let’s simplify a little bit more, just to get an idea of the internal motion:

(10) F_{0}(tau) = F_{0}(0) = const , PHI_{0}(tau) = (n - 1)tau + PHI_{0}(0) , n in **Z** - {-1,0}

The calculation is explicit and easy, and gives:

(11) exp[iksi_{0}(tau)] = K_{0}exp{i[PHI_{0}(0) - mu]}{exp[i(n+1)tau] - (n + 1)exp(itau) + n}

(12) K_{0} = F_{0}(0)t_{0}²/n(n+1)mx_{0}

leading to,

(13) tan[ksi_{0}(tau) - PHI_{0}(0) + mu] =

= {sin[(n+1)tau] - (n+1)sin(tau)}/{cos[(n+1)tau] - (n+1)cos(tau) + n}

Particular values are:

(14) ksi_{0}(0) = PHI_{0}(0) - mu , ksi_{0}(pi/2) = ksi_{0}(0) + Arctan{[(-1)^{n+1} - (n+1)]/n}

ksi_{0}(pi) = ksi_{0}(0) + pi , ksi_{0}(3pi/2) = ksi_{0}(0) + Arctan{[(-1)^{n+1} + (n+1)]/n}

INTERNALLY, THERE’S AN ACTIVITY AND IT’S NOT LINEAR AT ALL…

We do find MOTION. Now, what exactly is moving? Look at (13): it does NOT involve the EXTERNAL mass m, only the INTERNAL mass mu. Conclusion:

The EXTERNAL mass m remains INERT in external space, where time is FROZEN.

The INTERNAL mass mu MOVES, and THROUGH SPACE-TIME STATES.

It even follows a rather complicated trajectory, even though the quantum force we chose is externally constant (and global) and internally linear in the time state. For n = 1, the internal force is itself constant and global, and (13) gives us:

(15) PHI_{0}(tau) = PHI_{0}(0) => ksi_{0}(tau) = tau + PHI_{0}(0) - mu

which is still an UNBOUNDED motion…

Well, still-to-be-observed physical reality or not, the “simple” fact of complexifying everything, down to space and time themselves, as quantization demands, already answers an important question:

Is it possible to have TWO bodies, an external one and an internal one, with the external one steady and the internal one independently moving?

The answer is:

YES.

So, it’s encouraging for the rest. At least, we found ONE POSSIBLE answer. We have a classical body made of classical substance and we have that second body made of substance STATES: their very constitution IS NOT THE SAME.

The classical observer stands at space state ksi = 0 or pi and time state tau = 0 or pi. Other space or time or mass or whatever states BELONG TO OTHER “REALITY LEVELS”. As a result, they’re NOT ACCESSIBLE TO HIS/HER OBSERVATION. But ALL levels are accessible to a QUANTUM observer.

And in particular, to an INTERNAL observer, since he precisely moves ALONG STATES…

Also notice that we can even set F_{0}(0) = 0 and still have internal motion. If we had fixed this value from the start, in the general equation (8), we would have been tempted to deduce that x(ksi)[t(tau)] is zero (up to a uniform motion), a correct result, but one that only concerns the EXTERNAL trajectory x(t,tau), IN NO WAY THE INTERNAL ONE (a complex number is zero if and only if its AMPLITUDE is zero…).


## B 145: ON THE QUADRATIC EQUATION IN R

*Le 23/07/2018*

I wanna take a look at the more-than-well-known quadratic equation in **R**, because there may be something new about it. Let:

(1) P_{2-}(x) = ½ x² + bx - ½ c²

be the quadratic equation __of type I__ and,

(2) P_{2+}(x) = ½ x² + bx + ½ c²

the quadratic equation __of type II__. Let’s first examine (1). We have:

P_{2-}(x) = ½ (x² + 2bx - c²) = ½ [(x + b)² - (b² + c²)] = ½ [(x + b)² - D²]

The quantity D² = b² + c² being always non-negative, D is a real quantity and P_{2-}(x) can be factored in **R** into:

(3) P_{2-}(x) = ½ (x + b + D)(x + b - D) = ½ (x - x_{1})(x - x_{2})

(4) x_{1} = -(b + D) , x_{2} = -(b - D)
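Since D² = b² + c² is never negative, type I always has two real roots, whatever the coefficients. A quick numeric check of the factorization (3); the sample coefficients are mine:

```python
import math

def P2_minus(x, b, c):
    """Type I polynomial (1): (1/2)x**2 + b*x - (1/2)c**2."""
    return 0.5 * x * x + b * x - 0.5 * c * c

b, c = 1.5, 2.0                 # arbitrary sample coefficients
D = math.sqrt(b * b + c * c)    # D**2 = b**2 + c**2 is never negative
x1, x2 = -(b + D), -(b - D)     # the two zeros of the factorization (3)
print(P2_minus(x1, b, c), P2_minus(x2, b, c))  # both ~ 0.0
```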

Let’s turn to (2). We now find:

P_{2+}(x) = ½ (x² + 2bx + c²) = ½ [(x + b)² - (b² - c²)]

As the quantity b² - c² is no longer of definite sign, the usual procedure is to set the condition b² > c² if we want to find two distinct real-valued roots.

Now… this isn’t the only possibility. b² - c² being a *hyperbolic square*, it can always be written as a product:

b² - c² = (b + c)(b - c) = D_{1}D_{2}

Let’s set y = x + b and develop:

½ (y + D_{1})(y - D_{2}) + ½ (y - D_{1})(y + D_{2}) = y² - D_{1}D_{2}

So, with:

(5) D_{1} = b + c , D_{2} = b - c

type II reduces into,

(6) P_{2+}(x) = ¼ [(x + c)(x + 2b + c) + (x - c)(x + 2b - c)]

This is not a completely factored expression as in type I, but a sum of two completely factored expressions, illustrating the “splitting” from a single D in type I to two Ds in type II.
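Since (6) is an algebraic identity, it can be verified at arbitrary sample points. A minimal sketch; the coefficients and sample points are mine:

```python
def P2_plus(x, b, c):
    """Type II polynomial (2): (1/2)x**2 + b*x + (1/2)c**2."""
    return 0.5 * x * x + b * x + 0.5 * c * c

def split_form(x, b, c):
    """Right-hand side of (6): sum of two completely factored pieces."""
    return 0.25 * ((x + c) * (x + 2 * b + c) + (x - c) * (x + 2 * b - c))

b, c = 0.8, 2.5                  # arbitrary sample coefficients
for x in (-3.0, -1.0, 0.0, 2.0):
    print(P2_plus(x, b, c) - split_form(x, b, c))  # ~ 0.0 at every point
```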

The zeros of (6) correspond to:

(7) (x + b)² = D_{1}D_{2}

When 0 ≤ |c| < |b|, D_{1}D_{2} > 0 and (7) has two distinct real-valued roots:

(8) x_{1} = -[b - (D_{1}D_{2})^{1/2}] , x_{2} = -[b + (D_{1}D_{2})^{1/2}]

When |c| > |b|, D_{1}D_{2} < 0 and (7) has no root in **R**. This is of course because the curve P_{2+}(x) is entirely contained above the x axis.
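Both branches can be checked numerically: when |c| < |b| the two roots (8) annul P_{2+}, while when |c| > |b| the minimum of P_{2+}, reached at x = -b, equals (c² - b²)/2 > 0, so the curve indeed stays above the x axis. The sample coefficients below are mine:

```python
import math

def P2_plus(x, b, c):
    """Type II polynomial (2): (1/2)x**2 + b*x + (1/2)c**2."""
    return 0.5 * x * x + b * x + 0.5 * c * c

# Case |c| < |b|: D1*D2 = (b + c)(b - c) > 0 and (8) gives two real roots
b, c = 3.0, 1.0
root = math.sqrt((b + c) * (b - c))          # (D1*D2)**(1/2)
x1, x2 = -(b - root), -(b + root)
print(P2_plus(x1, b, c), P2_plus(x2, b, c))  # both ~ 0.0

# Case |c| > |b|: the minimum, at x = -b, is (c**2 - b**2)/2 > 0 -- no real root
b, c = 1.0, 3.0
print(P2_plus(-b, b, c))                     # 4.0, strictly positive
```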

The novelty here is in the presence of *two* discriminants D_{1} and D_{2} in type II, in place of the traditional single discriminant D for both types. That single discriminant, D = (b² +/- c²)^{1/2}, was non-linear in the coefficients b and c, whereas D_{1} and D_{2} are both linear. While this does not change the nature or the existence of the solutions, it does change the structure of the polynomial, first distinguishing two types and, second, introducing two discriminants.


## B 144: NO MORE "SPINOR SUB-STRUCTURE" THAN BUTTER IN BRANCH...

*Le 27/05/2018*

It is usually assumed (or did I get it wrong?) that the special status of physical space-time as a four-dimensional space allows one-to-one correspondences between it and non-commutative 2-dimensional complex structures known as “spinors”.

I strongly disagree with that argument. The correspondence in question:

y^{a} = theta*^{A}sigma^{a}_{AB}theta^{B}

where small Latin indices run from 1 to 4 (or 0 to 3) and capital ones, from 1 to 2,

*is a contracted invariant product over the last ones*. So, it can be used in __any__ dimension and shows nothing “specific” to the dimension 4. Instead of **C**² as the “spin space” and SU(2) as its invariant group, we can equivalently consider **C**^{n} and SU(n), for any n in **N**, *it won’t change anything to the above formula*, which can *also* be applied to any __commutative and real-valued__ manifold of dimension d:

y^{a} = theta*^{A}sigma^{a}_{AB}theta^{B} (a = 1,…,d; A,B = 1,…,n)

and this is consistent with the well-known fact that “any particle with spin s is represented in its reference frame at rest as a symmetric spinor of rank 2s with 2s+1 components, *whatever the value of s*”. For s = ½, one finds 2-component vectors; for s = 1, M_{2}(**C**) symmetric matrices with 3 independent components; etc.
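The dimension-independence claimed here is easy to exhibit: the contraction y^{a} = theta*^{A}sigma^{a}_{AB}theta^{B} produces real coordinates for n = 2 (Pauli matrices) just as for any other n, provided the sigma^{a} are Hermitian. A minimal numpy sketch under that assumption; the random n = 3 matrices stand in for any Hermitian basis, and nothing here singles out d = 4:

```python
import numpy as np

def coords(theta, sigmas):
    """y^a = theta*^A sigma^a_{AB} theta^B, one value per matrix sigma^a."""
    return np.array([theta.conj() @ s @ theta for s in sigmas])

# n = 2: identity + Pauli matrices, giving d = 4 coordinates
pauli = [np.eye(2),
         np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]
theta2 = np.array([1 + 2j, 0.5 - 1j])
print(np.allclose(coords(theta2, pauli).imag, 0))   # True: y^a is real

# n = 3: any Hermitian sigma^a works just as well -- nothing singles out n = 2
rng = np.random.default_rng(0)
herm = [(m + m.conj().T) / 2
        for m in (rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
                  for _ in range(4))]
theta3 = rng.normal(size=3) + 1j * rng.normal(size=3)
print(np.allclose(coords(theta3, herm).imag, 0))    # True again
```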

It follows that the above correspondence not only has no specificity to the “external” dimension 4 (in terms of symmetries), it also makes no difference between spinors and tensors, that is, between fermions and bosons…

Geometrically, it means it does __not__ define any “anti-commutative sub-structure to the (pseudo-)Euclidian structure of Minkowski space-time or E^{4} after performing a Wick rotation”.

In practice, it means *it brings me nothing that could be used to “extend” or “refine” the properties of “classical space-time”*…

Hence this remark.

If I use Pauli’s original spin-space, it will be endowed with a skew-symmetric metric J_{AB} = -J_{BA}, associated with a spin ½. If I use a spin 1, I’ll simply *double* Pauli’s indices, obtaining matrix coordinates theta^{AB} in M_{2}(**C**) in place of the former theta^{A} (symmetric, 3-component, analogue to a vector of E_{C}^{3}, the 3D *complexified* Euclidian space) and metric J_{ABCD} = -J_{CDAB} = J_{BACD} = J_{ABDC} (3 components as well). The Grassmann property will write V^{A}W_{A} = -V_{A}W^{A} for spin ½ and V^{AB}W_{AB} = -V_{AB}W^{AB} for spin 1… The first one will imply V^{A}V_{A} = 0, while the second one will give V^{AB}V_{AB} = 0, which is __not__ equivalent to **V**² = 0 since, in Euclidian 3-space, the metric is *symmetric*.
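The contrast between the skew metric J_{AB} and a symmetric one takes two lines to see: with J_{AB} = -J_{BA}, the contraction V^{A}V_{A} = V^{A}J_{AB}V^{B} vanishes for *every* V, whereas under a symmetric (Euclidian) metric it only vanishes for V = 0. A numpy sketch; the sample vector is mine:

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])  # skew-symmetric spin-1/2 metric J_AB
g = np.eye(2)                            # symmetric Euclidian metric, for contrast

V = np.array([3.0, -2.0])                # any non-zero V^A
print(V @ J @ V)   # 0.0 -- V^A V_A = V^A J_AB V^B vanishes for every V
print(V @ g @ V)   # 13.0 -- whereas V**2 under the symmetric metric does not
```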

Actually, V^{ABC…}V_{ABC…} = 0 under a symplectic structure is perfectly normal for any completely symmetric V of rank 2s, whereas it leads to V^{ABC…} = 0 under a Riemannian structure [and to a null cone under a pseudo-Riemannian one with signature (1,n)].
