doclabidouille

B 139: QUANTUM EXTENSIONS TO CLASSICAL FUNCTIONS

13/12/2017

This bidouille, a very technical one, is about quantum extensions of classical variables and functions.

 

We begin with variables. A classical variable is a variable x(0), an element of R. It’s defined up to a conventional sign. As we need a starting point to build anything, the quantum extension of x(0) is built either as an element of C:

 

(1)               x(ksi) = x(0)e^(iksi)

(2)               i² = -1 = e^(ipi)  ,  i = e^(ipi/2)

 

using the classical irrational e and raising it to the power of a purely imaginary number iksi, or as an element of R²:

 

(3)               x(ksi) = [x(0),ksi]                          (polar representation)

(4)               x(ksi) = [x1(ksi),x2(ksi)]                 (planar representation)

 

endowed with the addition and multiplication on pairs of reals,

 

(5)               x(ksi) + y(psi) = [x1(ksi) + y1(psi) , x2(ksi) + y2(psi)]          (planar rep)

(6)               x(ksi)y(psi) = [x(0)y(0) , ksi + psi]                         (polar rep)

 

The one-to-one correspondence is guaranteed through:

 

(7)               x1(ksi) = x(0)cos(ksi)  ,  x2(ksi) = x(0)sin(ksi)

(8)               x(0) = {[x1(ksi)]² + [x2(ksi)]²}^(1/2)  ,  ksi = Arctan[x2(ksi)/x1(ksi)]    (mod pi)

 

By using the second approach, we only call for classical trigonometric functions when building the classical projections of the quantum x(ksi), which is then defined as the pair (4) of such classical quantities. We don’t need to combine a classical irrational e with a purely quantum iksi, as when using the de Moivre formula: (7) and (8) are fully classical. What becomes quantum are the basic arithmetic operations on pairs of classicals: the left side of (5) is a quantum addition; the left side of (6), a quantum multiplication. Their right sides give the equivalences with the originally classical operations.
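To make the correspondence concrete, here is a minimal numerical sketch of (1)-(8) in Python, assuming nothing beyond the standard math module (the helper names polar_to_planar, planar_to_polar, q_add and q_mul are mine, purely illustrative; the Arctan of (8) is taken quadrant-aware via atan2):

import math

def polar_to_planar(x0, ksi):
    # (7): classical projections of the quantum variable [x(0), ksi]
    return (x0 * math.cos(ksi), x0 * math.sin(ksi))

def planar_to_polar(x1, x2):
    # (8): amplitude and quantum state recovered from the two projections
    return (math.hypot(x1, x2), math.atan2(x2, x1) % (2 * math.pi))

def q_add(a, b):
    # (5): quantum addition, done component-wise in planar representation
    (a1, a2), (b1, b2) = polar_to_planar(*a), polar_to_planar(*b)
    return planar_to_polar(a1 + b1, a2 + b2)

def q_mul(a, b):
    # (6): quantum multiplication, done directly in polar representation
    (x0, ksi), (y0, psi) = a, b
    return (x0 * y0, (ksi + psi) % (2 * math.pi))

x, y = (2.0, math.pi / 3), (1.0, math.pi / 6)
print(q_add(x, y))   # polar pair of the quantum sum
print(q_mul(x, y))   # (2.0, pi/2): amplitudes multiply, quantum states add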

 

Let’s now define the quantum function f(phi) of a quantum variable x(ksi) following that procedure. We find 4 possible pairings, corresponding to as many representations:

 

(9)               f(phi)[x(ksi)] = {f(0)[x(0),ksi] , phi[x(0),ksi]}                                            (polar-polar)

(10)           f(phi)[x(ksi)] = {f(0)[x1(ksi),x2(ksi)] , phi[x1(ksi),x2(ksi)]}              (polar-planar)

(11)           f(phi)[x(ksi)] = {f1(phi)[x(0),ksi] , f2(phi)[x(0),ksi]}                                   (planar-polar)

(12)           f(phi)[x(ksi)] = {f1(phi)[x1(ksi),x2(ksi)] , f2(phi)[x1(ksi),x2(ksi)]}       (planar-planar)

 

As above, all four representations are fully classical in their components and the quantum appears in the pairings. phi(.,.) is a 2pi-periodic function, geometrically describing a 2D torus in R³, since ksi is 2pi-cyclic as well. f(0)(.,.) is a non-negative function and geometrically describes a 2D open tube in R³. Now, f(phi)[x(ksi)] = y(psi), which only has two representations. So, the rule is actually the following one:

 

No matter the representation of the variable, that of the result will copy the representation of the function.

 

As a result of this rule, (9) and (10) will lead to the same polar representation [y(0),psi], while (11) and (12) will lead to the same planar representation [y1(psi),y2(psi)]. It’s kind of a “Markov process”: the procedure “forgets” about all former representations and only keeps the last one.

 

A quantum operator T(TAU) will be defined the same way, as a map transforming a quantum function into another. It’s therefore a “function of a function” or a “functional”:

 

(13)           T(TAU).f(phi) = T(TAU)[f(phi)] = g(gamma)

 

Applying the “representation rule” gives g(gamma) the representation of T(TAU), no matter that of f(phi). So, “sequentially”:

 

(14)           T(TAU){f(phi)[x(ksi)]} = T(TAU)[y(psi)] = g(gamma)[x(ksi)]

 

y(psi) keeps the representation of f(phi), no matter that of x(ksi); g(gamma), that of T(TAU), no matter y(psi) and x(ksi), as a consequence of the intermediary step.

 

A reciprocal quantum function is a quantum function f^(-1)(-phi) such that:

 

(15)           f^(-1)(-phi).f(phi) = f(phi).f^(-1)(-phi) = Id(O)

 

gives the quantum identity function Id(O), where “O” stands for the “zero state” or a “universal quantum vacuum state”. Applied to a quantum variable x(ksi) in the definition domains of both f(phi) and f^(-1)(-phi), it gives that variable back:

 

(16)           f^(-1)(-phi).f(phi)[x(ksi)] = f(phi).f^(-1)(-phi)[x(ksi)] = x(ksi)

 

It appears that Id(O) has no particular representation (hence its universality, by the way) since, as a function, it should follow (9) to (12) but, as it gives the same result as the initial variable, the representation of Id(O) in fact faithfully follows that of the variable. It suffices to look at (9) and (10) to get convinced of this:

 

Id(O)[x(ksi)] = {Id(0)[x(0),ksi] , O[x(0),ksi]} = [x(0),ksi]

 

Implies both

 

Id(0)[x(0),ksi] = x(0)  whatever the value of ksi

 

and

 

O[x(0),ksi] = ksi  whatever the value of x(0)

 

while

 

Id(O)[x(ksi)] = {Id(0)[x1(ksi),x2(ksi)] , O[x1(ksi),x2(ksi)]} = [x1(ksi),x2(ksi)]

 

applying the identity operation, or

 

Id(O)[x(ksi)] = {Id(0)[x1(ksi),x2(ksi)] , O[x1(ksi),x2(ksi)]} = [x(0),ksi]

 

applying the representation rule (the variable follows the polar representation of the function). It follows from this “non-representativity” of Id(O) that:

 

The representations of f(phi) and of its reciprocal f^(-1)(-phi) “neutralize”.

 

Here are now two practical examples of how to construct quantum extensions of classical functions. These examples will serve us in the next bidouille. The general idea is:

 

We keep the properties of the classical functions; we simply replace them with better-suited quantum ones when applying them to quantum variables.

 

As we’re going to see, this apparently “insignificant change” actually modifies everything and brings additional information anyway, since quantum states appear in both variables and functions…

 

 

Example 1: the quantum exponential

 

The classical exponential was defined as the raising of the irrational e to the power of a real-valued number x: e^x. By the properties of powers, this function verified:

 

e^x.e^y = e^(x+y) , (e^x)^y = (e^y)^x = e^(xy)

 

while x^0 was set to +1 whatever the value of x (including x = 0), by convention (and graphical confirmation). Similarly, we will define the quantum exponential e(epsilon) as the function that satisfies:

 

(17)           e(epsilon)[x(ksi)].e(epsilon)[y(psi)] = e(epsilon)[x(ksi) + y(psi)]

(18)           {e(epsilon)[x(ksi)]}^y(psi) = {e(epsilon)[y(psi)]}^x(ksi) = e(epsilon)[x(ksi)y(psi)]

 

for any two quantum variables x(ksi) and y(psi). In particular, the previous complex-valued representation e^(iksi), which combined e with the imaginary unit i, is to be replaced with the better-suited e(epsilon)(iksi). The polar representations are the same: (1,ksi). In that representation, the former exp[x(0)e^(iksi)] = e^[x(0)cos(ksi)]e^[ix(0)sin(ksi)] is replaced with:

 

(19)           e(epsilon)[x(ksi)] = e(epsilon)[x(0)e(epsilon)(iksi)]

= {e(0)[x(0),ksi] , epsilon[x(0),ksi]}

(20)           e(0)[x(0),ksi] = e^[x(0)cos(ksi)] >= 0

(21)           epsilon[x(0),ksi] = x(0)sin(ksi)

 

Particular values are:

 

(22)           e(0)[x(0),0] = e^(x(0)) , e(0)[x(0),pi] = e^(-x(0))

(23)           e(0)[x(0),pi/2] = e(0)[x(0),3pi/2] = 1                    for all x(0)

(24)           epsilon[x(0),0] = epsilon[x(0),pi] = 0                     for all x(0)

(25)           epsilon[x(0),pi/2] = x(0)  ,  epsilon[x(0),3pi/2] = -x(0)

(26)           e(0)(0,ksi) = 1 , epsilon(0,ksi) = 0             for all ksi

 

As a result, the quantum function (19) will have the particular polar representations:

 

(27)           e(epsilon)[x(0)e(epsilon)(i0)] = {e^(x(0)) , 0}

(28)           e(epsilon)[x(0)e(epsilon)(ipi)] = {e^(-x(0)) , 0}

 

corresponding to the classical exponential and its inverse. Additionally:

 

(29)           e(epsilon)[x(0)e(epsilon)(ipi/2)] = {1 , x(0)}

(30)           e(epsilon)[x(0)e(epsilon)(3ipi/2)] = {1 , -x(0)}

 

corresponding, this time, to the same amplitude unity, but opposite quantum states. Also notice, for instance:

 

e(epsilon)[x(0)e(epsilon)(ipi/4)] = {e^[x(0)/sqr(2)] , x(0)/sqr(2)} , sqr(.) = “square root”
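As a hedged check of (19)-(30), here is a small Python sketch (the helper name q_exp is mine); it returns the polar pair {e(0), epsilon} and reproduces the particular values above:

import math

def q_exp(x0, ksi):
    # (20)-(21): polar pair {e(0)[x(0),ksi] , epsilon[x(0),ksi]} of the quantum exponential
    return (math.exp(x0 * math.cos(ksi)),   # amplitude, always > 0
            x0 * math.sin(ksi))             # quantum state

x0 = 1.5
print(q_exp(x0, 0.0))              # (27): (e^x(0), 0), the classical exponential
print(q_exp(x0, math.pi))          # (28): (e^-x(0), 0), its classical inverse
print(q_exp(x0, math.pi / 2))      # (29): (1, x(0))
print(q_exp(x0, 3 * math.pi / 2))  # (30): (1, -x(0))
print(q_exp(x0, math.pi / 4))      # (e^(x(0)/sqr(2)), x(0)/sqr(2))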

 

 

Example 2: the quantum (natural) logarithm

 

This is classically the reciprocal to the exponential. Ln(.) verifies:

 

Ln(xy) = Ln(x) + Ln(y) , Ln(x^(-1)) = -Ln(x) , Ln(x^y) = yLn(x) , Ln(0+) = -oo , Ln(1) = 0

 

Originally, this function wasn’t defined for negative values of the variable. It was then extended as such to the complex domain through the formula:

 

Ln[x(0)e^(iksi)] = Ln[x(0)] + iksi = {Ln[x(0)],ksi}            (planar representation)

 

Again, we suggest that Ln(.) be replaced with a Ln(Lambdaeta), while conserving the same properties. This gives:

 

(31)           Ln(Lambdaeta)[x(ksi)] = {Ln(0)[x(0),ksi] , Lambdaeta[x(0),ksi]} = {Ln[x(0)],ksi}

(polar rep)                                           (planar rep)

 

since we need to retrieve the same result as previously. Hence, immediately:

 

(32)           {Ln(0)[x(0),ksi]}² = {Ln[x(0)]}² + ksi²

(33)           Lambdaeta[x(0),ksi] = Arctan{ksi/Ln[x(0)]}         (mod pi)

 

with the particular values,

 

(34)           Ln(0)[x(0),0] = |Ln[x(0)]| , Lambdaeta[x(0),0] = 0           (mod pi), for all x(0)

(35)           Ln(0)(0+,ksi) = +oo  for all ksi , Lambdaeta(0+,ksi) = 0 (mod pi)

(36)           Ln(0)(1,ksi) = |ksi| , Lambdaeta(1+,ksi) = pi/2 , Lambdaeta(1-,ksi) = 3pi/2 (mod 2pi)

(37)           Ln(0)(e,ksi) = (1 + ksi²)^(1/2) , Lambdaeta(e,ksi) = Arctan(ksi) (mod pi)

 

As a result, the quantum logarithm will have the following polar representations:

 

(38)           Ln(0)[x(0)] = {|Ln[x(0)]| , 0} = |Ln[x(0)]|

(39)           Ln(pi)[x(0)] = {|Ln[x(0)]| , pi} = -|Ln[x(0)]|

(40)           Ln(0)(0+) = {+oo , 0} = +oo , Ln(pi)(0+) = {+oo , pi} = -oo

(41)           Ln(pi/2)[{1,ksi}] = |ksi| , Ln(3pi/2)[{1,ksi}] = -|ksi|

(42)           Ln(Lambdaeta)[{e,ksi}] = {(1 + ksi²)^(1/2) , Arctan(ksi) (mod pi)}
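Here too, a short Python sketch of (32)-(33) may help (the helper name q_ln is mine; the quadrant of the state is resolved with atan2, a choice of mine rather than the text’s mod-pi convention):

import math

def q_ln(x0, ksi):
    # (32)-(33): polar pair {Ln(0)[x(0),ksi] , Lambdaeta[x(0),ksi]} of the quantum logarithm
    return (math.hypot(math.log(x0), ksi),     # amplitude: {Ln²[x(0)] + ksi²}^(1/2)
            math.atan2(ksi, math.log(x0)))     # state: Arctan{ksi/Ln[x(0)]}

print(q_ln(math.e, 0.0))       # (37) at ksi = 0: (1, 0)
print(q_ln(math.e, 1.0))       # (37): (sqr(2), Arctan(1) = pi/4)
print(q_ln(1.0 + 1e-12, 2.0))  # (36): amplitude ~ |ksi| = 2, state ~ pi/2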

 

 

B 138: OPEN 2n SPACES CAN BE CONFORMALLY CLOSED

26/11/2017

This is a pretty unexpected result (one more?) that should greatly help us in our research. The non-technician can skip the proof and go directly to the practical application of it, where the theorem is explained.

 

 

THEOREM:

 

Let X be a real manifold with dimension 2n and signature (n,n). Then, X can be made conformally Euclidian near each of its points.

 

 

In “civilized language”, what this theorem says is that any 2n-dimensional space with an open geometry can be closed, at least locally, and therefore, compactified.

 

Before giving the general proof, let us show it on a practical example. This is a transformation I never thought about and never saw anywhere else either.

 

 

Example:

 

Let’s consider a 2-dimensional plane space-time, with thus a single dimension of space, and a metric:

 

(1)               ds² = c²dt² - dx²

 

Usually (i.e. in all publications I read so far about special and general relativity), this expression is factorized into:

 

(2)               ds² = (1 - v²/c²)c²dt²

 

where v = dx/dt is the instantaneous velocity around point x. If we introduce a polar parametrization:

 

(3)               cdt = cos(dksi)dr  ,  dx = sin(dksi)dr

 

where dksi is an angle between 0 and 2pi, not only will we be guaranteed that the absolute values of both cdt and dx stay between 0 and dr > 0, but ds² and v will write:

 

(4)               v = c.tan(dksi)

(5)               ds² = cos(2dksi)dr² = [(1 - v²/c²)/(1 + v²/c²)]dr²

 

As for dr², it will take the Euclidian form:

 

(6)               dr² = c²dt² + dx²

 

in planar representation. Now, the crucial point is that the metrical tensor (field) is defined as belonging to the space tangent to a given base space, which is, here, our 2-dimensional Minkowski space-time. So, it’s very natural, from the geometrical viewpoint, to link the metrical coefficient g(dksi), which explicitly depends on dksi, to the velocity (4) in (5):

 

(7)               g(dksi) = cos(2dksi)  or, equivalently,  g(v) = (1 - v²/c²)/(1 + v²/c²)

 

But, then, our original 2-dimensional Minkowski space-time is made equivalent to a dense family of conformal 2-dimensional Euclidian spaces. As long as v will remain between 0 and c (in pure value), g(v) will remain positive and the resulting Euclidian manifold as a whole will be causal. It will be made of that set of all points (ct,x) of our original 2D Minkowski space-time such that, in the immediate neighborhood of each of these points, 0 =< v =< c and g(v) >= 0. The rest will be made of those points where v > c, g(v) < 0, which will give another locally conformal Euclidian space, a priori non observable.

 

Last but not least, the conformal factor g(v) remains bounded: g(0) = +1, g(c) = 0 and, for infinite velocities, g(v) -> -1.
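A two-line numerical check of that boundedness claim, assuming the corrected form of (7) with v²/c² in both the numerator and the denominator:

def g(v, c=1.0):
    # (7): conformal factor of the 2D Minkowski metric in the polar parametrization
    return (1 - (v / c) ** 2) / (1 + (v / c) ** 2)

for v in (0.0, 0.5, 1.0, 10.0, 1e6):
    print(v, g(v))   # g(0) = +1, g(c) = 0 and g(v) -> -1 for very large velocities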

 

Let’s now give the general proof.

 

 

Proof of the theorem:

 

Let X(KSI) be a complex Riemannian n-dimensional manifold of C^n and TX(KSI) its tangent bundle. If x(ksi) = [x1(ksi1),…,xn(ksin)] is a point of X(KSI), the quantum differential at point x(ksi) = x(0)exp(iksi) is that infinitesimal complex-valued quantity of order 1 of TX(KSI) defined as follows:

 

(8)               d(delta)x^a(ksi^a) = exp{i.deltaksi^a[x(0),ksi]}d(0)x^a(0)                       (a = 1,…,n)

 

where d(0)x^a(0) is always a non-negative infinitesimal quantity of order 1 we will call the classical differential of x^a(0) at the classical point x(0) = [x1(0),…,xn(0)], and deltaksi^a[x(0),ksi] is a quantity between 0 and 2pi. From that definition of d(delta), it’s possible to build the second quadratic form of X(KSI) as that infinitesimal complex-valued element of the symmetric tensor product of T_x(ksi)X(KSI) with itself:

 

(9)               [d(delta)s(sigma)]² = g_ab(2gamma_ab)[x(ksi)]d(delta)x^a(ksi^a)d(delta)x^b(ksi^b)

 

where, as usual in tensor calculus, we use Einstein’s summation convention as long as we can. In polar representation,

 

(10)           g_ab(2gamma_ab)[x(ksi)] = g_ab(0)[x(0),ksi]exp{2igamma_ab[x(0),ksi]}

(11)           g_ab(0)[x(0),ksi] = g_ba(0)[x(0),ksi]

(12)           gamma_ab[x(0),ksi] = gamma_ba[x(0),ksi]

 

As a result, (9) takes the form:

 

(13)           [d(delta)s(sigma)]² = [d(delta)s(sigma)_D]² + [d(delta)s(sigma)_ND]²

 = SS_(a=<b=1)^n exp{2isigma_ab[x(0),ksi]}[d(0)s_ab(0)]²

(14)           2sigma_ab = 2gamma_ab + deltaksi^a + deltaksi^b

(15)           [d(0)s_ab(0)]² = g_ab(0)[x(0),ksi]d(0)x^a(0)d(0)x^b(0)   (a,b = 1,…,n)

 

We’re only interested in the diagonal contribution [d(delta)s(sigma)_D]², because it groups all the n main directions of the quadric (9) and, therefore, contains its signature:

 

(16)           [d(delta)s(sigma)_D]² = S_(a=1)^n exp{2isigma_aa[x(0),ksi]}[d(0)s_aa(0)]²

(17)           sigma_aa = gamma_aa + deltaksi^a

(18)           [d(0)s_aa(0)]² = g_aa(0)[x(0),ksi][d(0)x^a(0)]²  (a = 1,…,n)

 

The other contribution concerns planes:

 

(19)           [d(delta)s(sigma)_ND]² = SS_(a<b=1)^n exp{2isigma_ab[x(0),ksi]}[d(0)s_ab(0)]²

 

Developed into its real and imaginary parts, (16) gives two real-valued second quadratic forms:

 

(20)           [d(delta)s(sigma)_D1]² = S_(a=1)^n cos{2sigma_aa[x(0),ksi]}[d(0)s_aa(0)]²

(21)           [d(delta)s(sigma)_D2]² = S_(a=1)^n sin{2sigma_aa[x(0),ksi]}[d(0)s_aa(0)]²

 

Both seem to be of hyperbolic type, which would lead to two 2n-dimensional manifolds X1(KSI1) and X2(KSI2) of R^2n with the topology of open spaces. Such spaces are known not to be compact. However, a closer look at (20-21) shows that, since [d(0)s_aa(0)]² is the square of a real-valued quantity and, therefore, never negative, the signs of the metrical components are only due to the trigonometric functions. The argument of these functions lives on the tangent bundle of X(KSI), not on X(KSI) itself. This is a first important aspect to keep in mind: the information about the signature of a manifold does not belong to it, but to its tangent bundle. The second and, this time, crucial aspect to keep in mind is that, in the quantum, i.e. in mathematically complex spaces, the negative sign is formally equivalent to (and even originates from) a phase opposition. Otherwise, there exists no such thing as a “negative sign”. This last aspect means that, at points x(ksi) of X(KSI) where a given metrical component appears with a negative sign, we’re in front of a phase opposition, i.e. a shift of pi. As a consequence, there is no longer such a thing as a “causal” and a “non-causal” domain, as is the case in the classical (i.e. real) geometry; everything becomes “causal”, “observable”. Some things only display “in phase” and others, “in phase opposition”. It follows that, in order to find a definite signature on X1(KSI1), we need all terms to have the same sign, and this is subject to n non-restrictive conditions we’re now going to make explicit.

 

In sector I on TX(KSI):

 

(22)           0 =< sigma_aa[x(0),ksi] =< pi/4                               (a = 1,…,n)

 

Both (20) and (21) are positive-definite, so that all points [x(0),ksi] satisfying this set of conditions make two “causal” manifolds X1+(KSI1) and X2+(KSI2) of R^2n. “Causal”, because (20), like (21), has real-valued square roots.

 

In sector II on TX(KSI):

 

(23)           pi/4 =< sigma_aa[x(0),ksi] =< pi/2                            (a = 1,…,n)

 

(20) is negative-definite and (21), positive-definite: the generated manifolds are an “anti-causal” X1-(KSI1) and a “causal” X2+(KSI2). “Anti-causal”, because “causal, but in phase opposition”.

 

In sector III on TX(KSI):

 

(24)           pi/2 =< sigma_aa[x(0),ksi] =< 3pi/4              (a = 1,…,n)

 

Both (20) and (21) are negative-definite, leading to two “anti-causal” X1-(KSI1) and X2-(KSI2).

 

Finally, in sector IV of TX(KSI):

 

(25)           3pi/4 =< sigma_aa[x(0),ksi] =< pi                             (a = 1,…,n)

 

we get a “causal” X1+(KSI1) and an “anti-causal” X2-(KSI2).

 

In each of these four possible situations, the real manifolds so obtained have the topology of a closed space and can therefore be compactified, at least locally, i.e. from point to point.

 

And this ends our proof of the theorem. Indeed, local coordinate transforms do change the values of both the amplitude g_aa(0)[x(0),ksi] and the phase sigma_aa[x(0),ksi] covariantly; however, they concern X(KSI), not its tangent bundle (only the partial derivatives of these transforms do). So, as long as the n conditions (22), (23), (24) or (25) are fulfilled on TX(KSI), changing the representation on X(KSI) does not change anything to the topology of the induced real manifolds, which remain Euclidian.
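As a purely illustrative aside, the sector bookkeeping (22)-(25) can be tabulated numerically: a single diagonal term with phase sigma_aa contributes cos(2.sigma_aa) to (20) and sin(2.sigma_aa) to (21), and the signs fall out as described above (the function name and labels below are mine):

import math

def sector_signs(sigma):
    # Signs of one diagonal term in (20) and (21) for a phase sigma_aa
    s1 = math.cos(2 * sigma)   # contribution to [d(delta)s(sigma)_D1]²
    s2 = math.sin(2 * sigma)   # contribution to [d(delta)s(sigma)_D2]²
    label = lambda s: "causal (+)" if s >= 0 else "anti-causal (-)"
    return label(s1), label(s2)

for sigma in (math.pi / 8, 3 * math.pi / 8, 5 * math.pi / 8, 7 * math.pi / 8):
    print(round(sigma, 3), sector_signs(sigma))   # sectors I, II, III, IV in turn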

 

 

Practical application:

Observing quantum shapes

 

A shape is a volume surrounded by a closed surface. It is not necessary to know that volume; it suffices to observe its bounding surface to have a clue of the shape contained in that volume. By extension, a quantum shape is a quantum volume surrounded by a quantum surface, all naturally oscillating as a consequence. What the theorem above says is that any quantum shape gives birth to two classical projections, as any other quantum object or process: a “corpuscular” or “substantially-perceived” projection and a “wavy” or “non-substantially-perceived” projection. Apparently, these two projections seem “open”, i.e. non-compact. If that were the case, we would have trouble, not only in defining quantum shapes, but also in remaining in agreement with observations. Indeed, observations show that, even if oscillating, any physical body with quantum properties keeps a delimited shape, except maybe for gases; but classical gases have no definite shape either, except if they are trapped inside a delimited solid area, in which case they espouse the shape of the area, filling it. Fortunately, the theorem says otherwise and shows that, under non-restrictive conditions, both projections can actually have a closed geometry and, therefore, be compact, should this process be performed point to point.

 

This being said, dimension n = 2 corresponds to surfaces. As, here, we’re not dealing with classical but with quantum surfaces, the starting point is a quantum surface (surrounding a quantum shape) in quantum dimension 2. Now, as everything is grouped into pairs to make a quantum, a quantum dimension n is mathematically (but not physically!) equivalent to a classical dimension 2n (n classical pairings make n quantum components). So, first of all, with classical projections, we’ll have to deal with even-dimensional surfaces (always, a result of pairing). This poses an immediate problem to the classical observer. A quantum surface gives two classical projections of classical dimension 4, each. Now, our observer is in a 3D space only. At least, this is the way he/she perceives things around him/her. So, any quantum surface is already “too large”, “too dimensional” to be classically observed. Even if our observer were conscious of this, the only thing he/she could do would be to use perspective to try and reproduce a quantum shape by re-projecting its two 4D classical projections into his/her 3D world, which wouldn’t give him/her the true aspect of the shape (just think of that exercise at school consisting in drawing a 3D cube onto a 2D sheet of paper: if perspective maintains the feeling of a volume, it can never faithfully reproduce that volume as it is in the 3D world).

 

To get round the difficulty, the technician has a solution: cuts. The topographer does the same on maps.

 

For n = 2, we have two quantum coordinates x1(ksi1) and x2(ksi2), making four classical ones, namely x1(0), ksi1, x2(0) and ksi2. We have at least one “in excess”. However, what the experimenter does most of the time is observe a quantum object in a definite quantum state. That actually fixes the values of the two angles ksi1 and ksi2. The dimensional obstruction occurs when one tries to reproduce the quantum shape in all its possible quantum states. Opposite to this, if we admit that “we’re lacking physical dimensions, so we’re going to use the cut technique and observe that shape quantum state by quantum state”, then, for each given pair of values (ksi1,ksi2) of the angles, our two projected surfaces will only depend on the amplitudes x1(0) and x2(0), which will give back a pair of classical surfaces in classical 3-space. What we’ll then deduce from this method is that the quantum shape is made of a doubly-continuous infinity (as ksi1 and ksi2 go independently from 0 to 2pi) of “substantially-perceived” classical shapes X1ksi1,ksi2(KSI1) and “non-substantially-perceived” classical shapes X2ksi1,ksi2(KSI2), “causal” or “anti-causal”.

 

This is probably the best way to reproduce a quantum shape.

 

We start from (ksi1,ksi2) = (0,0), it gives a first “substantial” surface X10,0(KSI1) corresponding to the behavior of the quantum shape at the UV limit. We then make ksi1 and ksi2 independently vary along the unit-radius circle and, each time, it gives a still “substantial” surface X1ksi1,ksi2(KSI1), but not necessarily of the same shape. And all these shapes glued together give the “substantially-perceived” shape X1(KSI1), which is 4D and cannot be faithfully reproduced, even in 3-space. But, at least, we can get “cuts”, and continuous ones, of it. We do exactly the same to build X2(KSI2).

 

 

What does it imply for biology?

 

This is rather simple, isn’t it? The biological organism is a (living) classical shape. The biologist perceives it at the ultra-violet limit, meaning for the cuts (ksi1,ksi2) = (0,0) and (ksi1,ksi2) = (pi,pi) only. And still, it may even reduce to (ksi1,ksi2) = (0,0). That’s only one shape out of a continuous infinity of others and this, anyway, only gives the substantial behavior of a quantum being. You still have to add to this another continuous infinity of non-substantial shapes, giving the wavy behavior of the quantum being in question.

 

And, still, after all this… these only remain projections. It doesn’t reproduce the quantum being an inch. His nature is quantum, his environment is quantum, there’s nothing reducible to the classical in that. Reducing it to classical projections, then to 2D “sections”, is only aimed at trying to make an approximate picture of him. Get “a vague idea of how he may look” when projected into a lower-dimensional world.

 

So, of course, saying there would be “a continuous infinity of biological bodies of different shapes” is actually meaningless, it’s only a matter of trying to interpret things. It does not correspond to the physical reality at all (just like the “multiverse” in quantum astrophysics has to be taken as one of the many interpretations of the so-named “wavefunction of the universe”). What a continuous parameter shows us is not that we have a continuous infinity of “parallel worlds”, but simply that we have to take a new physical dimension into account… :)

 

So, what the picture shows isn’t a “countless multitude of parallel biological organisms, each one belonging to a classical 3D world”, this is fantasy… :) (it is!), it only says “what we observe as a ‘biological organism’ is not even the tip of the iceberg, it’s only the UV limit of a much wider 4D but still classical shape, surrounding a 5D still classical body and this only makes the substantial aspect we’d be able to perceive of a quantum organism if we could have direct access, as conscious observers, to the 3D quantum world”.

 

That quantum world is entirely cyclic; it has nothing to do anymore with our classical perception of the world. It’s a geometrical environment where loops play the role of “straight lines”; tubes, the role of open curves; and tori, that of closed curves. “Negativity” becomes “phase opposition”. There is nothing in common with our daily life anymore. The physical laws are different, time is cyclic, everything is totally foreign to what we’re used to.

 

Despite all these fundamental differences, physics tells us that this is actually the world we have been living in since our conception.

 

And we can’t even perceive a fraction of it…

 

So, where else would be the limitation, if not in our brain?... :)

 

 

B 137: QUANTUM AREAS AND VOLUMES

24/11/2017

We’re back to B135 and we’re now going to talk about quantum areas and volumes. We start in dimension 1. We first need to specify what kind of physical dimension we are to work in: if it’s classical dimension 1, then, indeed, there’s a single one; if it’s quantum dimension 1, there are 2 classical ones (remember we need to double everything). The “wavy” dimension, also referred to as the “P2-projection”, is assumed to be located “above” the “corpuscular” one.

 

What does it mean, “above”?

 

We’ll get a much better picture if we now embed ourselves in our much more familiar 3 dimensions of space. Classically, we feel we’re able to move anywhere in the three classical dimensions of space: length, width, height. Still classically, we can understand we extend this to wavelengths, considering anisotropic waves: as those waves do not propagate the same in all 3 directions, we can give them 3 independent wavelengths, one along each direction. The conceptual difficulty arises when we attempt to “glue” these 3 “wavy” dimensions to our 3 “corpuscular” ones, as we generally do not perceive these “extra 3” in daily life: we can’t move along them, can we? So, a 6-dimensional world doesn’t really speak to us, does it? Even a classical one. And claiming it’s to be made equivalent to a quantum 3-dimensional one does not clarify the situation at all… :) So, by pure convention, we usually assume that these 3 wavy dimensions are located “above” our 3 familiar ones, which is not true: in reality, all 6 stand on an equal footing and we are only limited in our perception of the space around us (as usual). We cannot visualize non-solid waves, but we still can feel their effects, so that we remain conscious that waves exist, but we still cannot link them to anything “dimensional”. The picture we have of them is that they “undulate through classical 3-space”. But this is only good for classical waves. It does not represent the quantum reality. The quantum reality is 6-dimensional (space) or 8-dimensional (space-time).

 

This is one thing: additional dimensions. Then, we have the question of areas and surfaces.

 

A classically-perceived plane is a 2-dimensional space. If we consider a square inside that plane, with side x(0), then its area s(0) = [x(0)]² will always be a non-negative quantity. Negative areas cannot exist in classical space geometry, where they would be interpreted as areas “smaller than a point”, which is an object of null size, and this would lead to an absurdity.

 

Things are different in a geometry like that of classical space-time or, now, in the quantum. A quantum plane is schematized as a 2-dimensional plane delimited by that “horizontal corpuscular axis” and that “vertical wavy axis”: they’re similar, but not of the same physical nature at all (as the time dimension in special relativity was similar to any of the 3 space dimensions, but not of the same nature at all). If x(ksi) is now the size of a quantum square inside our quantum plane, then its quantum area is to be calculated as:

 

(1)               s(sigma) = [s(0) , sigma] = [x(ksi)]² = {[x(0)]² , 2ksi}

 

so that,

 

(2)               s(0) = [x(0)]²

 

remains a non-negative quantity, as the “pure area” of our quantum square, while

 

(3)               sigma = 2ksi

 

gives the quantum state our quantum area is found in when its side is found in the state ksi.

 

If an experimenter wants to measure the “corpuscular amount” of s(sigma), he/she’ll measure its P1-projection:

 

(4)               s1(sigma) = [x(0)]²cos(2ksi) = [x1(ksi)]² – [x2(ksi)]²

 

If he/she wants to measure the “wavy amount”, he/she’ll measure the P2-projection:

 

(5)               s2(sigma) = [x(0)]²sin(2ksi) = 2x1(ksi)x2(ksi)

 

According to the sector the quantum state sigma (that of the object we’re studying) is in, both projections will be either positively-counted, zero or negatively-counted.

 

If 0 < sigma < pi/2 (sector I), 0 < ksi < pi/4 (45°), then both s1(sigma) and s2(sigma) will be measured positive.

If pi/2 < sigma < pi (sector II), pi/4 < ksi < pi/2, then s1(sigma) will be measured negative while s2(sigma) will remain positive.

If pi < sigma < 3pi/2 (sector III), pi/2 < ksi < 3pi/4, then both s1(sigma) and s2(sigma) will be measured negative.

And, if 3pi/2 < sigma < 2pi (sector IV), 3pi/4 < ksi < pi, then s1(sigma) will be measured positive while s2(sigma) will be negative.
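A minimal Python sketch of (1)-(5) and of this sector-by-sector sign pattern (the function name quantum_area is mine, purely for illustration):

import math

def quantum_area(x0, ksi):
    # (1)-(3): pure area and quantum state of a quantum square of side [x(0), ksi]
    s0, sigma = x0 ** 2, 2 * ksi
    # (4)-(5): "corpuscular" P1-projection and "wavy" P2-projection
    return s0, sigma, s0 * math.cos(sigma), s0 * math.sin(sigma)

for ksi in (math.pi / 8, 3 * math.pi / 8, 5 * math.pi / 8, 7 * math.pi / 8):
    s0, sigma, s1, s2 = quantum_area(2.0, ksi)
    print(f"sigma = {sigma:.3f}   s1 = {s1:+.3f}   s2 = {s2:+.3f}")   # sectors I to IV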

 

You’ll have noticed that, opposite to “classical” multiplication, “quantum” multiplication entangles the corpuscular and the wavy projections of the quantum side x(ksi). Despite this, s1(sigma) remains the “corpuscular” projection of the quantum square and s2(sigma), the “wavy” one. The planar representation could therefore easily lead to confusion. The polar representation is much clearer, as it singles out no projection; it instead gives the amplitude and the quantum state.

 

Let’s start from ksi = 0, that’s sigma = 0. Then, s1(0) = [x(0)]² is obviously maximal and corresponds to the value given by classical geometry, while s2(0) = 0, confirming that, from a strictly classical viewpoint, the square is entirely “corpuscular”, since its “wavy side” is reduced to a point. As ksi increases, we go deeper inside the quantum plane, s1(sigma) decreases while s2(sigma) increases: our quantum area acquires more and more “wavy content” and leaves more and more “substantial content”. When ksi reaches 45°, sigma = 90°, we stand on the P2-axis, s1(sigma) = 0 and s2(sigma) = [x(0)]² is now maximal: a classical P2-observer would come to the same conclusion as our previous P1-observer.

 

Let’s keep on increasing ksi. Then, sigma becomes greater than 90°, we change sector on the quantum plane, the “wavy content” of x(ksi) becomes greater than its “corpuscular” one, forcing s1(sigma) to decrease below zero and turn negative. However, the physical context is very different from the one found in space-time relativity. In space-time relativity, the “absolute area” s² = c²t² - x² = c²t²(1 – v_mean²/c²), where v_mean = x/t stood for the pure value of the mean velocity of a moving body, couldn’t turn negative without going out of the observation scope. This was because the body would then move faster than the signal it produces, always arriving before it. Now, physical bodies are observed through the signal they emit. If they arrive before it, they’re non-observable… Here, nothing of this happens. What happens instead is that we’re in a space with an open geometry (technically, “of hyperbolic type” – archetype: the horse saddle), like space-time, but without any specific restriction. In comparison, classical space had a closed geometry (“of elliptic type” – archetype: the rugby ball). Such a geometry allows areas only one sign, the positive one.

 

The same holds for s2(sigma). We can always transform it noticing that:

 

2x1(ksi)x2(ksi) = ½ {[x1(ksi) + x2(ksi)]² – [x1(ksi) – x2(ksi)]²}

 

which exhibits the same structure as s1(sigma). And it goes on changing sign as quantum states go round the unit-radius circle. There’s no conceptual objection to finding negative areas in the quantum context, even as projections, because there’s no definite sign in either s1(sigma) or s2(sigma); it’s now only a question of which behavior predominates over the other in the side x(ksi) of the quantum square.

 

You can straightforwardly generalize this to quantum rectangles. Taking two quantum sides x(ksi) and y(psi), the area of the quantum rectangle will be:

 

(6)               s(sigma) = x(ksi)y(psi) = [x(0)y(0) , ksi + psi]

 

Then, you examine sigma sector by sector: results will be the same. It’s just a bit more complicated because you now deal with two quantum states ksi and psi instead of a single one. You find more combinations for a given sigma, namely:

 

(7)               sigma = ksi + psi

 

instead of (3). So, instead of finding a single value ksi = sigma/2 as for the square, you find a continuous infinity of possibilities psi = sigma – ksi for each value of sigma.

 

Quantum volumes proceed the same. In place of (1), you find the quantum volume of the quantum cube:

 

(8)               v(stigma) = [x(ksi)]³ = [v(0) , stigma]

(9)               v(0) = [x(0)]³

(10)           stigma = 3ksi

 

As x(0) is never negative, neither is v(0); but, according to the sector stigma is found in, the projections v1(stigma) and v2(stigma) will be positively or negatively counted, or even be zero.

 

For a quantum parallelepiped:

 

(11)           v(stigma) = x(ksi)y(psi)z(zeta) = [v(0) , stigma]

(12)           v(0) = x(0)y(0)z(0)

(13)           stigma = ksi + psi + zeta

 

which, for each given value of stigma, draws a straight line, not in a “2D quantum state” anymore, but in a “3D” one. That’s a double continuous infinity of possibilities for zeta = stigma – ksi – psi.

 

In comparison, (9) or (12) show you again that classical volumes can only be found positive or zero.

 

B 136: ON A NEW MODEL FOR MIND

19/11/2017

This bidouille, again, for a large public (or, at least, for the public who’s NOT AT LARGE… or not yet… :) ).

 

For what follows, I’m basing myself on what neurobiologists tell us. Let’s sum it up again.

 

Each neuron cell taken individually inside a highly-sophisticated system such as the brain of mammals receives an average of 10,000 connections from other neurons, not necessarily close to it. Changeux claims it endows the soma of the cell with “a combinatorics of signals”. I disagree. Completely. Why? Because this addition of signals is then compared to a threshold, generating a “trigger effect”, and the output, at the end of the axon, will ultimately be binary (“0”: silent, “1”: active). You find exactly the same kind of dynamics inside inert media like silicon, silicon/manganese, etc. that are used in the computer-making industry; it’s known as the “transistor effect” and it leads to no arithmetic function at all in the device…

 

So, if you look for possible arithmetic functions inside the neuron, you may be disappointed…

 

Following this, Changeux points out, and this is very important, that, in most of the neurons composing the central nervous system (CNS – the brain), the synaptic cleft between two neurons is so small that only one pack of neurotransmitter can go through and, the crucial point is here: not systematically, even when the emitting neuron is active.

 

So, again, better not rely on the neuron in itself to forward information… This is good for much larger synaptic clefts such as, he explains, that between an axon of the motor system and a muscle, where some 300 packs of neurotransmitter can be scattered at the same time, guaranteeing a 100% transmission.

 

Conclusion: the more packs to be scattered, the higher the chance to convey information from an emitting neuron to a receiving one or, equivalently, the closer to 1 the “transmission coefficient T” of the cleft (to use an analogy with optics).

 

Unfortunately, this conclusion leaves the question of cerebral neurons wide open…

 

Basically, the isolated neuron can work 3 ways. It can be stimulated from the outside and, if that stimulation is higher than the trigger threshold value, the neuron responds. It can self-stimulate, thanks to its calcium channels. Or it can remain silent. But, whatever its reaction, the release of neurotransmitter is not 100% guaranteed in most situations.

 

Clearly, the neuron by itself (and I insist on this) cannot be kept as a “serious enough partner” for signal propagation and still less considered as an “arithmetic unit”.

 

Clearly, there must be another mechanism that improves signal transfer and “reinforces synapses”. Neurobiologists have now known the complete dynamics of the neuron, from the soma receptors down to the very ends of the axon, for 30-40 years, and they still stumble on how to link it with the production of “mental objects” (percepts, memory images and concepts). I pointed out many times that the way the cell works is by no means causal. So, two main hypotheses have been proposed in order to palliate this “little inconvenience”. The first one is to model the functioning of the brain by giving inter-neuron transmission a “probability of occurrence” and arguing the machinery would be Bayesian (from Bayes, who established rules on probabilities for connected events). However, as I said, this would be equivalent to allowing “pieces of signals”, whereas a signal is transmitted as a whole or isn’t. One more difference between mathematics and physics. The second hypothesis was Edelman’s “neural groups”, where populations of interconnected neurons of various numbers would collectively respond to an excitation of any single member of the group, through a global mechanism of “consistent resonance”. Changeux wasn’t convinced, as data also showed that there isn’t any static organization of any kind inside the CNS; instead, everything is dynamical and configurations change all the time. Besides, Edelman agrees that a given instruction can be forwarded through different networks, as long as it’s forwarded and leads to the same result(s). Changeux sees this ability, that “plasticity” (or adaptation faculty) of the brain, as a result of a “jungle” of connections rather than specified or dedicated units as in computers. Edelman too is strongly against comparing the animal brain to any Turing-Von Neumann sequential machine; they both say it doesn’t match observational data at all. The difference between them is that Edelman bases himself on the existence of “neural groups” to define his “noetic” machines and even build prototypes. The thing is: even the first 2 prototypes already reasoned closer to the animal brain than to a T-VN machine!

 

It’s therefore very hard to decide which way, which representation, is the best one and the closest to reality, as they all show their drawbacks but also their advantages…

 

I’m a very basic guy, probably one of the most basic you’ll find, so I always end in going back to the very bases.

 

And the base is: we have two neurons and, between them, a certain type of a neurotransmitter. One of the neurons is the emitter, the other one is the receiver.

 

Questions: which ones are the sources and which ones are the mediators?

 

Answers: sources are neuron cells, mediators are neurotransmitters.

 

Question: what could mind be made of, then? Neurons? No: neurons make the substrate.

 

Conclusion?

 

Mind would be made as a field of neurotransmitters.

 

Indeed, what makes the mental process? Is it the biological substrate, which produces the signal with no certainty and no causality at all? Or isn’t it rather the transmission of information from one unit of that substrate to another?

 

Everywhere in Nature, you have “supports” of information and transmission. Saying mind would be made of neurons would be equivalent to saying that electrons make the electro-magnetic interaction or that masses make the gravitational one, quarks the strong nuclear one, etc. It would be confusing the sources with the vehicles.

 

That mind is a biochemical process is now beyond all doubt. But, if we search for an understanding in the internal mechanics of the neuron, we just find nothing consistent enough able to build mental objects.

 

The vehicle of information in computers is the electric current. It’s made of moving electrons. Does the internal dynamics of transistors play any role in this? No: what’s relevant is what we have as inputs and what we get as an output, period. When we create programs, in order to run machines, we don’t care about what goes on inside transistors; we take input and output bits and we combine them in order to first make basic instructions, then instructions, then programs. We use the vehicles of information, not the substrate. The substrate is there to produce information. Now, mind is information, and this kind of information is only chemical, molecular. As between any non-neural cells of a living organism: two living cells communicate by exchanging molecules.

 

Now, if we base ourselves on the “neural jungle”, there’s potentially no way to build consistent patterns. In order to do so, we need structure, consistency and stability. We need an organization, should it only be ephemeral and changing with time. Stability becomes a necessity for memorization, especially long-term.

 

Well, in all physical systems, such properties are only accessible to non-linearities and feedbacks.

 

So, maybe we’d rather look at the feedbacks of the field of neurotransmitters onto the output of neurons, because only there is transmission occurring.

 

What would be the basic requirement for mental processes to perform?

 

That information be suitably forwarded from one neuron to the other.

 

According to what we saw, this requires an optimality criterion, namely, that the transmission factor T between two given neurons reaches 1. That’s 100% chance of transmission. If we reach it, that “path” is “secured”. If we want to change path, we favor another transmission factor, somewhere else, and decrease the previous one.

 

This is nothing but a regulating process and, as Changeux defined it, consciousness is that process which regulates mind.

 

So, what we get here is mind, now realized as a molecular field of neurotransmitters of various types, and a regulation process answering an optimality criterion, which helps dynamically structure mind and which we call consciousness. Patterns change with time, except for memory images, which remain stable much longer. Such “long-living” patterns correspond to fixed points in dynamical systems: the state they were in “in the last round” remains unchanged “in the new one”. And this, for a certain duration, that can last a whole life.

 

Let’s sum it up once more.

 

A neuron produces a type of neurotransmitter with only the probability T. That pack of neurotransmitters is then received by another neuron (up to possible leaks): information is transferred. The neurotransmitter is then destroyed. During molecular transfer, there’s a “quantum” of “mental information” produced. This is local. Globally now, or “less locally”, there’s a set of such “quanta”, of various types, making mind at a given time t. That structure, in turn, acts upon the synapses to reinforce their biological reactivity and, therefore, locally increase the transmission coefficient T. The next round, the same neuron will produce its neurotransmitters with a higher T. Again, mind will retro-act on its synapses until T reaches 1. However, that process is spatial: it concerns a synapse located at some point “x” of the brain. There still remains the possibility of a change in time. Changing neurons, we change network configurations (while neurons, of course, don’t move). Patterns can change shape.
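For what it’s worth, here is a deliberately naive toy sketch of that feedback loop in Python; the update rule, the decay of the non-favored synapses and the rate parameter are my own assumptions, not something taken from Changeux or Edelman, and the point is only to show T being pushed towards 1 on the reinforced path:

def update(T, favored, rate=0.2):
    # Reinforce the favored synapse, let the others slowly decay
    return [t + rate * (1 - t) if i == favored else t * (1 - rate / 4)
            for i, t in enumerate(T)]

T = [0.3, 0.3, 0.3]           # three synapses, initially unreliable
for _ in range(20):
    T = update(T, favored=0)  # mind "retro-acts" on synapse 0 each round
print([round(t, 2) for t in T])  # synapse 0 approaches 1, the others fade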

 

What gave me this idea is first the arguments I exposed hereabove and, second, the analogy I checked with 20th-century so-called “semi-classical phenomenological models” of interacting particle physics. Typically, you have a source field and a field for interaction. The source field is made of particles which produce “quanta” of interaction. What happens is that the source and the interaction it produces strongly couple. If you have a system of electric charges carried by electrons, for instance, and these electrons produce an electromagnetic field between them, this field, as long as it stays inside the system of charges (inside the electronic source field) then retro-acts onto these charges, modifying its dynamics. And so on, until an equilibrium is found. It can be mechanical or thermodynamical. When it cannot be found, we’re in a situation of chaos. And there too, there are very interesting patterns.

 

I think we’d rather explore this road instead of the one consisting in believing the neuron cell, because of its axon, would serve as a “wire” to transmit information. Nothing consistent (I don’t even say “logical” or “rational”, I only say “consistent”) can get out of this. What may fool us is that the transistor, which is an inert object, has a deterministic functioning: according to its internal structure, the minerals used, the inputs, it will deliver “0” (blocked) or “1” (saturated). So, maybe the partisans of Bayesian logic (or any other fuzzy logic) would think the neuron “transmits the nervous signal with a probability T”. It doesn’t seem to do so. It rather seems to get inputs and deliver an output in a non-deterministic way, because it’s a living cell. :)

 

 

B 135: BACK TO THE SOURCES OF THE QUANTUM

06/11/2017

One might object that the unexpected result in B134 could always be brought back to normal by using the modulo-2pi cyclic property. However, the result would still remain unclear, as it would be equivalent to multiplying by 1… :(

 

Here’s an article that is made for the largest public possible. Specialists will find unavoidable repetitions in it, but non-technicians are anything but familiar with the highly-sophisticated technical developments which led to the present 21st-century approach of quantum physics.

 

From the very first discoveries of atomic processes at the end of the 19th century to the most synthetic models of quantum theory proposed in the late 20th century, the fundamental idea that quantum processes were genuinely wavy, i.e. oscillating, drove all the developments for more than 100 years. It culminated by the end of the 1960s with “supersymmetric” models. These models were aimed at trying to unify matter and radiation at the level of “elementary” (i.e. non-composite) particles, but the principles upon which they were built didn’t restrict at all to the sub-nuclear level of description. I then decided to extend them, not only to much larger bodies with a much more complicated structure, but to everything, to begin with that “wave-corpuscle” duality. This is nothing else but Schrödinger’s “doctrine”, which says that absolutely all physical systems in Nature are to be endowed with a natural quantum structure. Supersymmetry only showed that the Schrödinger representation of the world was actually equivalent to doubling everything the classical approach described earlier. So, I’m inventing nothing, introducing nothing “revolutionary”, I’m just following the masters.

 

Basically, all these supersymmetric models which, I’d like to insist on that point, didn’t come “out of theoreticians’ imagination” but, instead, were a direct consequence of an accumulation of observational evidence in particle accelerators all along the 20th century, all these models were built on the assumption that the deep physical reality is oscillating: everything in Nature naturally oscillates, down to space and time themselves, so that today’s question is no longer “what are the physical mechanisms that damp these oscillations at scales higher than the sub-nuclear one?”, but “why don’t we directly observe and feel these oscillations in our current life?”. Surely, consciousness has something to do with this and we’re sent back to that central idea in quantum theory of an “interaction between the human observer and his surrounding environment”. But this is not the subject of the present article.

 

The subject of the present article is to start from the conclusion that, in order to oscillate, all physical objects, events and phenomena in Nature need to be doubled. It just cannot work otherwise. If we refuse to double things, then we conflict with observational evidence: it’s as simple as that. We don’t do this because it “suits” us, but because experimental facts impose it on us. Science is everything but speculation; it’s, on the contrary, perpetual deduction. So, let’s explain how it works. We’ll have no choice but to do a little bit of elementary math, but everything will be explained step by step in detail.

 

Let’s begin with a practical example that will also set a bit of terminology.

 

Let x(0) designate a “pure distance” between two objects or between the observer we are and an object we want to observe. “Pure” means “absolute”, that is, “unsigned”. By convention, an unsigned quantity is always positive or zero. There’s nothing absurd at all in demanding this to x(0): don’t we usually measure 1 meter and not -1 meter?

 

Let’s now associate an angle ksi to x(0). ksi is a quantity that stands between 0 and 2pi radians (or 360°, that’s a complete turn around the unit-radius circle), so that, every time we add 2pi, we make a complete turn and we retrieve our original ksi (same aperture). Because of this cyclic property of angles, they can be given any value in the continuum; that value can always be brought back to the interval [0,2pi] “up to a certain discrete number of turns”.

 

We therefore start with this pair [x(0),ksi] and we call it a quantum distance in polar representation. We then call for trigonometric functions, which are built as continuous functions on the unit-radius circle, and we define the “first (or “horizontal”) projection”:

 

(1)               x1(ksi) = x(0)cos(ksi)         (cos = cosine)

 

and the “second (or “vertical”) projection”,

 

(2)               x2(ksi) = x(0)sin(ksi)          (sin = sine)

 

Both obviously depend on the angle ksi. In order to understand what may happen, we have to be very methodical. We call:

 

-         ksi, the quantum state in which our quantum distance x(ksi) = [x(0),ksi] is found;

-         x(ksi) = [x1(ksi),x2(ksi)], the very same quantum distance, but now in planar (or “Cartesian”) representation;

-         x1(ksi), the “corpuscular-like” distance (or simply “distance”);

-         and x2(ksi), the “wavy-like” distance or wavelength.

 

Why this terminology? When the experimenter is going to evaluate that quantum distance x(ksi), he/she’s going to run two (series of) experiments. The first one is aimed at revealing the corpuscular behavior of that distance. Namely, the experimenter wants to shed light on the “little hard balls” that would serve as “solid vehicles” of space. He/she wants to exhibit the “granular structure of space”. A measure of x1(ksi) will give him/her this information. The second (series of) experiment(s) is aimed at revealing the wavy nature of space. This time, he/she sees space as a continuum or as a “signal” and x2(ksi) will give him/her this second piece of information. From these two complementary pieces of information, he/she’ll be able to deduce the quantum distance he/she is searching for:

 

(3)               [x(0)]² = [x1(ksi)]² + [x2(ksi)]²                   (Pythagoras’ triangle)

 

will give the pure distance x(0), while

 

(4)               ksi = Arctan[x2(ksi)/x1(ksi)]                       (Arctan = arc tangent)

 

will give the (main determination of) quantum state. You’ll notice x(0) no longer depends on ksi. This is because of the fundamental trigonometric relation cos² + sin² = 1.

 

So, what our experimenter actually does when he/she makes his/her measurements is: he/she projects the observed quantum “entity” (here, a distance) onto a “corpuscular axis” and a “wavy axis”. If he/she works in more than 1 dimension, each of these axes becomes a space, if not a space-time (with the same number of physical dimensions).

 

Yet, we forgot something important. We forgot that our experimenter is actually a human being and, as such, behaves as if he/she were a physical entity of the first projection only. This is because we see ourselves as “mostly if not entirely substantial” and, as “substantial” rhymes with “corpuscular”, we quite naturally place ourselves in the “corpuscular projection”. However, this is not what quantum theory tells us. Quantum theory says we should take into account the existence of a “double”, whom we, as “corpuscular observers”, see as “wavy” and who stands in the second projection. But then, if we follow quantum theory and exchange the roles, this “wavy” observer, that “double” of ours, considers him/herself in turn as “substantial”, just as we consider ourselves in our own space(-time), and now sees us as “wavy doubles”. There’s a necessary reciprocity in the way things are interpreted by both observers, because this is merely a question of perception, but the quantum reality isn’t this: the quantum reality says

 

There exists a single entity; it’s neither “substantial” nor “etheric”, it’s “quantum” and it represents a brand new form of existence with no familiar equivalent.

 

Here’s what quantum theory says. It only reminds us that performing experiments on the “corpuscular” or the “wavy” behavior of quantum objects is only aimed at reducing a reality we hardly grasp to more familiar behaviors, accessible to our perception.

 

There’s a widely used quantity to describe waves; it’s called the “wave number” k and it’s defined as the inverse of the wavelength multiplied by 2pi radians. These wave numbers are going to help us better visualize that complementarity between our two observers’ perceptions.

 

We take our “corpuscular-like P1-observer” first, that’s us in common life. As we saw, he/she perceives the distance x1(ksi) as “corpuscular” because it stands in the same space(-time) as he/she. As he/she perceives x2(ksi) as a “wavelength”, he/she will associate it with a wave number, caution: in his/her space. That’s a k1(kappa): k1, referring to P1! So, he/she’ll write:

 

(5)               k1(kappa) = 2pi/x2(ksi)

 

Our “wavy-like P2-observer” will react the same in his/her space(-time): he/she’ll now see x2(ksi) as “corpuscular” and x1(ksi) as “wavy”, therefore associating a wave number:

 

(6)               k2(kappa) = 2pi/x1(ksi)

 

Indeed, the sine function can always be identified with a cosine one and (2) also writes:

 

(7)               x2(ksi) = x(0)cos(ksi – pi/2)

 

which corresponds to a “corpuscular distance” delayed by a quarter of a turn on the unit-radius circle. Conversely, x1(ksi) can always be identified with a “wavy distance” advanced by a quarter of a turn:

 

(8)               x1(ksi) = x(0)sin(ksi + pi/2)

 

and this 90° shift precisely corresponds to exchanging P1 and P2… so, you see, the two are really complementary to one another and the distinction between “corpuscular” and “wavy” behavior is, in the quantum world, only a matter of perception.

 

Why introduce a different quantum state for wave numbers? Because:

 

(9)               k(0) = (2pi){1/[x2(ksi)]² + 1/[x1(ksi)]²}^(1/2) = 2pix(0)/|x1(ksi)x2(ksi)|

 = 4pi/[x(0)|sin(2ksi)|]

(10)           kappa = -ksi

 

kappa is opposite to ksi.
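Here is a quick numerical check of (9), a sketch under the same assumptions as above (ksi is chosen away from the axes so that neither projection vanishes):

from math import cos, sin, pi, isclose, hypot

x0, ksi = 2.0, 0.7
x1, x2 = x0 * cos(ksi), x0 * sin(ksi)
k1, k2 = 2 * pi / x2, 2 * pi / x1
k0 = hypot(k1, k2)                                     # {k1² + k2²}^(1/2), by analogy with x(0)
assert isclose(k0, 2 * pi * x0 / abs(x1 * x2))         # first form of (9)
assert isclose(k0, 4 * pi / (x0 * abs(sin(2 * ksi))))  # second form of (9)
print(k0)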

 

Let’s now examine some particular values of ksi.

 

When ksi = 0, x1(0) = x(0) and x2(0) = 0: a P1-observer will perceive x(ksi) as “entirely corpuscular” and “ahead of him/her”. A P2-observer will perceive it as “entirely wavy”. Notice that x(0) is also the maximal distance either projection can reach, as the sine and cosine functions stay between -1 and +1.

 

When ksi = pi/2 (90°), x1(pi/2) = 0 and x2(pi/2) = x(0): roles are permuted; a P1-observer will perceive x(ksi) as “entirely wavy”; a P2-observer, as “entirely corpuscular”.

 

When ksi = pi (180°), x1(pi) = -x(0) and x2(pi) = 0: same as for ksi = 0, except that x(ksi) is perceived behind these observers.

 

Finally, when ksi = 3pi/2 (270°), x1(3pi/2) = 0 and x2(3pi/2) = -x(0): same as ksi = pi/2, except for x(ksi) standing behind.

 

All other values of ksi are quantum, as they mix both projections.
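A small sketch tabulating the projections at the four “purely classical” values of ksi above, plus one intermediate value that genuinely mixes the two (the sampling is mine):

from math import cos, sin, pi

x0 = 1.0
for ksi in (0, pi / 2, pi, 3 * pi / 2, pi / 3):
    x1, x2 = x0 * cos(ksi), x0 * sin(ksi)
    print(f"ksi = {ksi:5.3f}  x1 = {x1:+.3f}  x2 = {x2:+.3f}")
# at 0, pi/2, pi and 3pi/2, one projection is ±x(0) and the other is zero;
# every other value of ksi mixes the two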

 

We can proceed the same way for anything other than space: it applies to time, to masses, etc.

 

Quantum states, offering additional degrees of freedom, open onto new physical dimensions.

 

Basically, we have four “sectors” (a small classification sketch follows the list):

 

-         sector I, ksi is between 0 and pi/2, x1(ksi) and x2(ksi) are both positively-counted (both “ahead of observers”);

-         sector II, ksi is between pi/2 and pi, x1(ksi) is negatively-counted (“behind”) while x2(ksi) is still positively-counted (“ahead”);

-         sector III, ksi is between pi and 3pi/2, x1(ksi) and x2(ksi) are both negatively-counted (both “behind”);

-         and sector IV, ksi is between 3pi/2 and 2pi, x1(ksi) is positively-counted (“ahead”) again while x2(ksi) is still negatively-counted (“behind”).
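Here is a hypothetical little helper classifying a quantum state ksi into these four sectors; the name and the convention of reducing ksi modulo 2pi are mine:

from math import pi

def sector(ksi):
    ksi = ksi % (2 * pi)            # bring ksi back onto the circle
    if ksi < pi / 2:
        return "I"                  # x1 and x2 both "ahead"
    if ksi < pi:
        return "II"                 # x1 "behind", x2 "ahead"
    if ksi < 3 * pi / 2:
        return "III"                # both "behind"
    return "IV"                     # x1 "ahead", x2 "behind"

print(sector(0.3), sector(2.0), sector(4.0), sector(5.5))   # I II III IV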

 

When applied to something like space, this brings nothing we aren’t familiar with. When applied to time, we first find the concept of a “corpuscular” time made of (still hypothetical) particles we could name “chronons”, and the concept of a “wavy” time that still doesn’t speak much to us. Let t(tau) be quantum time. Then:

 

-         in sector I, both t1(tau) and t2(tau) point towards the “future”;

-         in sector II, t1(tau) points towards the “past”, while t2(tau) still points toward the “future”;

-         in sector III, both t1(tau) and t2(tau) point towards the “past”;

-         in sector IV, t1(tau) points towards the “future” again, while t2(tau) still points toward the “past”.

 

So, we have this alternation between “future” and “past”, while “present” corresponds to tau = pi/2 or 3pi/2 from a P1-observer’s perspective [t1(pi/2) = t1(3pi/2) = 0] and to tau = 0 or pi from a P2-observer’s perspective [t2(0) = t2(pi) = 0]. But, as always, these are mere perceptions: in the quantum, there is no such thing as “past”, “present” or “future”. How could there be, if one is free to go “back to the future”?... :)

 

When applied to mass, things turn really weird for the experimenter. Let m(mu) be a quantum mass. Then:

 

-         sector I predicts m1(mu) and m2(mu) will both be positively-counted. A 20th-century experimenter would have interpreted this as a “particle of matter”;

-         sector II predicts m1(mu) negatively-counted while m2(mu) remains positively-counted;

-         sector III predicts m1(mu) and m2(mu) will both be negatively-counted. Our 20th-century experimenter would have interpreted this as a “particle of antimatter”;

-         finally, sector IV predicts m1(mu) positively-counted again while m2(mu) remains negatively-counted.

 

Our 20th-century observer, whether he/she “belongs to” P1 or P2, would surely have been completely disoriented by sectors II and IV, because they just didn’t match his/her beliefs. For him/her, a quantum particle had either negative or positive energy at rest (which is equivalent to mass through the Einstein relation E = mc²), but whatever its sign, it concerned both the “corpuscular” and the “wavy” components. Now, to my knowledge, there are no selection rules yet to assert that both projections should have the same sign… and, anyway, this is again a false problem, because the mass of a quantum particle at rest is the “pure mass at rest” m(0), which is always a non-negative quantity. So, the quantum principle applied to mass now tells us nothing else but this:

 

Contrary to our perception of things, there’s no such thing in the quantum world as “antimatter”, i.e. “matter with negative energy at rest”. Instead, there is quantum matter with mass at rest m(0) in a quantum state mu.

 

And, according to the sector in which that quantum state is found, we interpret the mass components as being “signed”. However, there’s no “sign” in the quantum, there’s a position on the circle.

 

And this is easily seen mathematically: if you can attribute a definite sign to a single quantity, how can you do so for a pair of such quantities? For a single number, you have two possible combinations: +x or -x; for a pair (x,y), you have four: (+x,+y), (+x,-y), (-x,+y) and (-x,-y). Only the first and fourth ones can be attributed a definite sign, because that sign is common to both components. But what about the two others? You can’t…

 

On the contrary, depending on the position (the “angular aperture”) you occupy on the circle, you can immediately generate all four combinations… :)
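A tiny illustration of that argument: a single position on the circle fixes both signs at once, and sweeping the circle produces all four combinations (one sample point per sector, values of my choosing):

from math import cos, sin, pi

for ksi in (pi / 4, 3 * pi / 4, 5 * pi / 4, 7 * pi / 4):
    signs = (1 if cos(ksi) >= 0 else -1, 1 if sin(ksi) >= 0 else -1)
    print(f"ksi = {ksi:.2f} rad -> signs {signs}")
# prints (1, 1), (-1, 1), (-1, -1), (1, -1): all four combinations appear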

 

So, unless selection rules come along to forbid the (+,-) and (-,+) combinations, we can’t exclude them. Now, I have serious doubts about the existence of such rules, because they would “spoil” the very definition of a quantum mass. And why would they apply to mass and not to space, to time and to anything else, then?... :|

 

I’d suggest a better explanation. P.A.M. Dirac himself wasn’t satisfied with his own introduction of “particles with negative energies”; he preferred to see them as “anti-particles with positive energies”, drawing an analogy with solid-state physics, where “holes” in the “energy sea” replaced the particles with negative energies. At that time (the 1920s), it was still assumed that free particles had to have positive energies or, at least, zero. States with negative energies were attributed to bound systems. And, as the wave-corpuscle duality had been proposed precisely because one couldn’t separate the corpuscle from the wave anymore, people quite logically assumed that, if one component had a sign, the other one should carry the same. But keep in mind this was in a non-oscillating space and a non-oscillating time. Making everything oscillate changes the entire picture… We don’t need to struggle with “particles” and “anti-particles” anymore, we now understand better why the projections are not reality at all, but severe reductions of it… We work in a radically transformed frame… We can see there’s no objection to having a “corpuscle” with positive energy and a “wave” with negative energy: the two do not interact with each other, there’s no “two”, there’s one, and that one is just allowed to carry two signs instead of a single one…

 

Should this shock the community? I don’t think so. After all, gluons carry two color charges (a color and an anticolor)… :|

 

All the experiments throughout the past century were designed from calculations made in a 4D space-time. Surely, an 8D one should lead to radically different results…

 

So, maybe we didn’t discover quantum particles in mass sectors II and IV because we didn’t search for them… because our experiments were based on the assumption that quantum particles should only belong to sector I or III… because our theoretical models were all founded on mirror symmetry… and from the moment you force a quantity like energy to remain a real number and not a pair of real numbers… well, you necessarily limit your possibilities…

 

Next time, I’ll talk about areas and volumes.

 

 
