doclabidouille


B 144: NO MORE "SPINOR SUB-STRUCTURE" THAN BUTTER IN BRANCH...

Le 27/05/2018

This is a quick remark about “spinors and space-time”.
 
It is usually assumed (or did I get it wrong?) that the special status of physical space-time as four-dimensional allows a one-to-one correspondence between it and non-commutative 2-dimensional complex structures known as “spinors”.
 
I strongly disagree with that argument. The correspondence in question:
 
  1. y^a = θ*_A σ^{aAB} θ_B
 
where small Latin indices run from 1 to 4 (or 0 to 3) and capital ones from 1 to 2, is a contracted, invariant product over the latter. So it can be used in any dimension and shows nothing “specific” to dimension 4. Instead of C² as the “spin space” and SU(2) as its invariance group, we can equivalently consider C^n and SU(n), for any n in N; it won’t change anything in the above formula, which can also be applied to any commutative, real-valued manifold of dimension d:
 
  2. y^a = θ*_A σ^{aAB} θ_B (a = 1,…,d; A,B = 1,…,n)
 
and this is consistent with the well-known fact that “any particle with spin s is represented in its rest frame as a symmetric spinor of rank 2s with 2s+1 components, whatever the value of s”. For s = ½, one finds 2-component vectors; for s = 1, symmetric matrices of M2(C) with 3 independent components; etc.
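The correspondence above is easy to check numerically. Below is a minimal numpy sketch (the spinor values are arbitrary illustrations of mine): it contracts a random spinor with the Pauli matrices, the usual choice of σ^a, and verifies that the resulting 4-vector is real and null; nothing in the contraction itself would break if the 2×2 Hermitian basis were replaced by an n×n one.

```python
import numpy as np

# Pauli matrices, with sigma^0 = identity: the usual basis for the correspondence
sigma = [
    np.eye(2, dtype=complex),
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]

def spinor_to_vector(theta):
    """y^a = theta*_A sigma^{aAB} theta_B, contracted over the spinor indices."""
    return np.array([np.real(theta.conj() @ s @ theta) for s in sigma])

rng = np.random.default_rng(0)
theta = rng.normal(size=2) + 1j * rng.normal(size=2)  # an arbitrary spinor
y = spinor_to_vector(theta)

# The resulting 4-vector is real and null: y0^2 = y1^2 + y2^2 + y3^2
assert np.allclose(y[0] ** 2, np.sum(y[1:] ** 2))
```

The null-vector property is a consequence of the Pauli-matrix identities, not of the dimension of the spin space; the contraction formula itself is dimension-blind, which is the point being made here.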
 
It follows that the above correspondence not only has nothing specific to the “external” dimension 4 (in terms of symmetries), it also makes no difference between spinors and tensors, that is, between fermions and bosons…
 
Geometrically, it means it does not define any “anti-commutative sub-structure” to the (pseudo-)Euclidean structure of Minkowski space-time, or of E^4 after performing a Wick rotation.
 
In practice, it means it brings me nothing I could use to “extend” or “refine” the properties of “classical space-time”… :|
 
Hence this remark.
 
If I use Pauli’s original spin space, it will be endowed with a skew-symmetric metric J_{AB} = -J_{BA}, associated with spin ½. If I use spin 1, I’ll simply double Pauli’s indices, obtaining matrix coordinates θ_{AB} in M2(C) in place of the former θ_A (symmetric, 3-component, analogous to a vector of E_C^3, the complexified 3D Euclidean space) and a metric J_{ABCD} = -J_{CDAB} = J_{BACD} = J_{ABDC} (3 components as well). The Grassmann property will write V^A W_A = -W^A V_A for spin ½ and V^{AB} W_{AB} = -W^{AB} V_{AB} for spin 1… The first one will imply V^A V_A = 0, while the second one will give V^{AB} V_{AB} = 0, which is not equivalent to V² = 0 since, in Euclidean 3-space, the metric is symmetric.
 
Actually, V^{ABC…} V_{ABC…} = 0 under a symplectic structure is perfectly normal for any completely symmetric V of rank 2s, whereas it leads to V_{ABC…} = 0 under a Riemannian structure [and to a null cone under a pseudo-Riemannian one with signature (1,n)].
 

 

B 143: Search for a 2nd UNIVERSAL frame...

Le 23/05/2018

I’ve been beating around the bush since the very beginning of this blog, several years ago (except, of course, for the articles about finance). My central concern is to find the proper universal frame that would complement space-time. This has proved the hardest task. I tried many approaches, quantum physics, space-time relativity,… yet couldn’t find anything that satisfied me enough. Indeed, as I have repeated many times, our best “witness” for parapsychological events is the Near-Death Experience (NDE). And the evidence seems categorical on one point: in order to understand what can happen then while staying consistent with neurobiological data, we need two bodies. Mind cannot be the candidate. Mind is a purely neurochemical process; it’s fully part of the biological one.

 

But we also need two physical frames, or we wouldn’t be able to explain why the biological body would not be involved in the NDE process, while the experiencer would discover a “second body, of a different nature”. And that second body is apparently not perceived by the medical team around. Now, directly observable or not, if that second body were in the same space-time as the biological one, its presence alone in the same room as the medics would be enough to induce “disturbances” they would perceive, even if absorbed in their task. Try that simple experiment: look straight at the neck of someone walking ahead of you and, more than 4 times out of 5, that person will suddenly turn around. Worth trying if you never did. It works, and pretty well. So, if this can work even though there’s no direct “influence” (no “field effect”, as we say in physics) between the observer and the observed, you can easily convince yourselves (and these are laws of physics) that exerting a direct influence around you through a “field of forces” would be perceived “almost for sure”, especially in a “confined” room…

 

Now, this is not what is reported, neither by patients nor by the medical staff. Instead, patients feel themselves “floating above their (biological) body”. However, they can see and hear everything going on inside the room; they can even see under the table; some left the room, went into corridors and still saw and heard everything… but in none of these circumstances did anybody report that he or she “felt an unobserved presence”.

 

There’s an apparent “contradiction” somewhere, right? On one hand, we should have an “aetheric body leaving the biological one”, which would suggest they originally were inside the same space-time; on the other hand, we have that same aetheric body behaving as if behind a “semi-transparent mirror”, able to see and hear everything, yet letting nothing pass through.

 

The only physically consistent way out would be to consider two space-times, one where the biological body is and one where the “aetheric” body is.

 

But it is not that simple, as it would still not explain why the aetheric body could perceive while “biological livings” wouldn’t (or even couldn’t!). Hence that intensive search for this second space-time, and for a larger universe too, that would include both space-times and both bodies.

 

The difficulty now is to find a second frame that would be as universal as space-time. Physics says a lot about specific frames, but almost nothing about another universal one. Here’s the general context, common to “classical” as well as “quantum” physics: there now exists a legion of physical fields inside 4D space-time; these are all parametrizations of the form f(x), where x is a space-time coordinate. Such parametrizations hark back to generalizations of the initial Galilean motion x(t) in 3-space. We can find fields like f with many components, not necessarily linked with space-time. Comparing f(x) to x(t) may incline one to think of the object “f” as a coordinate in another frame, different from space-time, since fields are usually not measured as lengths. The point is: each “additional frame” built this way is specific. For instance, the “electromagnetic space-time” using the four Maxwell potentials A_i is specific; Einstein’s “gravitational space-time” using the ten “potentials” g_ij is specific; Pauli’s “spinor space” using the two complex-valued ψ_A is specific… I once thought that the one-to-one correspondence between spinor coordinates θ_A and space-time ones y^i, y^i = θ*_A σ^{iAB} θ_B, could serve as a “second space-time”, but this construction actually refers to the original space-time itself: it says that, “under” the 4D commutative macroscopic structure of space-time described by “classical” physics, there’s a more fundamental, 2D, anti-commutative and wavy microscopic sub-structure that is spinorial and which is actually able to generate that “continuous” 4D “space-time tissue” at large scales… In other words: the correspondence between spinors and 4-vectors can be used in the same space-time; it does not require nor generate a “second one”… :|

 

Physics thus offers a plethora of possible “non-space-time” frames, but nearly all of them have nothing “universal”: they all refer to producing sources… This doesn’t make a frame. Space-time is something that can stand by itself, even in the classical approach: it’s an environment that can be completely empty and still be, proof that it’s not related to any source. You’ll tell me: “but fields in the vacuum are waves and they therefore depend on no characteristic like mass, charge,… of sources; they could become a candidate…”

I’ll reply: “no, because your ‘waves’ actually aren’t… Whatever I once thought, there isn’t anything like a ‘source-free field’. This is again a classical idealization. If you look only at semi-classical interacting models, you’ll immediately see that taking vacuum states into account eliminates all ‘waves’, because vacuum states interact with fields and act as a source term…”

The concept of “waves” only comes from the fact that the vacuum is neglected in the classical approach and associated with “nothingness”…

In fact, they are purely mathematical solutions, due to determinism. As soon as you take a statistical approach, you find fluctuations and those fluctuations, that do not vanish, act as a source.

It’s even so blatant that vacuum fluctuations can change the configuration of a system!

They can make it flip from one state to another…

 

No. I went back and forth, round and round, again and again and the only frame I’ve heard of that meets the requirements is the spectral one… That one is universal. There are former bidouilles about it, but I’d like to make another synthesis, because I feel I didn’t go deep enough in the physical content or I didn’t interpret it in the suitable way. We can actually make a geometrical synthesis between at least three approaches: oscillations, complex-number theory and spectral analysis.

 

 

 

 

B 142: QUANTUM THERMO (2)

Le 23/05/2018

Since quantum probabilities were assumed to oscillate like everything else, there are basic properties needing re-examination in order to understand the concept of statistical motion at the foundations of the microscopic “cement” of thermodynamics and heat transfers.

 

For two classical events A1 and A2 with probabilities of occurrence P1 = P(A1) and P2 = P(A2), the fundamental properties of probabilities are defined as follows:

 

(1)               P1 + P2 = 1

(2)               P(A1 AND A2) = P1P2

(3)               P(A1 XOR A2) = P1 + P2

(4)               P(A1 OR A2) = P1 + P2 - P(A1 AND A2)

(5)               P(A1|A2)P(A2) = P[(A1|A2) AND A2] = P(A2|A1)P(A1) = P[(A2|A1) AND A1]

 

Some comments, now.

 

Property (1) is known as the “normalization condition”, it says that the sum of probabilities linked with each event must be equal to 1, that is, we can be sure at least one of them is to occur.

 

Property (2) says the probability for two independent events to conjointly occur is equal to the product of the probabilities for each event, separately.

 

Property (3) says the probability for two disjoint events to occur is simply the sum of the probability of each event to occur. There’s a natural limitation here, due to the fact that the result, P1 + P2, remaining a probability, must be found between 0 and 1. In the case where there are only two events, property (1) guarantees the result is exactly equal to 1.

 

Property (4) is already a bit more involved: it says that the probability for at least 1 of 2 independent events to occur is equal to the sum (3) minus the conjoint probability (2).

 

Finally, property (5) is known as the Bayes rule; it’s about conditional probabilities: A1|A2 stands for “the realization of event A1 is conditioned on that of event A2”. P(A1|A2) then measures the chance that the conditioned event A1|A2 occurs. As you can see, there is a symmetry between the probability of the conjoint event (A1|A2) AND A2 and that of its “reciprocal” (A2|A1) AND A1, in the sense that they have an equal chance to occur.

 

It must be emphasized here that these basic properties of classical probabilities follow the same rules as “cardinal numbers” in set theory: in that mathematical theory, a “cardinal number” measures the total number of elements of a given set. In some way, probabilities of occurrence are a measure of the total number of elements of non-deterministic sets or “Borel sets”, brought back to the closed interval [0,1] (the deterministic situation corresponding to the Boolean pair {0,1}). As such, one expects them to follow the same rules as cardinals, which turns out to be the case as soon as events are considered as algebraic sets. This is to say that there’s not only a physical justification to properties (1) to (5); there’s also, above all, a much more formal mathematical one.
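Properties (2) and (4) can be checked by brute-force enumeration of the joint experiment; here is a small Python sketch (the values of P1 and P2 are arbitrary illustrations):

```python
from itertools import product

# Two independent events with illustrative probabilities P1, P2
P1, P2 = 0.3, 0.6

# Enumerate the 4 outcomes of the joint experiment: (A1 occurs?, A2 occurs?)
outcomes = {(a1, a2): (P1 if a1 else 1 - P1) * (P2 if a2 else 1 - P2)
            for a1, a2 in product([True, False], repeat=2)}

p_and = outcomes[(True, True)]                       # conjoint occurrence
p_or = sum(p for (a1, a2), p in outcomes.items() if a1 or a2)

assert abs(p_and - P1 * P2) < 1e-12                  # property (2)
assert abs(p_or - (P1 + P2 - P1 * P2)) < 1e-12       # property (4)
```

The four outcome probabilities sum to 1 by construction, which is the set-theoretic (cardinal-like) bookkeeping the paragraph above alludes to.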

 

Properties (1-4) easily generalize to N classical events A1,…,AN with probabilities P1,…,PN:

 

(6)               Σ_{i=1}^{N} Pi = 1

(7)               P(AND_{i=1}^{M} Ai) = P1…PM             (1 ≤ M ≤ N)

(8)               P(XOR_{i=1}^{M} Ai) = Σ_{i=1}^{M} Pi             (1 ≤ M ≤ N)

 

To generalize (4), a bit of explanation is needed, as the process is iterative. For three events, one has:

 

P[(A1 OR A2) OR A3] = P[(A1 OR A2)] + P(A3) - P[(A1 OR A2)]P(A3)

= (P1 + P2 + P3) - (P1P2 + P2P3 + P3P1) + (P1P2P3)

= P[A1 OR (A2 OR A3)] = P(A1 OR A2 OR A3)

 

For four events,

 

P(A1 OR A2 OR A3 OR A4) = (P1 + P2 + P3 + P4) - (P1P2 + P2P3 + P3P1 + P1P4 + P2P4 + P3P4)

                                                + (P1P2P3 + P1P2P4 + P1P3P4 + P2P3P4) - (P1P2P3P4)

 

One can see the expressions are completely symmetric with respect to the events. The explanation lies in the fact that the terms I voluntarily placed between brackets are the coefficients of the (algebraic) polynomial of degree M, (P’ - P1)…(P’ - PM). Indeed, for M = 2:

 

(P’ - P1)(P’ - P2) = P’² - (P1 + P2)P’ + P1P2 = P’² - c1P’ + c2

 

so that (4) rewrites P(A1 OR A2) = c1 - c2. For M = 3,

 

(P’ - P1)(P’ - P2)(P’ - P3) = P’³ - (P1 + P2 + P3)P’² + (P1P2 + P2P3 + P3P1)P’ - P1P2P3

     = P’³ - c1P’² + c2P’ - c3

 

and P(A1 OR A2 OR A3) = c1 - c2 + c3. You got it now: P(A1 OR A2 OR A3 OR A4) = c1 - c2 + c3 - c4 and so on, where the ci all depend on the M probabilities Pj. As a result:

 

(9)               P(OR_{i=1}^{M} Ai) = Σ_{k=1}^{M} (-1)^{k-1} Σ_{i1=1}^{M-k+1} Σ_{i2=i1+1}^{M-k+2} … Σ_{ik=i(k-1)+1}^{M} Pi1…Pik

(1 ≤ M ≤ N)
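Formula (9) is the alternating sum of the elementary symmetric coefficients ck introduced above. A short Python sketch (probabilities chosen arbitrarily) can check it against the equivalent closed form 1 - (1 - P1)…(1 - PM), valid for independent events:

```python
from itertools import combinations
from math import prod

def p_or_symmetric(probs):
    """Formula (9): alternating sum of the elementary symmetric coefficients c_k."""
    M = len(probs)
    total = 0.0
    for k in range(1, M + 1):
        c_k = sum(prod(t) for t in combinations(probs, k))  # k-th symmetric sum
        total += (-1) ** (k - 1) * c_k
    return total

probs = [0.1, 0.25, 0.4, 0.5]
# For independent events, P(OR) is also 1 minus the product of non-occurrence probabilities
assert abs(p_or_symmetric(probs) - (1 - prod(1 - p for p in probs))) < 1e-12
```

The agreement is exact because expanding the product (1 - P1)…(1 - PM) yields precisely the ck with alternating signs, which is the polynomial argument made above.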

 

As for Bayes, it generalizes into:

 

(10)           P(Ai+1|Ai)P(Ai) = P(Ai|Ai+1)P(Ai+1)           (1 ≤ i ≤ N-1)

 

 

The now “cyclic” (…) question is: what does this all become in the quantum?

 

Let Ai(αi), 1 ≤ i ≤ N, be N quantum events with probabilities of occurrence Pi(Πi) = P(Π)[Ai(αi)]. According to the rule on the sum of pairs:

 

(11)           Σ_{i=1}^{N} [Pi(0), Πi] = [P(0), Π]

(12)           [P(0)]² = Σ_{i=1}^{N} [Pi(0)]² + 2 Σ_{i=1}^{N-1} Σ_{j=i+1}^{N} Pi(0)Pj(0)cos(Πi - Πj)

     = [Σ_{i=1}^{N} Pi(0)]² - 4 Σ_{i=1}^{N-1} Σ_{j=i+1}^{N} Pi(0)Pj(0)sin²[½(Πi - Πj)]

(13)           tan(Π) = [Σ_{i=1}^{N} Pi(0)sin(Πi)] / [Σ_{i=1}^{N} Pi(0)cos(Πi)]

 

Pi(0) = P(0)[Ai(0)] is the probability that the classical event Ai(0) occurs. Πi is the quantum state of the probability Pi(Πi) = P(Π)[Ai(αi)] that the quantum event Ai(αi) occurs (while αi is the quantum state of this event itself). As:

 

(14)           (1,0)/[P(0), Π] = [1/P(0), -Π]

 

the quantum equivalent of the normalization condition (1) writes:

 

(15)           [1/P(0), -Π] Σ_{i=1}^{N} [Pi(0), Πi] = Σ_{i=1}^{N} [Pi(0)/P(0), Πi - Π] = (1,0)

 

We can check that this includes negative probabilities. Indeed, for Π = π, [P(0), π] = -P(0) and tan(π) = 0, which, according to (13) [together with tan(0) = 0], corresponds to a “Fresnel-like” relation:

 

(16)           Σ_{i=1}^{N} Pi(0)sin(Πi) = 0

 

Conversely, when all the Πi are equal, and equal to π, all Pi(Πi) = -Pi(0), relation (16) is automatically fulfilled, P(0) = Σ_{i=1}^{N} Pi(0), Π = π (it cannot be 0) and Σ_{i=1}^{N} [Pi(0)/P(0), 0] = Σ_{i=1}^{N} Pi(0)/P(0) = (1,0) = 1 becomes a classical tautology. We therefore need to specify that, if the total number of classical events likely to occur is N, then Σ_{i=1}^{N} Pi(0) = 1.
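One convenient way to check (12) and (13) is to model each quantum probability [Pi(0), Πi] as the complex number Pi(0)·e^{iΠi}; that identification is my reading of the bracket notation, not something stated explicitly above, and the amplitudes and states below are arbitrary illustrations:

```python
import cmath
import math

# Assumed reading of the notation: [P(0), PI] is the complex number P(0)*e^{i*PI}
def qprob(p0, state):
    return p0 * cmath.exp(1j * state)

terms = [qprob(0.2, 0.3), qprob(0.5, 1.1), qprob(0.3, -0.7)]
total = sum(terms)
p0s = [abs(t) for t in terms]
states = [cmath.phase(t) for t in terms]

# (12): squared amplitude of the sum, with cross terms in cos(PIi - PIj)
rhs = sum(p ** 2 for p in p0s) + 2 * sum(
    p0s[i] * p0s[j] * math.cos(states[i] - states[j])
    for i in range(len(terms)) for j in range(i + 1, len(terms)))
assert abs(abs(total) ** 2 - rhs) < 1e-12

# (13): tangent of the resulting quantum state
ratio = sum(p * math.sin(s) for p, s in zip(p0s, states)) \
      / sum(p * math.cos(s) for p, s in zip(p0s, states))
assert abs(math.tan(cmath.phase(total)) - ratio) < 1e-9
```

Under that identification, (11)-(13) are just the polar decomposition of a sum of complex numbers, which is why both checks hold to machine precision.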

 

Extending (7) is easy. According to the quantum product, it’s simply:

 

(17)           P(Π)[AND_{i=1}^{M} Ai(αi)] = P1(Π1)…PM(ΠM)                       (1 ≤ M ≤ N)

(18)           P(0)[AND_{i=1}^{M} Ai(0)] = P1(0)…PM(0)

(19)           Π[AND_{i=1}^{M} αi] = Σ_{i=1}^{M} Πi

 

Amplitudes multiply, giving back classical property (7), while quantum states add.

 

To quantize (9), we need to extend the (alternated) sums of products. Σ_{i1=1}^{M} Pi1(Πi1) is already done. Let’s call it P’1(Π’1) = [P’1(0), Π’1], so as not to confuse it with P1(Π1). Then:

 

Σ_{i1=1}^{M-1} Σ_{i2=i1+1}^{M} Pi1(Πi1)Pi2(Πi2) = [P’2(Π’2)]² = {[P’2(0)]², 2Π’2},

Σ_{i1=1}^{M-2} Σ_{i2=i1+1}^{M-1} Σ_{i3=i2+1}^{M} Pi1(Πi1)Pi2(Πi2)Pi3(Πi3) = [P’3(Π’3)]³ = {[P’3(0)]³, 3Π’3},

Σ_{i1=1}^{M-k+1} Σ_{i2=i1+1}^{M-k+2} … Σ_{ik=i(k-1)+1}^{M} Pi1(Πi1)…Pik(Πik) = [P’k(Π’k)]^k = {[P’k(0)]^k, kΠ’k},

 

and the highest contribution, k = M, was done too, in (17). Finally:

 

(20)           P(Π)[OR_{i=1}^{M} Ai(αi)] = Σ_{k=1}^{M} (-1)^{k-1} [P’k(Π’k)]^k                 (1 ≤ M ≤ N)

    = -Σ_{k=1}^{M} [P’k(Π’k + π)]^k

 

is the quantum formulation of the probability for at least one of M quantum events to occur. Decomposing it into classical amplitudes and quantum states won’t lead to simple formulas at all, given that the classical formulation (9) is already rather complicated.

 

Bayes becomes:

 

(21)           P(Π)[Ai+1(αi+1)|Ai(αi)]P(Π)[Ai(αi)] = P(Π)[Ai(αi)|Ai+1(αi+1)]P(Π)[Ai+1(αi+1)]        (1 ≤ i ≤ N-1)

 

Amplitudes give the classical Bayes rule back, while quantum states verify:

 

(22)           Π(αi+1|αi) + Π(αi) = Π(αi|αi+1) + Π(αi+1)

 

which is automatically satisfied for:

 

(23)           Π(αi|αi+1) = -Π(αi+1|αi)

 

giving,

 

(24)           Π(αi+1|αi) = ½ [Π(αi+1) - Π(αi)]

 

with the straightforward consequence that,

 

(25)           Π(αi|αi) = 0

 

A normal result, after all, since an event cannot be conditioned on itself prior to occurring…

 

(classical Bayes says nothing about this, as the relation reduces to a mere identity)

 

 

Mean values, variances and higher moments

 

Let now x stand for a classical statistical variable able to take N discrete values xi with the probability Pi = P(xi). The (statistical) mean value of x is the number:

 

(26)           <x> = Σ_{i=1}^{N} xiPi

 

and the moment of order m of x (or “mth moment”) is the statistical mean value of the mth power of x:

 

(27)           <x^m> = Σ_{i=1}^{N} (xi)^m Pi

 

These very general definitions can be readily extended to the quantum under the form:

 

(28)           <[x(ξ)]^m> = Σ_{i=1}^{N} [xi(ξi)]^m Pi(Πi)  ,  Pi(Πi) = P[xi(ξi)]

 

assuming we now consider a quantum statistical variable x(ξ) = [x(0), ξ] likely to take N discrete values xi(ξi) = [xi(0), ξi] with probabilities Pi(Πi), i = 1,…,N. However, we need to be careful about something, as x is no longer deterministic, but statistical, that is, random: opposite to a deterministic variable with a series of N values to take, we’re no longer sure in advance that the value xi (in the classical) or xi(ξi) (in the quantum) is going to occur. We can only predict it will, with a chance of realization Pi [or Pi(Πi)]. If, in the classical, this has no other consequence than “being unknown in advance”, it does have one in the quantum, when we evaluate mean values, because of the summation over the N states. As we know, this summation is going to induce interferences between terms. Now, intuitively, how can we conceive an interference between a value that is indeed going to occur (in a near future) and another that will not? Or, worse, between two values that both won’t?... :|

 

Let’s take N = 2 and m = 1 as an illustration. We then have that mean value:

 

(29)           <x(ξ)> = x1(ξ1)P1(Π1) + x2(ξ2)P2(Π2)

         = [x1(0)P1(0) , ξ1 + Π1] + [x2(0)P2(0) , ξ2 + Π2]

 

According to the second form of (12), for N = 2, the amplitude of that mean value is therefore:

 

(30)           <x(0)>² = [x1(0)P1(0) + x2(0)P2(0)]² -

- 4 x1(0)x2(0)P1(0)P2(0)sin²{½[ξ1 - ξ2 + Π1 - Π2]}

 

and its quantum state:

 

(31)           tan(<ξ>) = [x1(0)P1(0)sin(ξ1 + Π1) + x2(0)P2(0)sin(ξ2 + Π2)] /

[x1(0)P1(0)cos(ξ1 + Π1) + x2(0)P2(0)cos(ξ2 + Π2)]

 

These are the values we expect. Mean values are tendencies. After observation, what if x1(0) occurs, but not x2(0)? Then the result we observe will have become P1(0) = 1, P2(0) = 0 and <x(0)> = x1(0), tan(<ξ>) = tan(ξ1 + Π1) = tan(ξ1) since Π1 will be zero; finally, <ξ> = ξ1 modulo π, so that <x(ξ)> will either be x1(ξ1) or -x1(ξ1). So here we are, with a value x2(ξ2) we predicted (because we couldn’t do otherwise) but which did not concretize.
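Identifying each pair [amplitude, state] with the complex number amplitude·e^{i·state} (again my reading of the notation, with arbitrary illustrative values), formula (30), interference term included, can be verified directly:

```python
import cmath
import math

# Two quantum values x_i(ksi_i) and their quantum probabilities P_i(PI_i);
# all numbers below are arbitrary illustrations
x1, ksi1 = 1.4, 0.2
x2, ksi2 = 0.8, 1.0
P1, PI1 = 0.6, 0.5
P2, PI2 = 0.4, -0.3

# The mean value (29) as a sum of two complex terms
mean = x1 * cmath.exp(1j * ksi1) * P1 * cmath.exp(1j * PI1) \
     + x2 * cmath.exp(1j * ksi2) * P2 * cmath.exp(1j * PI2)

# Formula (30): squared amplitude of the mean, interference term included
amp2 = (x1 * P1 + x2 * P2) ** 2 \
     - 4 * x1 * x2 * P1 * P2 * math.sin(0.5 * (ksi1 - ksi2 + PI1 - PI2)) ** 2
assert abs(abs(mean) ** 2 - amp2) < 1e-12
```

Setting P1 = 1, P2 = 0 and PI1 = 0 in this sketch reproduces the collapse case discussed just above: the interference term drops out and the amplitude reduces to x1(0).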

 

Where does the interference term in (30) actually come from, then?

 

From our own prediction process and nowhere else. Had we made no prediction, had we simply awaited the results to come out, we would have found no interference anywhere, because results are submitted to a chance of realization. So:

 

Only in the deterministic are interferences unavoidable.

In the statistical, because results are weighted by their chances of concrete realization, interferences are only due to the prediction the observer makes.

 

When neither of these two events occurs, our theoretical predictions (30-31) fall completely apart, since the observed result is then <x(0)> = 0 and tan(<ξ>) = 0, implying <x(ξ)> = 0…

 

So, be extremely careful with probabilities, because we only try to guess results with them. And if we take them too much for granted, in the quantum, they will even induce false models containing artificial interferences… :| Instead, always keep in mind that:

 

THE PREDICTION IS NOT THE RESULT.

 

And the worst of the worst would be to “transfer probabilities from the initial values they were attached to onto the trigonometric function characterizing the interference term”: this would be total nonsense… :(

 

Well, maybe here, and to the disappointment of technicians, the technical impossibility for us to measure correlated quantities with maximum accuracy led 20th-century physicists onto an inappropriate road, when they accepted that quantum measurement was to be taken as fundamentally statistical, just because “one couldn’t know in advance”, even though they meanwhile accepted the fact that statistics based on that spectroscopic limitation “had nothing quantum in itself”. Statistics is found everywhere in the Universe… The motion of meteors in the solar system is entirely statistical… Nature doesn’t “all of a sudden” turn statistical because we cannot measure both the signal and its frequency spectrum on correlated quantities… I even go so far as to think that, if we had a more powerful mathematical tool than the linear integral, maybe (I say: maybe) we could find exact solutions to the many-body problem beyond 2 bodies without needing to introduce statistics… Poincaré was the first to recognize that motions in systems with more than 2 bodies couldn’t be determined because the problem was “not integrable in quadratures”… so that we had no deterministic tool to show us the shape of the general solution… This doesn’t mean that, because we lack such tools, Nature should be statistical…

 

“Quantum” means “naturally, spontaneously, oscillating”. It doesn’t mean “statistical”, “non-commutative” or else… Again, in the solar system around us, some motions are non-commutative, because they’re bounded!... :)

 

Another preconceived idea in quantum theory was that the “non-relativistic vacuum state is Gaussian”. “Non-relativistic” in the sense that it follows Galileo’s relativity of space only. The “vacuum state” is the state of lowest energy, with no field particle present, only (guess what?) “statistical fluctuations”. And Gaussian? I’m sorry, but take any introductory book on probabilities and statistics and you’ll find written in it that the Gaussian distribution (the bell curve) is an approximation (only an approximation) of the much more general binomial distribution, valid when the total number of samples is very high and values stay extremely close to their mean value… Those are significant (not to say “severe”) restrictions. To put it differently, the Gaussian distribution has nothing universal at all… and we made it a rather universal feature of quantum vacuums… In the theory of the “quantum oscillator”, for instance, it clearly appears that the “wavefunction” of the vacuum is a Gaussian, and the “excited modes” (where particles are produced) are derivatives of that Gaussian… wow… and people were surprised, by the end of the 1990s, that the whole building collapsed when confronted with astronomical data… :|
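The point that the Gaussian is only a large-sample approximation of the binomial is easy to illustrate numerically: comparing the binomial mass at its mean with the Gaussian density of the same mean and variance, the relative error shrinks as N grows. A quick Python sketch (N and p values are arbitrary illustrations):

```python
import math

def binomial_pmf(N, p, k):
    """Exact probability of k successes out of N trials."""
    return math.comb(N, k) * p ** k * (1 - p) ** (N - k)

def gaussian_density(N, p, k):
    """Gaussian approximation with matching mean N*p and variance N*p*(1-p)."""
    mu, var = N * p, N * p * (1 - p)
    return math.exp(-(k - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

p = 0.5
def rel_err(N):
    k = N // 2  # evaluate at the mean, where the approximation is best
    return abs(binomial_pmf(N, p, k) - gaussian_density(N, p, k)) / binomial_pmf(N, p, k)

# The approximation only becomes good for a very large number of samples
assert rel_err(1000) < rel_err(10)
```

Away from the mean the discrepancy is worse still, which is exactly the pair of restrictions (large N, values close to the mean) recalled above.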

 

I’m sure a lot of people have thought from the beginning that I’m “wasting my time” re-examining, one by one, the fundamentals, the basis, of quantum theory. But it appears that we went very, very far away from all these fundamentals. And what this global re-examination is showing me up to now is that I don’t get the same reading of the equations as the one I can find in all my literature about quantum theory, “relativistic” or not… :|

 

For “intellectual recreation”, you can still calculate <[x(ξ)]²> and the variance, for N events. You’ll come up with the same conclusion: artificial interferences, except for all probabilities at 0 or 1… Why don’t I do it? For always the same reason: because taking probabilities as a physical reality, and not as the mere mathematical tool they should be, induces that fake image of mean values being “bits of mixed values”. Look again at (26): if we consider Pi as having some physical content, then we’re led to believe that xiPi is “a bit of xi”; as a consequence, we’ll get that picture, maybe unconsciously, that <x> is a “sum of bits”, giving another “bit”… which is absolutely not reality. It only gives us a global tendency of a reality we expect.

 

Where it becomes really concerning, and even dreadful, is when, in finance, social sciences or economics, this is turned into a master rule… :| As if it were the way flows behave… As if forecasting the weather five days ahead, despite a known physical limitation due to chaos, were “natural” because our statistical models say so…

 

Wait… we’re making here predictions a reality… :| we’re making expectations certainties… “it’s got to be this way, because all tendencies converge to this, our models say…”

 

We find it easier, here, to see the world around us as we’d like it to be rather than as it is, whether we like it or not…

 

“oh, look at these formulas: quantum theory extremely complicated, especially when you introduce time relativity, lengthy expressions you find nowhere else, not even at the IRS… (did I mention it? No, I pay my taxes like anybody else, no worry…), it requires the latest super-computers… and still, you make them smoke…”

 

:|… The fundamental can only be poor… :) and when you take away from quantum theory all that is not quantum… what remains looks basic, childish… as any basic environment… :)

 

The world wasn’t made complicated, it was made simple.

 

And it rather seems that the most complicated thing for the human mind is to make things simple… :)

 

We built a society where, if you make things too simple, you’re asked: “what did you study for, all these years?...”

 

And an “Eastern-style” answer like: “I’ve studied to understand, to work this knowledge, and to realize, in the end, that I had to go back to the most fundamental and do it all again” is just not tolerated… :)

 

Instead, the “popular” reaction is: “it’s impossible, it just can’t be as simple as that…”

 

Except that Nature 1) was born well before us and 2) doesn’t care at all what we think… :)

 

We’re mere observers. We learn from it, not the converse.

 

Nature has nothing to learn from us so far, except about our genuine appetite for (self-)destruction, absurdities, useless complications… and self-satisfaction.

 

I’m satisfied with doing as simple as I can. And if I could do even simpler, I’d do it.

 

As R.P. Feynman used to say in his lectures: “the equation of Nature is U = 0; the problem is, we don’t have a clue what U is…” :)

 

This could sound like a criticism; when I was younger, I’d have agreed; now that I’ve grown older, it’s not, it’s a mere observation. And, somewhere, maybe… it’s worse. :)

It’s worse, because we’re destroying a civilization that also made great things. Only because “we wanna be God before God”… :( but that’s the way it is, I fear we’ve gone too far to go back, and that’ll be my final word.

 

 

 

B 141: QUANTUM THERMO (1)

Le 15/12/2017

We now turn to a topic I have wanted to study for quite a long time: thermodynamics. It may interest financiers as well, as the principles of thermodynamics and statistical physics find applications in finance, through stochastic processes. As we’re going to see, the extension of thermodynamics to the quantum leads to very different conclusions and requires a review of well-anchored preconceived ideas about probabilities, temperatures and entropies (or “degrees of disorder”).

 

We begin with the concept of probability.

 

Classically, a probability P(0) is the chance an event has to occur. It measures the lack of certainty we have about whether that event will occur or not. When we can be sure it’s going to occur, P(0) = 1, meaning the chance of occurrence is 100%, and we call the event deterministic (i.e. such that it can be determined in advance with certainty). At the other end, when we’re sure it cannot occur, we call its realization impossible and P(0) = 0. So, classically, we can understand that a negative probability would have no meaning, for it would mean that the event in question is “even more impossible to occur”, which is absurd in itself: once it’s proven impossible to occur, it’s simply impossible, period.

 

Geometrically now, any physical quantity can be attributed a space, which does not need to be physical. Non-physical spaces are called “abstract” and only serve to visualize things. Such is the case for a 1D “probability space”, a purely mathematical one, identified with the closed interval [0,+1]. The classical theory of probabilities tells us that its symmetric counterpart, [-1,0], is to remain empty, the geometrical translation of “there exist no negative probabilities”. So we have a kind of “polarization” here, where the entire content is in [0,+1] and there’s nothing “on the other side of the zero”.

 

In the quantum, things become different. The classical probability P(0) only represents the amplitude of the quantum probability:

 

(1)               P(Π) = [P(0) , Π]

 

P(0) continues taking its values in [0,+1], but the presence of a quantum state Π now demands, not only a prolongation of this interval to a symmetric [-1,0], but that the so-prolonged axis [-1,+1] be doubled. When Π = 0, we’re in the initial segment [0,+1]. According to what we saw in B140, when Π = π/2, the axes are permuted, so that this segment is found on the “wavy” axis, while the “corpuscular” probability falls to 0. What this actually says is that the chance to see the quantum event display itself in a substantial way is now null, so that this event is to be expected as purely “wavy”, with a probability of occurrence P(0). But when Π = π, the axes are turned upside-down, so that the initial [0,+1] segment becomes the [-1,0] one. As a result, [0,+1] is now empty: it has been emptied of all its elements, transferred to its symmetric with respect to zero. So, when Π = π, the classical situation is somehow “reversed” and the probabilities P(π) = [P(0),π] are only defined between -1 and 0. Positive ones do not exist, as they would, in turn, be “even more unlikely than the impossible”, which would again sound absurd… :) Finally, when Π = 3π/2, we have a combination of reversion and permutation of the axes: the event is expected as purely wavy, with a negative probability -P(0) of occurrence and no chance to appear under a substantial form.

 

In the quantum, the “positive” is simply the unsigned, the “negative” is a “positive” in phase opposition and, conversely, the “positive” is a “negative” in phase opposition.

 

This is what is called in mathematics an involution:

 

-(-1) = +1 = 1

 

An operator (here, “change of sign”) applied twice gives back the original result. In other words, it “neutralizes itself”. Here, a first application of “change the sign” turns 1 into -1; a second application annihilates the first: --1 = +1 = 1. An involution can thus be seen as an alternation of creations and annihilations: ---1 = -1, and so on.
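A one-line sketch of this involution property (illustrative only):

```python
# "change of sign" as an operator
change_sign = lambda x: -x

# applied twice, it neutralizes itself: an involution
once = change_sign(1)        # -1
twice = change_sign(once)    # back to +1
thrice = change_sign(twice)  # -1 again: alternation of creations/annihilations
```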

 

If we project P(PI) onto its “corpuscular” and its “wavy” axis, we will find oscillating probabilities:

 

(2)               P1(PI) = P(0)cos(PI)  ,  P2(PI) = P(0)sin(PI)

 

always smaller (in unsigned value) than the classically calculated value: in the quantum, the classical is always the “most optimistic” measure. So, there is already a correction to be brought between probabilities of occurrence predicted by a classical approach and those predicted by a quantum one. That correction is obviously due to taking quantum states into account, which always revise results downward.
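The claim that both projections (2) stay below the classical value in unsigned terms can be checked numerically (a sketch; the names are mine):

```python
import math

def projections(p0, phase):
    """Eq. (2): P1 = P(0)cos(PI), P2 = P(0)sin(PI)."""
    return p0 * math.cos(phase), p0 * math.sin(phase)

# sweep the quantum state over a full turn: the classical P(0) is
# always the "most optimistic" (largest unsigned) measure
p0 = 0.4
bound_holds = all(
    abs(p) <= p0 + 1e-12
    for k in range(360)
    for p in projections(p0, math.radians(k))
)
```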

 

With the concept of probability comes that of entropy. Entropy is a measure of the lack of information about an event (Shannon’s ‘paradigm’, generalizing Boltzmann’s definition). Classically, it’s defined as:

 

(3)               s(0) = -kBP(0)Ln[P(0)]

 

Ln(.) is the “natural” logarithm, the reciprocal of the exponential exp(.), i.e. of the elevation of the irrational e = 2.718281828459… to a power. As P(0) always stands between 0 and 1 and Ln(1) = 0, the logarithm of P(0) is always negative, hence the minus sign in (3), for entropy needs to remain a non-negative quantity. kB is Boltzmann’s constant, approximately 1.38 x 10^-23 J/K (Joules per Kelvin). Why should s(0) never turn negative? For always the same reason. If P(0) = 1, the event is certain and (3) gives s(0) = 0: no disorder, no lack of information. If P(0) = 0, the event is impossible (we are sure of this too); zero elevated to the power zero gives 1 by convention (and by graphic behavior), so that P(0)Ln[P(0)] = Ln[P(0)^P(0)] = Ln(1) = 0 and s(0) = 0 again: no more disorder than for P(0) = 1, the situation is just opposite in chances of realization [besides, considering the probability P’(0) = 1 - P(0) of non-realization and applying the entropy formula to it would bring you back to the P’(0) = 1 situation]. For s(0) to turn negative, we would need (as kB is positive) Ln[P(0)^P(0)] > 0, i.e. P(0)^P(0) > 1, which is impossible within the limits [0,1].
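Definition (3), together with the 0·Ln(0) = 0 convention just discussed, is easy to sketch (kB is taken at its exact SI value; the function name is mine):

```python
import math

K_B = 1.380649e-23  # Boltzmann's constant, J/K

def classical_entropy(p):
    """Eq. (3): s(0) = -kB * P(0) * Ln[P(0)], with 0*Ln(0) taken as 0."""
    if p == 0.0:
        return 0.0
    return -K_B * p * math.log(p)

# entropy vanishes at both certainty and impossibility,
# and never turns negative in between
values = [classical_entropy(k / 100) for k in range(101)]
```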

 

It is a general feature of quantum theory that the question of the physical constants arises. In the case of kB, it becomes the amplitude kB(0) = kB of a quantum coefficient kB = [kB(0),kappaB]. Is the quantum state kappaB to be a constant as well or, on the contrary, should we let it vary? I don’t know. I know of no experiment where kB could vary. But we can leave the phase free.

 

The second point I noticed when working on the quantization process was that elementary mathematical functions, defined first in the classical, should be extended to the quantum and not be used as still-classical functions. Hence B139. This is logical after all, as all complex-valued functions of a complex-valued variable develop as:

 

f(phi)[x(ksi)] = f(0)[x(0),ksi]exp{iphi[x(0),ksi]}

 

The amplitude f(0)[x(0),ksi] is not even the initial classically-defined function: that one is re-obtained by fixing the phase ksi of the variable to zero and the phase phi[x(0),ksi], which then takes the particular value phi[x(0),0], to the UV limit 0 or pi (so as to allow both signs). These are quite severe conditions, showing that the set of quantum functions of a quantum variable is much wider than the set of classical functions of a classical variable, making the use of classical functions of quantum variables a dubious extension (I didn’t mention it again in my discussion about Riemann surfaces in B140, but classical functions are still used in that geometrical description as well…).

 

It appears that what seems to be the most proper way to define quantum entropy is to propose the following formula (in polar representation, see B139 example 2 for technical details):

 

(4)               s(sigma) = [s(0),sigma]

(5)               s(0)[P(0)] = kB(0)P(0)|Ln[P(0)]|

(6)               sigma[P(0),PI] = kappaB + PI + Lambdaeta[P(0),PI] + pi

 

In the definition (5) of the amplitude of quantum entropy, I used the classical logarithm of the classical probability P(0). This is not a problem at all, since s(0) and sigma are classical components anyway and it is always possible to recover the amplitude of the quantum logarithm by reversing formula (B139-32):

 

(7)               {Ln[P(0)]}² = {Ln(0)[P(0),PI]}² - PI² >= 0

 

Why not use Ln(0)[P(0),PI]? Because of (B139-36a), which predicts an amplitude s(0)(1,PI) = kBPI for classically sure events. In itself, there is nothing absurd there since, for PI = 0, s(0)(1,0) = 0 as in the classical. But, for PI = pi, P(pi) = -1 would be given a “lack of information” s(0)(1,pi) = kBpi, whereas this event is still considered certain to occur. So, this would actually break the symmetry we have just obtained between positively-counted probabilities and negatively-counted ones, while nothing physical would justify it (on the contrary). That’s why, with the concern of keeping that symmetry, I preferred to slightly modify the classical expression: first, to take the quantum state PI of the probability into account in the expression of quantum entropy and, second, to retrieve the same results as in the classical for P(0) = 0 and P(0) = 1, so as to obtain an s(0) independent of PI. As for the former minus sign in the classical expression (3), it is now included in the quantum state sigma of quantum entropy, since -1 = (1,pi).

 

Consider a quantum system. Then, s(sigma) is a (quantum) measure of its disorder,

 

(8)               s1(sigma)[P(0),PI] = s(0)[P(0)]cos{sigma[P(0),PI]}

 

is a measure of its substantial disorder and

 

(9)               s2(sigma)[P(0),PI] = s(0)[P(0)]sin{sigma[P(0),PI]}

 

a measure of its wavy disorder. Both “projective” measures can now turn negative. As with probabilities, we can understand where this comes from: a phase opposition on the quantum state of entropy. Indeed:

 

(10)           s1(sigma + pi)[P(0),PI] = s(0)[P(0)]cos{sigma[P(0),PI] + pi}

        = -s1(sigma)[P(0),PI]

 

and

 

(11)           s2(sigma + pi)[P(0),PI] = s(0)[P(0)]sin{sigma[P(0),PI] + pi}

        = -s2(sigma)[P(0),PI]
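Definitions (5), (8) and (9), together with the sign flips (10)-(11), can be checked numerically. A sketch (names mine; I leave the quantum state sigma as a free input rather than computing it from (6), which would need B139’s Lambdaeta):

```python
import math

K_B = 1.380649e-23  # amplitude kB(0) of the quantum Boltzmann coefficient

def s0(p0):
    """Eq. (5): amplitude of quantum entropy, kB(0)*P(0)*|Ln P(0)|."""
    return 0.0 if p0 in (0.0, 1.0) else K_B * p0 * abs(math.log(p0))

def s_projections(p0, sigma):
    """Eqs. (8)-(9): 'substantial' and 'wavy' disorder measures."""
    return s0(p0) * math.cos(sigma), s0(p0) * math.sin(sigma)

# phase opposition reverses the sign of both projections, eqs. (10)-(11)
s1, s2 = s_projections(0.3, 0.8)
s1_opp, s2_opp = s_projections(0.3, 0.8 + math.pi)
```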

 

Below is a summary of the most specific situations.

 

a)      Order corresponds, in the classical, to s = 0 which, from (3), is reached only for P = 0 or P = 1. In the quantum, it corresponds to s(0)[P(0)] = 0 which, from (5), is reached for the same values of P(0);

b)      Systems with s = const, non-zero, are classically isentropic and (3) leads to P = const. Systems with s(sigma) = const non-zero, i.e. s(0) = const and sigma = const, are quantum isentropic. However, setting both (5) and (6) to constant values induces a functional relation between P(0) and PI. Now, these two components are independent. So, the final result is again P(0) = const, together with PI = const;

c)      All other entropic situations feature disordered systems;

d)      An anti-disorder is a disorder in phase opposition. As a disorder is measured with a positive entropy, an anti-disorder is to be measured with a negative entropy. But, it’s still a disorder!

 

Following these fundamentals:

 

a)      sigma[P(0),PI] = 0, substantial disorder, non-substantial order;

b)      sigma[P(0),PI] = pi/2, substantial order, non-substantial disorder;

c)      sigma[P(0),PI] = pi, substantial anti-disorder, non-substantial order;

d)      sigma[P(0),PI] = 3pi/2, substantial order, non-substantial anti-disorder.

 

 

Non-reversibility in the quantum

 

We now come to a central point of the general theory of disordered systems: non-reversibility.

 

Classical non-reversibility is expressed by Boltzmann’s “H” theorem which concludes by saying that the entropy of a classical system should always increase with time:

 

(12)           ds(t)/dt > 0

 

The quantum formulation of this “instantaneous variation” is Dt(tau)s(sigma)[t(tau)] and it is obviously a quantum quantity. As such, it carries “no sign, or all of them at the same time”, so that inequalities like Dt(tau)s(sigma)[t(tau)] > 0 or < 0 would be completely meaningless: one can only compare classical numbers. The only relation that holds in the quantum is:

 

(13)           Dt(tau)s(sigma)[t(tau)] = 0

 

and it deals with isentropic quantum systems. When we look at quantum functions of a quantum variable, f(phi)[x(ksi)] = {f(0)[x(0),ksi],phi[x(0),ksi]}, we find no fewer than four variations in place of the single one for classical functions f(x) of a classical variable:

 

-         Dx(0)f(0) = ratio between the differential of the function amplitude and that of the variable amplitude;

-         Dksif(0) = ratio between the differential of the function amplitude and that of the variable quantum state;

-         Dx(0)phi = ratio between the differential of the function quantum state and that of the variable amplitude;

-         and Dksiphi = ratio between the differential of the function quantum state and that of the variable quantum state.

 

In mechanics, for instance, a quantum motion x(ksi)[t(tau)] through quantum space induces four velocities. It reads better in the planar representation:

 

-         Dt1(tau)x1(ksi) = v11(t1,t2), “corpuscular (instantaneous) velocity”;

-         Dt1(tau)x2(ksi) = v12(t1,t2) = instantaneous variation of the “wavy” motion or wavelength reported to that of the “corpuscular time”;

-         Dt2(tau)x1(ksi) = v21(t1,t2) = instantaneous variation of the “corpuscular” motion reported to that of the “wavy time” or period of a signal;

-         and Dt2(tau)x2(ksi) = v22(t1,t2) = instantaneous variation of the wavelength reported to that of the period = group velocity.

 

So, we find the two notions familiar to physicists, the velocity of a “solid”, v11, and the group velocity of a “wavepacket” in a signal, v22, plus two “mixed” or “crossed” velocities, v12 and v21.

 

In the case of quantum entropy, the quantity that interests us is the total variation:

 

(14)           ds(0) = Dt(0)s(0)dt(0) + Dtaus(0)dtau

 

of the amplitude s(0)[t(0),tau] when t(0) varies by an infinitesimal quantity dt(0) and tau by an infinitesimal angle dtau. It is legitimate again, here, to use the classical differential d, as we are dealing with classical quantities only: t(0), tau and s(0)[t(0),tau]. There will be an increase in entropy (and therefore in disorder) when ds(0) > 0, and this will happen for:

 

(15)           Dt(0)s(0)dt(0) > -Dtaus(0)dtau

 

This condition is no longer restrictive, because both dt(0) and dtau can be positive, null or negative. We have no other choice here but to be a bit repetitive, because we have no fewer than eight situations to examine. We will also assume that the derivatives are everywhere regular, so that both Dt(0)s(0) and Dtaus(0) are finite ratios. The ninth situation, dt(0) = 0 and dtau = 0, is of no interest here.
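Before going through the cases, the total variation (14) can be estimated numerically for any sample amplitude. A minimal sketch with central finite differences (the amplitude `s_amp` is a hypothetical example of mine, not a physical result):

```python
import math

def total_differential(s, t0, tau, dt0, dtau, h=1e-6):
    """Eq. (14): ds(0) = D_t(0)s(0) dt(0) + D_tau s(0) dtau,
    with the partial derivatives estimated by central differences."""
    ds_dt0 = (s(t0 + h, tau) - s(t0 - h, tau)) / (2 * h)
    ds_dtau = (s(t0, tau + h) - s(t0, tau - h)) / (2 * h)
    return ds_dt0 * dt0 + ds_dtau * dtau

# a hypothetical, well-behaved amplitude, just to exercise the formula
s_amp = lambda t0, tau: t0 * math.cos(tau) ** 2
```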

 

i) dt(0) > 0, dtau = 0 (i.e. tau = cte):

 

(16)           Dt(0)s(0) > 0

 

says the amplitude of entropy must increase with that of time. Same result as in the classical.

 

ii) dt(0) < 0, dtau = 0:

 

(17)           Dt(0)s(0) < 0

 

As the amplitude of time decreases, that of entropy must increase: increase of disorder as we go back in the past.

 

iii) dt(0) > 0, dtau > 0:

 

(18)           Dt(0)s(0) > -Dtaus(0)dtau/dt(0)

 

is the condition. As the ratio dtau/dt(0) > 0, if Dtaus(0) < 0, then s(0) decreases (resp. increases) with decreasing (resp. increasing) tau and (18) then says that s(0) must meanwhile increase as t(0) increases, or decrease as t(0) decreases, respecting the lower bound -Dtaus(0)dtau/dt(0). If Dtaus(0) > 0, Dt(0)s(0) is only required to remain higher than a negative value, allowing the case Dt(0)s(0) < 0.

 

iv) dt(0) < 0, dtau > 0:

 

(19)           Dt(0)s(0) < -Dtaus(0)dtau/dt(0)

 

same as above, but reversed.

 

v) dt(0) = 0, dtau > 0:

 

(20)           Dtaus(0) > 0

 

wow… at constant time amplitude, as we now have a quantum state, the condition only bears on the variation of s(0) with respect to tau: if tau increases (resp. decreases), so must s(0), no matter what the variation of t(0) is. So, if this holds, it will hold in the future as in the past.

 

vi) dt(0) > 0, dtau < 0:

 

(21)           Dt(0)s(0) > -Dtaus(0)dtau/dt(0)

 

hm… same as iii), except that the sign of Dtaus(0) is reversed.

 

vii) dt(0) < 0, dtau < 0:

 

(22)           Dt(0)s(0) < -Dtaus(0)dtau/dt(0)

 

same as iv).

 

viii) dt(0) = 0, dtau < 0:

 

(23)           Dtaus(0) < 0

 

or v) reversed.

 

These are the eight conditions to fulfill for an increase of disorder in the quantum. An increase of order, ds(0) < 0, would lead to eight symmetric conditions.
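All eight cases (16)-(23) are instances of a single classical inequality on the components, which can be packed into one predicate (names mine, illustrative):

```python
def disorder_increases(d_t0_s0, d_tau_s0, dt0, dtau):
    """ds(0) > 0, i.e. eq. (15) in general form: the eight situations
    i)-viii) are just the sign patterns of (dt0, dtau) in this inequality."""
    return d_t0_s0 * dt0 + d_tau_s0 * dtau > 0

# case i): dt(0) > 0, dtau = 0 reduces to D_t(0)s(0) > 0, as in (16)
case_i = disorder_increases(2.0, -7.0, 1e-3, 0.0)
# case v): dt(0) = 0, dtau > 0 reduces to D_tau s(0) > 0, as in (20)
case_v = disorder_increases(-7.0, 2.0, 0.0, 1e-3)
```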

 

If you see anything really restrictive in these conditions, whether on the setting of disorder or of order, thanks in advance for showing me; I would have missed it…

 

(you can leave comments at the end of the articles)

 

Well, folks, since what we do not see is not forbidden, it leaves… space… (and a large one) for self-regenerative systems… :)

 

Indeed, for a system to regenerate by itself, it suffices that, after increasing its internal disorder, it is able to reorder things. Apparently, this does not contradict quantum thermodynamics at all. Notice in passing that an increase (resp. decrease) of disorder in the future does not thereby mean an increase (resp. decrease) of order in the past: you can check, for instance, that between iii) and iv), there is no time reversal in the entropic behavior, precisely because of the presence of Dtaus(0), which can take both signs in each case. As a result, reversing time while maintaining the same variation for its quantum state does not, for all that, reverse the process.

 

This may be the best indicator that self-regeneration can be taken seriously in the quantum. You can now find systems whose degree of disorder will increase [ds(0) > 0] for a certain amount of time t(0), then decrease [ds(0) < 0] for another amount of time, while still pointing towards the future!

 

Nobody talks here about identically rebuilding a body: self-regeneration has never been about this, such an idea is an “extrapolation” of it… :) In mechanics, a self-regenerative system is basically a system able to renew the amount of energy it lost: call H the total energy of a mechanical system; if the system dissipates part of its energy, dH/dt < 0 (the amount of energy decreases as time increases), and a regeneration would bring energy in (dH/dt > 0) so that the system retrieves its initial amount. This is the case for all the “living”, by the way, from the biological cell and the unicellular body up to evolved mammals: they spend energy and renew it by feeding… They don’t need to travel back in time to get back the energy they lost… :))

 

Here, it is the same, except that it is now allowed even to inert substance.

 

 

B 140: PROGRESSIONS/REGRESSIONS IN THE QUANTUM

Le 15/12/2017

There is an extremely important and general feature of the quantum environment I would like to discuss in this bidouille, and I am really sorry I can’t make any drawing that would, or wouldn’t, be readable to everybody, because visualizing things would make it much easier and more direct to understand. It is about progressions and regressions in the quantum world. There is a general picture we all need to understand (me included) in order to be able to reason meaningfully.

 

Consider three classical values x-(0), x(0) and x+(0) such that 0 =< x-(0) < x(0) < x+(0). In a linear progression, they are graphically represented as succeeding points along a straight line: x-(0), being the smallest value, is the first point, then comes x(0) and finally x+(0). In a non-linear progression, these three points still succeed one another (as it is a progression), but stand on a curve instead of a straight line. If we now take x(0) as a reference point, then we usually consider that x-(0) comes “before”, or “precedes”, x(0) and that x+(0) comes “after”, or “succeeds”, x(0). This is how we build “arrows” in physics: from a numerical ordering of values. When these points represent positions in space and x(0) is used as an observer’s location, this observer considers that x-(0) stands “behind” him/her and x+(0) “ahead” of him/her: our observer orientates distances around him/her, ordering space. When these points represent instants in time and x(0) is used as the “present”, then x-(0) is assumed to be “in the past of x(0)” and x+(0) in its “future”: same ordering, same notion of an “orientation” that gives birth to an “arrow of time”, a “succession of instants” or, else, a “time flow”. This is the classical picture, where comparisons hold: “greater than or equal to, >=”, “smaller than or equal to, =<”, “strictly greater than, >” and “strictly smaller than, <”.

 

How about in the quantum?

 

In the quantum, a classical point x(0) becomes a quantum point x(ksi) = [x(0),ksi] in the polar representation, or [x1(ksi),x2(ksi)] in the planar one. This means that both x(0) and ksi, or x1(ksi) and x2(ksi), are fixed in the classical plane. If we now let ksi vary freely, the continuous family of quantum points x(ksi) draws a circle in the classical plane with radius x(0), which neither x1(ksi) nor x2(ksi) can exceed anyway. Physically, again, it means the classical value x(0) is also the highest value both projections can reach.

 

If we now take two distinct quantum points x(ksi) = [x(0),ksi] and x+(ksi) = [x+(0),ksi] with the same inclination ksi in the classical plane, these points will stand on a line inclined by an angle ksi with respect to the “corpuscular” axis and, if we set x+(0) > x(0) >= 0, then x+(ksi) will come after x(ksi), as it will be further away from the origin (0,ksi). When this happens, we can say x+(ksi) succeeds x(ksi). If we take a third quantum point x-(ksi) = [x-(0),ksi] such that 0 =< x-(0) < x(0), then x-(ksi) will stand on the same line and precede x(ksi).

 

What about if we take two quantum points x(ksi) = [x(0),ksi] and x+(ksi+) = [x+(0),ksi+] with still x+(0) > x(0) >= 0, but different inclinations ksi and ksi+? What can we say about x+(ksi+) regarding x(ksi)?

 

Distances to a given origin in the classical plane are determined by amplitudes: x(0) is the distance of x(ksi) to that origin and x+(0) is that of x+(ksi+), no matter their quantum states. Indeed, if you take ksi and ksi+ into account, you can find x1(ksi) smaller or larger than x1+(ksi+), and the same for x2(ksi) compared to x2+(ksi+), as long as the equalities:

 

(1)               [x1(ksi)]² + [x2(ksi)]² = [x(0)]²

(2)               [x1+(ksi+)]² + [x2+(ksi+)]² = [x+(0)]²

 

and the inequality,

 

(3)               0 =< x(0) < x+(0)

 

are respected. So, in the end, it all comes back to amplitudes, and this raises an important question:

 

What is more significant in the quantum, the notion of points or that of circles?

 

With the notion of points, we work on fixed amplitudes and fixed quantum states and we go from one fixed pair to another. Then, x(ksi) represents the fixed pair [x(0),ksi].

 

With the notion of circles, we work on fixed amplitudes, but free quantum states and we go from one circle with a fixed radius to another. Then, x(ksi) represents the circle (1) above.

 

If x(0) remains the only meaningful distance to an origin and ksi “only” serves to orient a point with respect to a pair of axes intersecting at that origin, then it appears much better to use circles instead of points, because, on a circle centered on the origin of a 2D frame, all points stand equally at a distance x(0) from that origin, so that all quantum states are included instead of a single one.
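The amplitude criterion behind eqs. (1)-(3) can be sketched as follows: two points with the same amplitude x(0) lie on the same circle, whatever their quantum states, even though their individual projections compare differently (helper names mine):

```python
import math

def planar(x0, ksi):
    """Planar components [x1(ksi), x2(ksi)] of the quantum point [x(0), ksi]."""
    return x0 * math.cos(ksi), x0 * math.sin(ksi)

def amplitude(x1, x2):
    """Distance to the universal origin, as in eq. (1)."""
    return math.hypot(x1, x2)

# two points on the same circle of radius x(0) = 2.0: equal amplitudes,
# yet their corpuscular and wavy projections order in opposite ways
a = planar(2.0, 0.3)
b = planar(2.0, 1.2)
```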

 

But this also means we have to move from a topology of points to one of circles, which are no longer sizeless geometrical objects, since their size is given by x(0)… Their size, not their length: their length is given by 2pi x(0) and their area by pi[x(0)]² (or, more correctly, that is the area of the disk delimited by the circle).

 

The size of a circle is given by the amplitude of the quantum grandeur related to it.

 

Using that new topology, we can extend our previous three classical values x-(0), x(0) and x+(0) into three circles x-(ksi-), x(ksi) and x+(ksi+) with totally independent quantum states. The former inequalities 0 =< x-(0) < x(0) < x+(0) will then enable us to assert that the quantum circle x-(ksi-) is the smallest of all three, then comes x(ksi) and, finally, x+(ksi+) is the largest one. All three circles, once centered on the origin of the frame, will be nested: x-(ksi-) -> x(ksi) -> x+(ksi+), thanks to the classical ordering of their radii. The smallest circle possible is the circle with zero radius, which identifies with a point.

 

So, already, as amplitudes can never turn negative, you retrieve the fact that nothing can be smaller than a point, that no size can be negatively counted, so that you will have to modify your conception of “successions / precedings”, as you can no longer take zero as a reference point, only as a universal reference:

 

As we can no longer translate the origin anywhere we want, as in the classical, the quantum zero is no longer a relative value, but becomes an “absolute” one (anew?).

 

There’s only one zero in the quantum and no negative sizes, nothing smaller than it.

 

Despite this apparently drastic reduction, we can still rebuild a notion of “successions” and “precedings” in the following way.

 

Classical 1D space is a straight line; it goes from -oo (infinity) to +oo and is therefore unlimited at both ends.

 

Quantum space as a whole is also 1D and unlimited, but in size: its radius is infinite. Now, we need to understand that this last dimension is also a quantum one. And, as we defined quantum quantities as pairs of classical ones, this quantum dimension corresponds to two classical ones. So, geometrically, the picture is not that of a classical plane, as generally assumed, but that of an unlimited circle: the whole quantum space is naturally cyclic and the circle, defined as a closed line, is a 1D object only. But it is an object still inscribed in a 2D classical plane, hence the correspondence between classical dimension 2 and quantum dimension 1: each quantum dimension is equivalent, in idea, to two classical dimensions. So, when you go from the quantum down to the classical, you multiply dimensions by two and, when you extend the classical to the quantum, you divide dimensions by two.

 

However, the fact that the geometry of quantum space is cyclic completely changes the properties of classical space. In classical space, once you start from a point taken as the origin, as long as you progress, you can always go further and further away from your departure point, in space as in time, whether you move deeper into space or into the future, or deeper into space (opposite direction) or into the past. Mechanically, you can always go backwards, but you need to make a U-turn. In quantum space, you go round. It means that, at least in principle, you will always come back to your point of departure after “going round the quantum universe”. If this has no particular consequence for space motion, apart from sparing you the U-turn, it does have one for time motion, because it means that, still in principle, you can move into the “future”, then “go back to the present through the past”… which, set in those terms, doesn’t mean a lot… :|

 

This is because we still reasoned on the basis of motions from points to points. At best, it leads to the disappearance of the notions of “time arrow” and “space orientation”; at worst, it leads to absurdities and paradoxes: how could you reach the present back by always going deeper into the future, round the quantum universe?... Where is the past in that picture?... :|

 

Instead, if we accept to reason with circles in place of points, we retrieve consistent reasonings, at the cost of more technical difficulties.

 

If our three circles x-(ksi-), x(ksi) and x+(ksi+) represent “quantum distances” and x(ksi) is taken as the “reference position”, then x-(ksi-), being smaller than x(ksi), will stand “behind” it, while x+(ksi+), being larger than x(ksi), will stand “ahead” of it, no matter the positions occupied on these circles. Instead of a point ordering, we now have this two-way correspondence:

 

0 =< x-(0) < x(0) < x+(0)  <=>  “x-(ksi-) smaller than x(ksi), itself smaller than x+(ksi+)”

 

We could apply this to a “quantum time” as well. However, the B141 theorem shows that this is not even necessary, as it suffices to pair a space variable with a time one to replace any space-time with a continuous family of spaces-only. See the example there. It easily generalizes to any space-time with p space variables and q time ones, with p > q: we make q pairings to eliminate the q time variables, which gives q angles serving as as many continuous parameters in a family of (p+q)-dimensional spaces-only…

 

If, now, for purely conceptual reasons, we would prefer to keep a notion of time in the quantum, the same two-way correspondence as above applies, now specifically written:

 

0 =< t-(0) < t(0) < t+(0)  <=>  “t-(tau-) in the quantum past of the quantum present t(tau), itself

in the quantum past of the quantum future t+(tau+)”

 

From what was previously seen, you can now see that:

 

The “furthest quantum past” is zero, the universal time origin, and every other quantum moment is located in its quantum future.

 

We now retrieve the notion of a “universal space origin” and that of a “universal time origin”. Neither of them is subject to relativization. So much the better, actually, because it places relativism within the observer’s choice, not as an inherent physical property of the world…

 

Take all observers away from the frame; what remains is a comparison of sizes, not orientations, not “arrows”…

 

 

An interesting remark now.

 

It is a very general feature, common to both the classical and the quantum, that the dynamical notion of a motion can be equivalently represented either as bodies moving in a fixed frame or as bodies occupying fixed positions in a moving frame.

 

And this holds for space as for time. I can assume I’m moving inside fixed 3D space and even fixed time, both fixed “once and for all”, i.e. from the birth of the universe, or I can equivalently assume that I make no motion on my own but, instead, both space and time move around me, carrying bodies with them. The result will be the same, but the geometrical picture will be completely different.

 

In fixed frames, you move from one location to another, in either a continuous or a discontinuous way (“jumps”), and from one instant to another: x-(0) -> x(0) -> x+(0) or t-(0) -> t(0) -> t+(0). It is a succession of locations and moments and you explore them one after the other. In this picture, the frame is passive and you are the actor. This is generally what we mean by “motions”.

 

In moving frames, you stand still, you are steady, and the frame inflates [x-(0) -> x(0) -> x+(0)] or deflates [x+(0) -> x(0) -> x-(0)]. You can also occupy a steady position [x(0) -> x(0)], whether permanently or for a certain duration only. This second picture is now about scalings and rescalings of both space and time. Starting from (idealistically) 0, a first inflation brings space to a size x-(0) > 0. As you stand on a circle with radius x-(0), you will be brought from the initial position zero to the position x-(0). The angle ksi (the quantum state space is in - notice: space, not you) can then refine that positioning of you (this time), even though you made no action to displace yourself from one location to another: you are now the one to remain passive. If space deflates by the same amount, you will be back to zero. If it inflates again, you will be brought to a new location x(0) > x-(0). And so on.

 

The same holds with time. It is even a much better representation of “time flow” than trying to represent it as a “passive time” the traveler would “visit” one instant after the other. In a moving time, the traveler visits nothing; he/she just lets himself/herself be “carried”, “transported” by time inflations and deflations, changes of scale. Time flows… :)

 

Finally, in a moving quantum frame, in addition to that common (re)scaling of amplitudes (radii), we find a rotation of the frame in the quantum states. Indeed, if you are found at a “quantum position” [x(0),ksi] in numerical values and, later on, at a position [x(0),ksi’], with ksi’ different from ksi, space around you did not move away but rotated by an angle ksi’ - ksi. As a result, you still moved, even if you did not actually go further away.

 

Regarding, now, a classical system of axes, one “corpuscular” (“horizontal”) and one “wavy” (“vertical”), a rotation of space by pi/2 permutes these axes: the corpuscular becomes wavy and the wavy, corpuscular. A rotation by pi will turn these axes upside down: the “positive” becomes the “negative” and the “negative”, the “positive”, but the corpuscular remains corpuscular and the wavy remains wavy.

 

So, as you can see, this is all purely conventional, in the end… :) In the quantum, there simply can’t be anything like “the substantial”, the “non-substantial”, the “positive” or the “negative”. It simply has no sense… (if I may use that metaphor…)

 

If you combine both, inflation/deflation with rotation, you obtain a motion known in mathematics as a similitude. You’ll then give an observer reasoning in a fixed frame the perception that you went from point [x(0),ksi] to another point [x’(0),ksi’]. In reality, you didn’t move, the frame did it for you. But this is ultimately irrelevant, because the result is the same… :)
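In polar terms, a similitude is just “scale the amplitude, shift the quantum state”, which in the planar picture amounts to multiplying by scale·e^(i·angle). A minimal sketch (names mine):

```python
import math

def similitude(point, scale, angle):
    """Apply a similitude (inflation/deflation + rotation) to the
    quantum point [x(0), ksi] given in polar representation."""
    x0, ksi = point
    return x0 * scale, (ksi + angle) % (2 * math.pi)

# a fixed-frame observer sees [2, pi/6] move to [3, 2pi/3] when the
# frame inflates by 1.5 and rotates by pi/2
moved = similitude((2.0, math.pi / 6), 1.5, math.pi / 2)
```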

 

Or is it really? Because, now, the geometrical picture is radically different.

 

In a fixed frame, a quantum motion x(ksi)[t(tau)] = {x(0)[t(0),tau],ksi[t(0),tau]} is graphically represented as a Riemann surface. In other words, the quantum curve x(ksi)[t(tau)] corresponds to a pair of classical surfaces, making that “Riemann surface”. To each classical instant t(0) and each quantum state tau of quantum time corresponds a location x(0)[t(0),tau] with quantum state ksi[t(0),tau] on that surface. The picture in itself is interesting, giving a time surface developing through some 2n-dimensional space, where n is the number of classical dimensions, but is it the truly quantum picture? No: the truly quantum one is represented as x(ksi)[t(tau)]; it is a quantum time curve developing in n-dimensional quantum space, where n is now the number of quantum dimensions…

 

In a moving frame, x(ksi)[t(tau)] represents instead a “space circle as a function of a time one”. Both families of circles are centered at the same “universal” origin (0,0), the only point in quantum space. What happens now is this: as the time radius t(0) of t(tau) increases, there is a corresponding space radius x(0). The set of all radii t(0) between t(0) = 0 and some final instant t(0) = T(0) (which can be pushed to infinity as well) makes a dense family of concentric time circles, all centered at (0,0). So does the set of all corresponding radii x(0) between the initial and final instants. Consequently, if we draw a system of two perpendicular planes on a sheet of paper, one horizontal and one vertical, then on the horizontal plane you will see growing concentric time circles and, on the vertical plane, the corresponding concentric space circles. There is no “motion” whatsoever in the space surrounded by these two planes. The “motion”, i.e. the succession of expansions and contractions, entirely occurs in the vertical space plane.

 

Let’s take an example to illustrate this geometrical difference and close this article.

 

Consider the quantum parabolic motion x(ksi)[t(tau)] = ½ [t(tau)]².

 

In a fixed frame, it corresponds to the Riemann surface {x(0) = ½ [t(0)]² , ksi = 2tau}. Quantum objects submitted to this law will move on that surface, starting at (conventional!) t(0) = 0, at {x(0) = 0, ksi = 2tau}. So, first, even at the departure point, you still need to specify the initial quantum state tau in order to be able to deduce ksi. Then, for each value of tau, your body will trace a parabola in (classical!) space: x(0) = ½ [t(0)]². So, even for a portion only of the angles, say 0 =< tau =< TAU =< 2pi, you will get a dense sheaf of parabolas, one for each value of tau. And, as tau is periodic, when it reaches pi radians, ksi reaches 2pi radians and we are back to the parabola at ksi = 0. This is the geometrical picture you get of this particular motion when assuming that bodies move in a fixed frame.
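The fixed-frame reading of this example, amplitude x(0) = ½ t(0)² and quantum state ksi = 2 tau, is easy to sketch (function name mine):

```python
def parabolic_motion(t0, tau):
    """Riemann-surface components of x(ksi)[t(tau)] = 1/2 [t(tau)]^2:
    amplitude x(0) = 1/2 t(0)^2 and quantum state ksi = 2*tau."""
    return 0.5 * t0 ** 2, 2.0 * tau

# doubling the time radius quadruples the space radius: space inflates
# much quicker than time, following the square law
x_small = parabolic_motion(1.0, 0.25)
x_large = parabolic_motion(2.0, 0.25)
```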

 

In a moving frame, you find concentric time circles, all centered at t(0) = 0, and concentric space circles, all centered at x(0) = 0. As the time radii grow, so do the space ones, following a square law. You no longer need to worry about knowing, at each step, what the quantum states of time and space are. If tau is given to you, then you can add to your knowledge of the quantum motion that the quantum state on the corresponding space circle will be twice that value. But it is no longer necessary to understand the motion. All we need to know is that, in a time inflation, we have a quadratic (or parabolic) space inflation and, in a time deflation, a parabolic space deflation, so that space inflates or deflates much quicker than time.

 

N.B.1: in the example given, physical units have to be restored.

N.B.2: in the case of a much more general motion like

 

x(ksi)[t(tau)] - x0(ksi0) = ½ a(alpha)[t(tau)]² + b(beta)t(tau) + c(khi)

 

space and time origins can always be translated back to (0,0) using shifts

 

t’(tau’) = t(tau) - t0(tau0)

x’(ksi’) = x(ksi)[t’(tau’)] - x0(ksi0)

 

where t0(tau0) is given by the timeless coefficients a(alpha), b(beta) and c(khi). So it changes nothing for our purpose.

 

 
