doclabidouille

B121: NC DYNAMICS - 2

11/08/2015

Before going further with dynamics, we first need a bit more geometry. To keep the link with physics, we’re going to work in dimension d = 4. All the results established hereunder will still hold in dimension d.

We begin by noticing that, if xi (i = 1,2,3,4) is a point of M4 (endowed with the structure of an affine space-time), then xij = (xi)j is a point of M16. Indeed, we can always identify M4 with, say, the j = 4 component M44 of M16 = M41xM42xM43xM44. The four M4j have the same dimension 3+1. As M16 can be rendered isomorphic to M4(R) (up to certain restrictions we will see examples of below), the set (xij)i,j=1,2,3,4 will represent a point of M4(R), endowed with an affine structure (as a 16D vector space-time). Now take xi = (0,0,0,x4): it's equivalent to a (here time-like) scalar on M4. To this scalar will then correspond the 4-vector (x4)j = x4j of M16. We now have a two-way correspondence between M16 and M4(R). We therefore deduce that:

 

A NON-COMMUTATIVE SCALAR ON M4(R) IS FORMALLY EQUIVALENT TO A COMMUTATIVE 4-VECTOR (WITH REAL-VALUED COORDINATES).

 

The physical consequence of this is important. It means that there is no "scalar" in the non-commutative sense the way there is in the commutative one: every commutative scalar (C-scalar, for short) becomes one of the four components of a still-commutative 4-vector, which then sends back to the appropriate non-commutative scalar (NC-scalar).

There is also a purely algebraic reason for this. Take two non-zero real numbers a and b: these are real-valued C-scalars. The two products ab and ba are again non-zero real numbers, so they belong to the same set as their factors. Moreover, whatever the values of a and b, you'll always have ab = ba: the set of real numbers is commutative and so are the vector and affine spaces built on it.

Now take two non-identically-zero real-valued NC-scalars xi and yj. Each of them is an element of R4. Since i and j are strictly positive integers, xi and yj are also functions of these indices: xi = x(i) and yj = y(j). So, we can also see them as two C-scalar functions of the indices. We will obviously keep on having xiyj = x(i)y(j) = yjxi = y(j)x(i), but we won't have xiyj = xjyi unless x = y, i.e. unless the two C-scalar functions are identical.

As both xi and yj are NC-scalars, this property justifies the name "non-commutative": the product of two NC-scalars makes an asymmetric matrix. While neither xiyj nor xjyi belongs to R4 (they are matrices), xi, yj and their products all belong to M4(R).
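
To make the point concrete, here is a minimal numerical sketch (my own illustration, with arbitrary test vectors, not something from the original argument): the component-wise products commute, but the matrices (xiyj) and (xjyi) only coincide when the two NC-scalars come from the same C-scalar function.

```python
import numpy as np

# Two "NC-scalars" x_i and y_j, i.e. ordinary 4-component real arrays.
x = np.array([1.0, 2.0, 0.5, -1.0])
y = np.array([0.3, -2.0, 1.5, 4.0])

xy = np.outer(x, y)   # matrix with entries x_i y_j
yx = np.outer(y, x)   # matrix with entries y_i x_j = x_j y_i (the transpose)

# Each component product commutes as a C-scalar: x_i y_j = y_j x_i.
print(np.allclose(xy, yx.T))                          # True

# But as elements of M4(R), (x_i y_j) and (x_j y_i) differ in general:
print(np.allclose(xy, xy.T))                          # False for x != y

# ...and coincide when x = y:
print(np.allclose(np.outer(x, x), np.outer(x, x).T))  # True
```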

 

Step 2. Take xi a NC-scalar. The 4-plet (xi)j = xij becomes a NC-4-vector on M4(R): xij is the i-th coordinate of a point on Mj.

 

A NC-VECTOR ON M4(R) IS FORMALLY EQUIVALENT TO A C-TENSOR OF ORDER 2 WITH REAL-VALUED COEFFICIENTS.

IT’S ALSO EQUIVALENT TO A REAL-VALUED C-VECTOR IN DIMENSION d² = 16.

 

Generalization is easy and we quickly get the following result:

 

ANY NC-TENSOR OF ORDER n ON M4(R) IS FORMALLY EQUIVALENT TO A C-TENSOR OF ORDER 2n OR TO A C-TENSOR OF ORDER n IN DIMENSION 16.

 

We should however be careful when carrying the correspondence between M16 and M4(R) over and trying to identify objects of the two space-times. Here is a typical example.

Let (D) be a C-line on M4. (D) has (inhomogeneous) equation aixi + b = 0, with ai and b real-valued coefficients. It’s an object of C-dimension 1.

Let now (D') be a C-line on M16. Choosing new major indices I running from 1 to 16, the equation for (D') can be written aIxI + b = 0. Then, identifying I with the pair of indices (ij), we would find aijxij + b = 0.

 

This is not the equation for a NC-line on M4(R).

 

The correct equation for a NC-line (D”) on M4(R) is:

 

(1)               aijkxkj + bi = 0

 

Reason n°1: b is not a NC-scalar; n°2: the matrix product of a and x is aijkxkj, not aijkxjk. As said above, xij = (xi)j is the i-th coordinate of a point on Mj, whereas xji = (xj)i is the j-th coordinate of the same point, but on Mi. So, there's absolutely no reason why we should have xij = xji when i ≠ j. Commutatively, (1) doesn't even give a 4-plet of C-lines on M16, since this 4-plet would write aiJxJ + bi = aijkxjk + bi = 0.

 

A NC-LINE (D”) ON M4(R) HAS C-EQUIVALENTS IFF aijk = aikj IN (1). IN THIS CASE, (D”) CAN BE MADE EQUIVALENT TO A SET OF 4 C-LINES IN M16.

 

And still: 1) I wouldn’t try, for properties of M16 and of M4(R) are completely different and 2) it’s useless… It’s useless, because we don’t have a 16D space-time, but a 4D-space-time in 4 states: each M4i (i = 1,2,3,4) is a state of M4.

 

To compute distances or lengths, areas, volumes, etc. we need differential forms. Before talking about them, we first consider the coordinate matrix X = (xij)i,j=1,2,3,4. It's a coordinate system on M4(R). Since it has C-dimension 16, it has NC-dimension 4. The set of real numbers R being a field of characteristic zero, the algebra M4(R) is naturally endowed with the structure of a real-valued vector space and can then be associated with an affine space. This means that X represents a point on M4(R):

 

ANY POINT ON M4(R) HAS 16 C-COORDINATES AND 4 NC-COORDINATES.
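
A short sketch of this two-way correspondence (my own illustration; the variable names are mine): the same point read either as the 16 C-coordinates of M16 or as the 4x4 coordinate matrix X = (xij) of M4(R).

```python
import numpy as np

# 16 C-coordinates x_I, I = 1..16: a point of M16.
x_flat = np.arange(16, dtype=float)

# The same point as the coordinate matrix X = (x_ij) on M4(R): 4 NC-coordinates,
# each of which is itself a 4-plet.
X = x_flat.reshape(4, 4)

# The j-th column collects the coordinates of the point in the "state" M4^j.
for j in range(4):
    print(f"M4^{j+1}:", X[:, j])

# The correspondence is two-way (lossless):
assert np.allclose(X.reshape(-1), x_flat)
```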

 

We’ll continue tomorrow.

 

 

B120: NON-COMMUTATIVE DYNAMICS - 1

08/08/2015

Excellent. All ingredients I’m gonna need today are contained in B114 and the end of B116, where I wrote: “it’s light-years away from our practical considerations”. Well, I could have been a bit too pessimistic…

There's a connection between what I've done in B114 and a much older work, dating from 2007, on the unification of fundamental interactions, that might have unsuspected impacts on our purpose.

Non-physicists should understand that most of the arguments used at our scales must find justifications at much lower scales. Thus, all so-called "phenomenological theories" of physics find their justifications in quantum theory and the microscopic. So, setting foundational problems aside, investigating the structures of the quantum is anything but a waste of time and reveals mechanisms that will lead to macroscopic behaviours.

Now, back to this 2007 work. It was about building a unified model of the 4 known gauge interactions in a 16D space-time, with 12 space-like dimensions and 4 time-like ones. The guide, at that time, was the Kaluza model (without the Klein hypothesis). In this context, I showed how a system of 16 local coordinates could give back the correct number of gauge potentials and how they could couple to each other.

The connection with B116 appears when we take SU(3,1) as the isospin group. As I said, the presence of a 4-state particle sends back to tensor coordinates xij in place of the vector ones xi. As indices run from 1 to 4, there are indeed 16 coordinates.

There are two ways of building such a 16D space-time: either as the tensor product M4+⊗M4- of two 4D Minkowski space-times, or as the Euclidean product M41xM42xM43xM44 of four 4D Minkowski space-times. In physics, the tensor product generally stands for non-interacting space-times, whereas the Euclidean product indicates interacting (coupled) space-times. So, still from the physical point of view, it's mathematically equivalent to consider a 2-state space-time with independent components or a 4-state space-time with coupled components. If we take xi+ and xi- as local coordinates on M4+ and M4-, respectively, we should expect their product xi+xj- to stand for a local coordinate system on M4+⊗M4-. Now, this product is in m², not in meters. So, if we do that, we have a problem of physical units. To preserve them, we have no other solution than to introduce tensor coordinates xij (in meters) such that:

 

(1)               xi+xj- = xikxkj

 

This gives a first indication of a 4D non-commutative space-time, as (1) can easily extend to squared distances xikxkj that no longer decompose into a product ("irreducible tensor coordinate systems"). In terms of space-times, we now consider space-times that are still 4D, but in a "non-commutative" sense, and that do not necessarily reduce to the tensor product of two commutative 4D space-times. Such space-times are vector spaces over the algebra M3,1(R) of pseudo-Euclidean 4x4 matrices with real coefficients. M3,1(R) is finite-dimensional, of dimension (3+1)² = 16. It's precisely isomorphic to the 16D space-time above.

All these isomorphisms actually enable us to introduce the notion of non-commutative dimension without ambiguity. We are used to taking for the dimension of a space a scalar quantity d. This is a commutative definition of the dimension. So, when we say that a non-commutative space such as M3,1(R) "has dimension 16", we actually make a correspondence between this space and a commutative space with the same number of dimensions (and therefore isomorphic to it): a space with local coordinates yI (I = 1,…,16). In mathematics, this can prove interesting. In physics, it's not, unless we find some justification for these 12 additional dimensions.

To avoid this difficulty, I prefer to introduce the notion of non-commutative dimension. I also think it's better suited to the non-commutativity of M3,1(R). If we now say that M3,1(R) "has non-commutative dimension 3+1", we then deduce that:

 

1 NON-COMMUTATIVE DIMENSION ≡ 4 COMMUTATIVE DIMENSIONS

 

(≡: formally equivalent to). Indeed, we have xij = (xi)j, so that going from xi to xij amounts to going from a scalar quantity to a vector one. This also holds for the dimension, which becomes a vector quantity. Generally, a vector dimension d = (d1,…,dn) is associated with a set of n vector spaces Vi, each of (commutative) dimension di. For n = 4, we recover the set of 4 Minkowski space-times M4i of the Euclidean product above. There, d1 = d2 = d3 = d4 = 3+1 and we can replace our 16D unified space-time with a non-commutative space-time M3,1(R) with a non-commutative (= vector) dimension 3+1.

Physically, this has an important consequence, since a non-commutative line becomes formally equivalent to a commutative 4-volume:

 

NON-COMMUTATIVE LINE º COMMUTATIVE SPACE-TIME VOLUME

 

In comparison, superstring theory proposes to replace a commutative line with a (still) commutative surface…

 

See? The physical justification of such frames that can be used at all scales lies, as usual, in relativistic quantum field theory. Even if this theory should hold only at the microscopic level, it introduces new notions that enable us to define new quantities and extend the commutative properties of macroscopic physics… :)

 

We can now turn back to the classical dynamics of macroscopic bodies. A trajectory in the non-commutative Euclidean space M3(R) is a function xij(tk). Indices still run from 1 to 4.

Shall I again justify this?

We use an interacting Yang-Mills boson model with gauge group SU(3,1). The particle current is a 3-tensor pijk = ½ iħ(ψk∂iψj* - ψ*k∂iψj). It's a density. To it corresponds an energy-momentum tensor Pijk = pijk/(ψlψ*l) = mvijk, since all states represent the same particle of rest mass m. Therefore, vijk is the velocity tensor in the classical sense, with vijk = ∂xij/∂tk. We have x0j = ctj, so that v0jk = cδjk.

Consequently, xij(tk) is no longer a commutative curve (a single time parameter), but a tensor field over a 4D Euclidean time hypervolume. So, it's a non-commutative curve, developing in non-commutative time.

It’s easier to use the traditional definition of the surface element. Whatever the gauge group, the Lagrangian density of a quantum interacting system remains a scalar quantity and so does the surface element ds². For SU(3,1), we thus have:

 

(2)               ds² = dxijdxji = c²dtjdtj - dxj.dxj = c²dtjdtj(1 – vjkvjk/c²)

 

with each of the dxj a 3-vector (j = 1,2,3,4) and vjk = dxj/dtk the corresponding velocity 3-vector. This makes vjk a matrix (or a tensor, whatever) of 3-vectors. dtjdtj is the Euclidean square of the 4-component time vector dtj. vjkvjk is the trace (sum of diagonal terms) of the square of the matrix vjk. The metric used to raise and lower indices is the usual Minkowski metric tensor gij. (2) is a matrix generalization of ds² = c²dt² - dx.dx in commutative M4.
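
As a rough numerical reading of (2) (a sketch under my own simplifying choices: arbitrary values, c = 1, and vjk stored as a 4x4 array of 3-vectors whose entries are contracted with the ordinary dot product):

```python
import numpy as np

c = 1.0
rng = np.random.default_rng(0)

dt = rng.normal(size=4) * 1e-3          # time 4-vector dt_j (Euclidean square below)
v = rng.normal(size=(4, 4, 3)) * 0.1    # v_jk = dx_j/dt_k, each entry a 3-vector

# trace of the square of the matrix v_jk, 3-vector entries dotted together
vv = np.einsum('jka,kja->', v, v)

dt2 = np.dot(dt, dt)                    # dt_j dt_j
ds2 = c**2 * dt2 * (1.0 - vv / c**2)    # right-hand side of (2)
print(ds2)
```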

It follows that the free Galilean motion of a rigid body of mass m can be described through the Lagrange function:

 

(3)               Lkin = ½ mvjkvjk

 

It should not be difficult to convince yourself that the Lagrange equations of motion are:

 

(4)               (∂/∂tk)(∂L/∂vjk) = ∂L/∂xj

 

Applied to (3), this leads to:

 

(5)               ∂pjk/∂tk = 0  ,  pjk = mvjk = mdxj/dtk

 

For m = const., we get ∂²xj/∂tk∂tk = 0, with general solution:

 

(6)               xj(tk) = Kj/(tktk) + ½ ajkltktl + bjktk + cj  ,  ajkk = Tr(aj) = 0  (j = 1,2,3,4)

 

where Kj, ajkl, bjk and cj are constants, with the aj traceless. As the square tktk is Euclidean, Kj/(tktk) is the Newtonian behaviour in Euclidean dimension 4. The three other terms describe confinement.
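
A quick numerical sketch of (6) and (7) for one fixed component j (the constants below are arbitrary test values of mine, with a_kl chosen traceless and b_k set to zero):

```python
import numpy as np

K = 1.0
a = np.array([[0.0, 0.2, 0.0, 0.1],
              [0.2, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.3],
              [0.1, 0.0, 0.3, 0.0]])     # a_kl, traceless
b = np.zeros(4)                          # b_k
c0 = 0.0                                 # c_j

def x_of_t(t):
    """x_j(t_k) = K/(t_k t_k) + 1/2 a_kl t_k t_l + b_k t_k + c_j   -- eq. (6)"""
    tt = np.dot(t, t)
    return K / tt + 0.5 * t @ a @ t + b @ t + c0

def v_of_t(t):
    """v_jk(t_l) = -2K t_k/(t_l t_l)^2 + a_kl t_l + b_k   -- eq. (7)"""
    tt = np.dot(t, t)
    return -2.0 * K * t / tt**2 + a @ t + b

t = np.array([0.5, 1.0, -0.3, 2.0])
# Newtonian term dominates near t_k t_k -> 0, the confinement terms at large times:
for s in (0.1, 1.0, 10.0, 100.0):
    print(s, x_of_t(s * t), v_of_t(s * t))
```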

Besides, a remark about this. Confinement does not seem to be a specific feature of QCD: we have found it ever since Maxwell. What we did is neglect the polynomial terms in the solution for the Maxwell fields: we only kept the converging kernel. Confinement rather seems to be a general feature of field theory, and this explains why we should find it in the weak model as well.

Anyway, we can see in (6) that free motion is no longer uniform, as it is with a single time parameter: the Newtonian contribution makes it tend towards zero at time infinity, whereas the confinement term makes it diverge. In physical reality, there are necessarily time values for which an equilibrium between these two antagonistic contributions occurs. A compromise is found. The velocity matrix is:

 

(7)               vjk(tl) = -2Kjtk/(tltl)² + ajkltl + bjk

 

It's clear there are time values at which this matrix vanishes, before changing sign, meaning that, even free, the motion cannot go beyond a certain distance Xj, corresponding to vjk(tl) = 0. The acceleration is:

 

(8)               ajkl(tm) = -2Kj(gkltmtm - 4tktl)/(tmtm)³ + ajkl

 

But, wait a minute. The situation is a bit more complicated than in the commutative case. Take bjk = 0 for simplicity. Then vjk(tl) = 0 happens when:

 

(9)               ajkl = 2Kjδkl/(tmtm)²

 

And what if ajkl is not invertible?... Then (9) has no solution, meaning the free motion never stops. Perpetual motion, until something comes to slow the body down.

 

We’ll see the forced motion next time.

 

 

B119: SENT BACK TO (BAD) OLD QUANTUM MEASUREMENT PROBLEM...

25/07/2015

We now have what can be called without exaggeration a HUGE worry, the kind that cannot be solved in 48 hours, as we are touching the most delicate (and still controversial) point of the foundations of quantum mechanics: the definition of the wavefunction itself, through the observability problem.

I read Schrödinger and Feynman once again. It's an excellent exercise to go back to the sources when you're stuck. The fundamental property of measurement at microscopic scales is rather easy to state: if you voluntarily restrict yourself to the mere observation of the final impacts of particles on a screen, what you get there is an interference curve, with fringes; but, as soon as you want to make your understanding of the process finer, trying to determine the paths the particles have taken from their emitting source to the screen, the fringes disappear.

In other words, as long as you don’t observe particles, only the final result, these particles behave like waves; as soon as you observe their motion, they behave like corpuscles.

Most theoreticians deduced from those Young experiments that observation or, what amounts to the same, the presence of the observer, suffices to destroy interferences, deeply modifying the behaviour of particles. Or, which is equivalent, that the observer did himself interfere with the system in such a way that he destroyed the initial interferences. In what way? Nobody knows. We only talk of a "collapse" of the wavefunction, but we're still unable to give a consistent mechanism behind it.

Many theoreticians of Schrödinger's time did not share his opinion about the wavefunction being able to potentially represent any physical object in Nature, whatever its size. Some, like Bohr, Heisenberg or Born, despite being amongst the founders of quantum theory, were convinced it was nothing else but a convenient mathematical (i.e. abstract) tool to calculate probabilities, without any deeper physical content. No "physical reality". For these people, and many others after them, measurement was the only meaningful process and the values obtained in final results the only "touchable" reality.

As for myself, I’m not fully convinced observation is the true problem: in all cases, we observe the impacts on the screen. So, we can as well observe the interference fringes (or we wouldn’t talk about them) or the “smoothed classical” curve. The “sudden reduction” arises when we add a complementary observation “inside the box”. When we try to know which path a given particle of the beam could well take to reach the screen.

Put differently, as long as we stick to the final result, we don't modify the essential nature of the object we're experimenting on in any way; if we are more curious, things immediately reduce and we lose all the information about the physical reality of this object.

“Classical” physics asserted that the physical reality of substantial objects was strictly corpuscular: any substance was made of “corpuscles”. Waves had nothing substantial, they rather were processes between substances.

The rise of so-called "wave mechanics" deeply transformed this vision of Nature. Young's experiments showed without ambiguity that, at least at microscopic levels, neither substantial matter nor even radiation could be said to be "corpuscles" or "waves". On the contrary, they revealed that their true physical reality was neither of them: we can no longer talk of the electron as a "corpuscle of matter" if it starts to behave like a wave as soon as we don't disturb its motion (direct observation); we can no longer talk of the photon as an "elementary electromagnetic wave (or radiation)" if it starts to behave like a corpuscle as soon as we observe it directly.

For Schrödinger and many others, the physical nature of microscopic objects now depended on what the observer did or didn’t do.

I cannot criticize this approach, as they tried to interpret as faithfully as possible what went straight against all our conceptions on objects and processes so far.

But the least we can say is that it has absolutely nothing “universal”…

I just cannot satisfy myself with a physics "depending on the observer's will". I'm no partisan of "hidden variables" either: violations of Bell's inequalities have been clearly established.

On another side, although he left me with the feeling that his views had evolved between his fundamental 1926 work and the 1950s, Schrödinger seemed to be deeply convinced that his concept of a "wavefunction" did not apply to the microscopic only, but to all scales. He was convinced that it did have a physical reality. But, at the same time, he was perfectly conscious from the start that it was highly unstable. So unstable, actually, that the smallest disturbance, or the first measurement on it, not only modified it, but destroyed it! In his lectures, he clearly says the wavefunction no longer exists after a measurement: it simply disappears and is replaced with a new one, just after the measurement. He insists on the fact that it's not a question of time evolution, that time has nothing to do with that, and that the only role time can play there is to make the situation even worse in the future!

Schrödinger was strongly influenced by the works on statistical physics in the second half of the 19th century, and especially by Gibbs. His fundamental equation of wave mechanics shows it: it has the structure of a diffusion equation with a complex diffusion coefficient. Many attempts, including mine, have been made to formally derive this equation. None of them are fully convincing so far. Contemporary statistical physics managed to explain why the transition from "classical" to "quantum" had to go through exponentiation, basing its argumentation on the "partition function" of "classical" statistical physics. But it still remains to explain where the amplitude of the quantum signal can well derive from…

We only say it's a "probability amplitude" because experimental results show it: it's only heuristic. We still haven't got any physical mechanism behind it to complete the transition on the phase.

Anyway, the whole present construction sounds anything but consistent. Take the problem of the previous bidouille. Whatever the objects now, even particles, we start with interference. This presupposes we do not directly observe the system: we only observe the results it gives. Now we observe it directly. The interference term should vanish. However, we now make a 3-body system: the first two bodies + the observer, right? They are all assumed to interfere, according to the principles of wave mechanics and quantum measurement theory. So, this gives us 3 wavefunctions (one more, the observer's). That's one more amplitude a3, one more phase θ3, to combine with a12 and θ12. We find similar formulas for the resulting amplitude and phase. Now, this should give in the end a1² + a2², since interference is destroyed.

What could honestly justify that both a3 and θ3 take such "suitable" values, as long as we observe the system, that the result is the vanishing of the interference between ψ1 and ψ2?...

Okay. Forget about a possible observer's wavefunction acting. Then the measurement process should be such that, all along it, the phase shift θ1 - θ2 becomes equal to an odd multiple of π/2, namely (2n+1)π/2, n ∈ Z, everywhere inside the system. Again, why? How could observation modify the dynamics of the phase shift? In what way?

You won't have lost sight of the fact that the wavefunction was defined as a probabilistic distribution in Euclidean space along time: ψ = ψ(x,t). So, it remains a signal under its conventional form. This was justified by the fact that all "useful" observations happen in ordinary space and evolve in ordinary time: what might or might not happen outside this frame is normally unreachable to the observer, and therefore considered "useless" or "meaningless".

Quantum mechanics was made as a, if not the, physical theory of measurement and observation. The only quantities that matter are called "observables". And when people extended its principles to space-time relativity, they agreed in saying that the Galilean concept of Schrödinger's wavefunction couldn't hold anymore as such and had to be modified to satisfy the transformation properties of the larger Lorentz rotation group and, above all, the finiteness of the speed of light: the "wavefunction" thus became more a "state function" or some "field operator" acting on population states (or equivalently, energy levels).

Somewhat ironically, Galilean wave mechanics derived from Planck’s work on oscillators and couldn’t reach the goal of properly describing a system of oscillators confined into a box when time-relativistic effects become non-negligible…

Besides, Schrödinger stayed reluctant to believe in a possible "collapse" of his wavefunction into discrete values ("quantum jumps"), even though he defended the results obtained by Planck…

To end this discussion, let us recall that Prigogine, long after Schrödinger, based his own arguments on deterministic chaos and the possible transition from “classical” to “quantum” (and back) through chaos to give much finer explanations on the structural instability of the wavefunction as an essentially local dynamical object.

We now represent ψ(x,t) not as a "wavefunction" or a "state function", but rather as a trajectory in some "wave space". However, we have only shifted the difficulty to determining whether this "wave space" has any physical reality or is merely one more mathematical tool.

 

Since de Broglie suggested associating a wave to any corpuscle, I now wonder why Schrödinger did not try to change frame and apply the principles of statistical physics to the new one, even if the final results are to be found in conventional 3-space. Just to see. Instead of that, he applied those principles almost "bluntly" to quantum waves, while staying in E3.

Let's instead consider a wave space. This is a functional space over E3 and the real line R (for time). A "local coordinate" on this wave space is a pair [ψ(x,t), ψ*(x,t)], since quantum waves have to be complex-valued (as Feynman pointed out in his lectures, contrary to the situation in classical physics, real-valued waves are not sufficient in quantum mechanics, we also need their imaginary parts – just as for the refraction index, we need its imaginary component to calculate the reflected part). Consequently, any physical field f(x,t) on E3xR will give way to a "superfield" F[ψ(x,t), ψ*(x,t)] on the wave space. Such a "superfield" (which has nothing to do with supersymmetry, by the way) is clearly a functional over E3xR. Physically, this represents the transition from "corpuscular" (or "point-like") to "wavy".

There's no apparent objection to applying the principles of statistical physics to waves ψ(x,t). We just have to be careful about the dynamics involved: statistical physics was built for substantial media, made of corpuscles. Waves do not collide, they interfere. Precisely. So, instead of, say, a gas made of N corpuscles randomly colliding, we rather have a non-substantial medium made of N waves randomly interfering. These waves do not need to be "wavefunctions" or "wavepackets": as points x of 3-space are elementary, the waves ψ serving as coordinates in the wave space should rather be taken as elementary as possible, i.e. as monochromatic plane waves. Thus, any "function" F[ψ(x,t), ψ*(x,t)] of these basic waves will be able to give more complex waves, such as polychromatic ones, wavepackets (compact waves), etc., according to the shape of F.

A system made of N free corpuscles in ordinary 3-space has 3N degrees of freedom; a system made of N free waves in wave space will have 2N degrees of freedom in this space (but obviously 2N infinities in E3xR, indicating we're now dealing with continuous and no longer discrete objects).

We can even say more: we can say that (ψ,ψ*) is the location of some discrete object in wave space (the equivalent of the corpuscle in E3), while [ψ(x,t), ψ*(x,t)] represents a corresponding continuous object in E3xR.

Changing frame, leaving E3 for a frame better adapted to waves in E3, we have “discretized” waves without doing any special physical process. Each wave there can now be viewed as an isolated entity, whereas it was seen as a continuous process in E3.

 

Can we solve this way the measurement problem and the “collapse” of the “wavefunction”?

Let us rename our local coordinates in wave space (φ,φ*). A "wavefunction" or "probabilistic wavepacket" in E3xR can be built in wave space as some combination ψ[φ(x,t), φ*(x,t)]. If that combination is linear, we have a superposition of monochromatic plane waves. We can even build it as ψ[φ(x,t), φ*(x,t),x,t]: such last relations are local on E3xR. But let us first restrict to global ones. ψ[φ(x,t), φ*(x,t)] is the wavefunction of a system we don't directly observe. As soon as we do, ψ will "degenerate". What's interesting with (φ,φ*) is that they always correspond to perfectly determined states with finite energy and momentum in E3xR. Should our measurement give us such a state, then ψ[φ(x,t), φ*(x,t)] should reduce to the corresponding "wavy coordinate" [φ(x,t), φ*(x,t)]. And this corresponds to ψ = δ, the Dirac distribution. More precisely, we should have:

 

ψ[φ(x,t), φ*(x,t)] = δ[φ(x,t) - φ0(x,t), φ*(x,t) - φ0*(x,t)]

 

where [φ0(x,t), φ0*(x,t)] is what we obtain.

We better see what may happen with local relations. Assume the result of the measurement occurs at t = 0. Then, at all t < 0, ψ[φ(x,t), φ*(x,t),x,t] is some physical state we don't observe anyway. At t = 0, this physical state is reduced to δ[φ(x,0) - φ0(x,0), φ*(x,0) - φ0*(x,0),x,0] = ψ[φ(x,0), φ*(x,0),x,0] and it remains like this until a new measurement is done.

Well, I don't know if this is a possible explanation or even a solution, but what I can see for the time being is that we no longer have any discontinuity in the measurement process: in Schrödinger's (et al.) interpretation (in E3xR), the discontinuity was on the wavefunctions [ψ(x,t), ψ*(x,t)]. In wave space, we have no discontinuity on [φ(x,t), φ*(x,t)] at all, and ψ[φ(x,0), φ*(x,0),x,0] has no reason to show discontinuities, unless it has some very special behaviour in E3xR. The transition is rather continuous. As φ(x,t) is of the form a.exp[±i(k.x - ωt)] with constant amplitude a (the most basic waves!), φ(x,0) is perfectly regular.
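
A toy picture of what I mean, in code (entirely my own illustration, with made-up numbers; the wave-space "coordinates" are monochromatic plane waves): before the measurement, ψ is a spread-out combination of modes; the measurement returns one definite mode φ0, and φ0(x,0) is a perfectly smooth function of x, so nothing discontinuous shows up in ordinary space.

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 2001)
ks = np.linspace(0.8, 1.2, 41)          # a band of wave numbers
w = np.exp(-((ks - 1.0) / 0.1)**2)      # weights of the combination in wave space

def plane_wave(k, x, t, c=1.0):
    # elementary wave-space coordinate phi_k(x,t) = exp[i(k.x - w t)], w = c k
    return np.exp(1j * (k * x - c * k * t))

# Unobserved system: a wavepacket, i.e. a linear combination over the modes.
psi_before = sum(wk * plane_wave(k, x, 0.0) for wk, k in zip(w, ks))

# Measurement at t = 0 returns one definite mode k0: in wave space the state
# jumps to that single coordinate, but in x the result is perfectly regular.
k0 = ks[np.argmax(w)]
phi_after = plane_wave(k0, x, 0.0)

print(np.ptp(np.abs(psi_before)))   # localized envelope: amplitude varies with x
print(np.ptp(np.abs(phi_after)))    # constant amplitude: ~0 spread, no discontinuity
```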

That’s what I see for the time being: still better than nothing…

 

As for interferences, I’ll check for the next time.

 

 

B118: WHEN TRYING TO APPLY WAVE INTERFERENCES TO BIOLOGY LEADS TO NEW SURPRISES...

22/07/2015

I don't know if drawing (3) in the last bidouille is visible, it's not on this computer. Yet, it has been correctly converted, as I could check last time. Should anyone have visualization problems, please let me know and I'll ask the webmaster for further info.

Searching for some "nice" properties of wavepackets that could show possible applications to biology, I did find interesting results… but I also flushed out one more rabbit…

Just can’t believe this…

Okay. Let’s review some general properties first.

According to one of the basic postulates of quantum mechanics (it's still a postulate…), the corpuscular energy of a given physical system equals the energy of its corresponding wavepacket:

 

(1)               mc²/[1 – v²(t)/c²]^(1/2) = hf(t)

 

where m is the mass of the corpuscle at rest and f(t) the frequency of its wavepacket. This equality extends to the momentum so that:

 

(2)               mvi(t)/[1 – v²(t)/c²]^(1/2) = ħki(t)  ,  vi(t) = [c , v(t)]

 

It will be enough to consider (1). The frequency of the signal in the reference frame at rest (proper frame) of the particle is:

 

(3)               f0 = mc²/h

 

Unless m varies in time, f0 is a constant. Still more generally, if θ(x,t) is the phase angle of the wavepacket ψ(x,t) = a(x,t)exp[iθ(x,t)], then ki(x,t) = ∂θ(x,t)/∂xi and 2πf(x,t) = ∂θ(x,t)/∂t, f being its frequency at point x, time t. Using (1), it can always be associated with a velocity field v(x,t). Remember v depends only on time for "perfect" rigid bodies. Any real body remains subject to deformations, however small, so that different parts of the body may not move at exactly the same speed.
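
As a quick order-of-magnitude check of (3), with the electron as my example mass (not a value used in the post):

```python
# f0 = m c^2 / h for the electron
m = 9.109e-31      # kg
c = 2.998e8        # m/s
h = 6.626e-34      # J.s

f0 = m * c**2 / h
print(f"f0 = {f0:.3e} Hz")   # about 1.24e20 Hz
```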

Now take two cellular wavepackets ψ1(x,t) = a1(x,t)exp[iθ1(x,t)] and ψ2(x,t) = a2(x,t)exp[iθ2(x,t)] associated with two biological cells. It has been shown in previous bidouilles that there's no reason why such wavepackets should not exist, or would vanish for some "suitable" reasons, as they result from constructive interferences of less complex wavepackets (namely, those of proteins). When the two cells interact, their wavepackets interfere. We're interested in this interference, to see what may or may not happen. We thus have a resulting wavepacket:

 

(4)               ψ = aexp(iθ) = ψ1 + ψ2 = a1exp(iθ1) + a2exp(iθ2)

 

Expressing the amplitude a from the amplitudes of the two initial wavepackets and their phase shift is easy:

 

(5)               a² = a1² + a2² + 2a1a2cos(θ1 - θ2)

 

Important detail I forgot to mention: all amplitudes and frequencies are assumed non-negative, as we’re dealing with matter.

Expressing the resulting phase from the initial ones requires more mathematics. It turns out to be convenient to write this relation with both the half-sum and the half-difference of the phases. We first use the properties of the exponential function to put (4) under the form:

 

ψ = a1exp[i(θ1 + θ2)/2]exp[i(θ1 - θ2)/2] + a2exp[i(θ1 + θ2)/2]exp[-i(θ1 - θ2)/2]

= exp[i(θ1 + θ2)/2]{a1exp[i(θ1 - θ2)/2] + a2exp[-i(θ1 - θ2)/2]}

= {cos[(θ1 + θ2)/2] + isin[(θ1 + θ2)/2]}{(a1 + a2)cos[(θ1 - θ2)/2] + i(a1 – a2)sin[(θ1 - θ2)/2]}

= (a1 + a2)cos[(θ1 + θ2)/2]cos[(θ1 - θ2)/2]{1 – a12tan[(θ1 + θ2)/2]tan[(θ1 - θ2)/2]} +

+ i(a1 + a2)cos[(θ1 + θ2)/2]cos[(θ1 - θ2)/2]{tan[(θ1 + θ2)/2] + a12tan[(θ1 - θ2)/2]}

 

From which we deduce that:

 

(6)               tan(θ) = [tan(+) + a12tan(-)]/[1 – a12tan(+)tan(-)]

 

where a12 = (a1 – a2)/(a1 + a2) and tan(+) and tan(-) are respectively short for tan[(θ1 + θ2)/2] and tan[(θ1 - θ2)/2].

The process is nothing new: it’s commonly used in modulation. Formula (6) resembles tan(a+b). It does reduce to the tangent of a sum of phase angles when a12 is unity.
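
A quick numerical check of (5) and (6), with arbitrary test amplitudes and phases of my choosing:

```python
import numpy as np

a1, a2 = 1.3, 0.7
th1, th2 = 0.9, -0.4

psi = a1 * np.exp(1j * th1) + a2 * np.exp(1j * th2)
a, th = np.abs(psi), np.angle(psi)

# (5): a^2 = a1^2 + a2^2 + 2 a1 a2 cos(th1 - th2)
print(np.isclose(a**2, a1**2 + a2**2 + 2*a1*a2*np.cos(th1 - th2)))      # True

# (6): tan(th) from the half-sum and half-difference of the phases
a12 = (a1 - a2) / (a1 + a2)
tp = np.tan((th1 + th2) / 2)    # tan(+)
tm = np.tan((th1 - th2) / 2)    # tan(-)
print(np.isclose(np.tan(th), (tp + a12*tm) / (1 - a12*tp*tm)))          # True
```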

In the general case, all phases and amplitudes depend on time and space variables. Never mind, we need to find the expression for the resulting frequency f. So, we have no other choice but to differentiate (6) with respect to time. Easy, but quite painstaking. The result is:

 

(7)               f = ½ (f1 + f2) + ½ (f1 – f2)a12[1 + tan²(-)]/[1 + a12²tan²(-)] + a12’tan(-)/[1 + a12²tan²(-)]

 

with the prime standing for the time derivative.

Right. We'll be back to formula (7) soon, but first have a look at (5). As -1 ≤ cos(θ1 - θ2) ≤ +1, we have (a1 – a2)² ≤ a² ≤ (a1 + a2)². The resulting amplitude is therefore minimum each time the phase shift θ1 - θ2 is an odd integer multiple of π:

 

WHEN THE TWO CELLULAR WAVEPACKETS HAVE OPPOSITE PHASES, THE RESULTING WAVEPACKET IS THE SMALLEST POSSIBLE.

 

It can even be zero if a1 = a2! We have here a simple possible explanation of the wavepacket reduction at large scales, requiring no arguments on dissipation. Actually, this is somewhat similar to spin opposition in Fermi pairs: the resulting spin is zero. Here, the idea is the same.

On the opposite, a will be maximum each time θ1 - θ2 is an even integer multiple of π:

 

WHEN THE TWO CELLULAR WAVEPACKETS ARE IN PHASE, THE RESULTING WAVEPACKET IS THE HIGHEST POSSIBLE.

 

To reuse the spin analogy, it's as if the two spins pointed in the same direction. So, it wouldn't be Fermi anymore, but Bose. The square of the resulting amplitude is then even greater than the sum of the squares of the initial amplitudes.

In between, each time θ1 - θ2 is an odd integer multiple of π/2, there will be no interference:

 

TWO WAVEPACKETS WITH PHASE SHIFT θ1 - θ2 = (2n+1)π/2, n ∈ Z, DO NOT INTERFERE. THE RESULTING AMPLITUDE IS JUST a² = a1² + a2².

 

It’s still greater than each of the initial amplitudes, but no greater than their sum. We do have an amplification, but not the best one.

We now turn to (7). When θ1 - θ2 is constant, the two wave 4-vectors are equal: k1i = k2i. So, the two frequencies are equal: f1 = f2 and the second contribution in (7) vanishes. If, simultaneously, a12 is constant, the third contribution vanishes as well and the resulting frequency reduces to the mean frequency fmoy = ½ (f1 + f2).

A surprise occurs as soon as θ1 - θ2 varies and a12 is not zero (i.e. a1 and a2 are different), even if the initial amplitudes are constant. Then (7) reduces to:

 

(8)               f = ½ (f1 + f2) + ½ (f1 – f2)a12[1 + tan²(-)]/[1 + a12²tan²(-)]

 

which is not positive definite! As a consequence, we can find f = 0 and even f < 0! f = 0 happens for:

 

(9)               (f1 + f2)/(f1 – f2) = -a12[1 + tan²(-)]/[1 + a12²tan²(-)]

 

Can we have this everywhere, anytime? Yes, since f1 and f2 vary. So:

 

WHEN (9) OCCURS, THE RESULTING FREQUENCY MAY BE GLOBALLY ZERO.

 

Yet, we're dealing with matter, positive amplitudes and frequencies. The point is not really to find a zero frequency. The point lies in the fact that, if we go back to (1), it sends us back to a zero mass… Yet, absolutely no mass has been destroyed. We only find a wavepacket whose frequency corresponds to no matter… or the quantum postulate is wrong.

To date, that is not the case: the de Broglie postulate has been verified.
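
For what it's worth, here is a small numerical check (test values of mine) that the resulting frequency (8) really does cross zero once the phase shift varies, with constant but unequal amplitudes:

```python
import numpy as np

f1, f2 = 1.0, 3.0     # the two frequencies
a12 = 0.3             # (a1 - a2)/(a1 + a2), constant amplitudes

def f_resulting(delta):
    """Equation (8), with delta = th1 - th2 and tan(-) = tan(delta/2)."""
    tm = np.tan(delta / 2)
    return 0.5*(f1 + f2) + 0.5*(f1 - f2)*a12*(1 + tm**2)/(1 + a12**2 * tm**2)

delta = np.linspace(0.01, np.pi - 0.01, 100_000)
f = f_resulting(delta)
print(f.min(), f.max())                               # the range straddles zero
print("f = 0 near delta =", delta[np.argmin(np.abs(f))])
```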

Anyway, we are led to the same result (f zero or even negative) in the general situation, where all data are variable. Some transformations show that f = 0 gives the following Riccati equation for 1 + a12tan(-) = b12:

 

(10)           b12’ + fmoyb12² - f1b12 + f1 = 0

 

Even if this equation cannot be solved through quadratures, there's no reason why it should only have trivial solutions. Besides, b12 = const. is verified iff it's a root of fmoyb12² - f1b12 + f1 = 0 and only involves a12tan(-) = const.

 

THE RESULTING FREQUENCY MAY BE ZERO OR EVEN NEGATIVE IN THE GENERAL CASE, WHERE AMPLITUDES AND PHASES VARY.

 

In the "least bad" situation, what could a resulting wavepacket with non-zero amplitude but zero frequency correspond to? It's generated by matter, through matter interaction, and it no longer corresponds to any material pair!!! 8(((((

Notice this remains true for the wave 4-vector…

 

Well, the only explanation I found so far is:

 

WHAT WE GET IS A PHYSICAL ENTITY (A WAVEPACKET) THAT NO LONGER SENDS BACK TO ANY MATERIAL SUPPORT…

 

The thing becomes even worse when f < 0: then the resulting wavepacket would send back to an antimaterial support!!!

There's no salvation from thermodynamics: should we take phase angles as the ratios of thermal energies and temperatures, the results would be exactly the same… we would only replace the mechanical frequency with the thermal one, that's all.

 

Are we opening one more new door or is it just a particular property of two-cell interaction?

 

Let's have a look at the N-cell interaction. The resulting amplitude satisfies:

 

(11)           a² = Σn=1..N an² + 2Σn<p anapcos(θn - θp)

 

For the resulting phase angle, we have:

 

ψ = aexp(iθ) = Σn=1..N ψn = Σn=1..N anexp(iθn)

 

giving:

 

(12)           tan(θ) = [Σn=1..N ansin(θn)]/[Σp=1..N apcos(θp)]

 

Thus:

 

D²[1 + tan²(θ)]f = Σn=1..N Σp=1..N {[anfncos(θn) + an'sin(θn)]apcos(θp) + [apfpsin(θp) – ap'cos(θp)]ansin(θn)}

= Σn=1..N Σp=1..N {anap[fncos(θn)cos(θp) + fpsin(θn)sin(θp)] + (anap)'sin(θn)cos(θp)}

= Σn=1..N Σp=1..N {anap[½ (fn + fp)cos(θn)cos(θp) + ½ (fn - fp)cos(θn)cos(θp) + ½ (fn + fp)sin(θn)sin(θp) - ½ (fn - fp)sin(θn)sin(θp)] + (anap)'sin(θn)cos(θp)}

= Σn=1..N Σp=1..N {anap[½ (fn + fp)cos(θn - θp) + ½ (fn - fp)cos(θn + θp)] + (anap)'sin(θn)cos(θp)}

D = Σp=1..N apcos(θp)  ,  D² = Σn=1..N Σp=1..N ancos(θn)apcos(θp)

D²[1 + tan²(θ)] = D² + [Σn=1..N ansin(θn)]² = Σn=1..N Σp=1..N anapcos(θn - θp)

 

Set f = 0. Even for all an constant, we still find:

 

(13)     Σn=1..N Σp=1..N anap[½ (fn + fp)cos(θn - θp) + ½ (fn - fp)cos(θn + θp)] =

= Σn=1..N an²fn + Σn≠p anap[½ (fn + fp)cos(θn - θp) + ½ (fn - fp)cos(θn + θp)] = 0

 

and we still have non trivial solutions because of the frequency shifts fn - fp.

So, it’s not a particular property of two cells.
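
As a sanity check on (11) and (12), a short numerical test with arbitrary amplitudes and phases (random test values of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
an = rng.uniform(0.2, 2.0, N)
thn = rng.uniform(-np.pi, np.pi, N)

psi = np.sum(an * np.exp(1j * thn))
a, th = np.abs(psi), np.angle(psi)

# (11): a^2 = sum_n an^2 + 2 sum_{n<p} an ap cos(thn - thp)
cross = sum(an[n]*an[p]*np.cos(thn[n]-thn[p]) for n in range(N) for p in range(n+1, N))
print(np.isclose(a**2, np.sum(an**2) + 2*cross))                                # True

# (12): tan(th) = sum_n an sin(thn) / sum_p ap cos(thp)
print(np.isclose(np.tan(th), np.sum(an*np.sin(thn)) / np.sum(an*np.cos(thn))))  # True
```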

And it's still something different from an energy-free soliton, for we would need to have a gyroscopic term in the Lagrangian density of the wavepacket, one that would depend on it and couple to its space-time derivatives.

 

No. Here we have "something" we can clearly identify as a physical entity, generated by cell interaction, but "free of matter" (f = 0) or able to deliver a negative energy (f < 0). Now, it does exist, since its amplitude a is zero only when a1 = a2 and θ1 - θ2 = (2n+1)π, a very specific situation.

 

 

B117: A CONTROL COMMAND CHAIN FOR THE CNS

17/07/2015

We're at last back to neurobio, as I think I finally understood how the brain works. If what I'm going to talk about today is correct, then the central nervous system works in a completely different way from Turing machines and the two just cannot be compared.

My central problem for long was: "how the hell can neuron networks work, according to what biologists teach us?" There was nothing coherent.

Back one more time. Electrical synapses are easy to understand: the nervous signal can propagate two-way, down or back up, and there's a 100% chance it's transmitted from one neuron to another. Chemical synapses are the exact opposite: the nervous signal can only propagate one-way, it vanishes in the presynapse to the benefit of a scattering wave, neurotransmitters may or may not reach their receptors and anyway, cherry on the cake, should the incoming resultant be lower than the threshold, leaving the receiving neuron silent, this last one can always self-activate by opening its Ca2+ channels (oscillation mode).

Question: what's the need for a transmission, then?

If you transmit a signal, it’s usually made to be received…

If the network is so made that, whatever the information carried through it, a given neuron can “decide” to remain silencious or activate, then there’s no need for a collective participation.

That's the reason why I didn't go further than the neuron function (B36). Besides, still according to biologists themselves, the cortex looks more like a "jungle of connections" than anything "structured", in the sense of "dedicated", which does not rule out specialized areas for all that.

Now, it was collectively agreed that chemical synapses were a significant evolution of nervous systems. So, there should be a much more thorough reason for that. Indeed, why would it be a “significant evolution” if the causal link was broken and if there was no dedicated circuitry at all at the global scale?

This is all absurd if we try to reason in terms of time sequences or even recursivity and it confirms that the brain does definitely not work as a Turing machine, i.e.:

 

(1)               {set of given initial data} -> [convenient program] -> {set of results}

 

Right on the contrary, it works this way (as shown by bioelectrical data):

 

(2)               {set of results or goals to reach or satisfy} -> [proper graphs] <- {set of needs}

 

In (1), results are unknown or programs would be useless. In (2), needs and goals are known in advance and the system has to find paths in its neuron graph to satisfy these goals. For a given goal to reach, there is not a unique graph: many possibilities exist, that lead to the same result.

Apart from mathematical calculus (the results of which are not known in advance), all more "concrete" goals are based on the principle according to which the system must remain homeostatic. So, it's all about control-command, a discipline I was not so bad at.

Let’s take an example: thirst.

The need is rehydrating, as thirst comes from a lack of water.

The goal is to get back to normal hydration: that's the result. Our system must remain homeostatic (i.e. regulated) with respect to water.

To satisfy this need and reach this goal, we can draw a control-command chain:

 

(3)               [block diagram: the need N and the fed-back result R enter a comparator, giving the error E = N – R; E drives the neuron graph G, which issues the command C; C drives the process P, whose output is the result R, fed back to the comparator]

(N = need, E = error, G = graph, C = command, P = process, R = result)

 

The loop is generally not linear. G and P are operators. We have:

 

(4)               E = N – R

(5)               C = G.E

(6)               R = P.C

 

As for any other control-command system, we must find C = 0 as soon as E = 0, that is, when the goal is reached. The graph operator G acts as long as E > 0, i.e. the need remains greater than the result. As said above, G and P aren’t linear in general. The feedback on R guarantees regulation, so that the whole chain is homeostatic: when the system feels a biological need, the neuron graph is activated to control-command the corresponding process until the feeling vanishes. I’m thirsty (E = N), I drink (succession of orders and motions, E = N – R, R increases, E decreases) until I have enough (E = 0).
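
To make the loop (4)-(6) concrete, here is a toy discrete-time run of the thirst example (entirely my own sketch: G and P are reduced to scalar gains, and I let the process P accumulate the commands over time so that R builds up as you drink, as described above):

```python
N = 1.0     # need (normalized lack of water)
G = 0.8     # "graph" gain: how strongly the error drives the command
P = 0.5     # "process" gain: how much each command actually rehydrates

R = 0.0     # result achieved so far
for step in range(30):
    E = N - R            # (4) error = need minus result
    if E <= 1e-3:        # goal reached: the graph turns silent, C -> 0
        break
    C = G * E            # (5) command issued by the neuron graph
    R = R + P * C        # (6) the process turns the command into a result
    print(f"step {step:2d}: E={E:.3f}  C={C:.3f}  R={R:.3f}")

print("homeostasis restored after", step, "steps")
```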

In computer science algorithmics, programming is based on two fundamental principles that have to be satisfied: proof of correctness (proof that the program does compute the required function) and proof of termination (or halting: proof that the program does not loop forever).

Here, these two principles remain (whatever the paths in the neuron graph, the graph operator must properly command the process so that the goal is reached within a finite time). In addition, we have the general homeostatic condition on the system (the system must be regulated, so that a decrease in something is compensated for by an intake from the surrounding environment, or a saturation in something is reduced by a decrease in the corresponding substances).

There are different yet equivalent ways of solving (4-6), depending on which quantity we’re interested in. We can, for instance, solve it for the error E. Then, we get:

 

(7)               E + P.(G.E) = (Id + P.G).E = N

 

A priori, there’s absolutely no reason why the graph and the process ops should commute. If we solve for the command C, we get:

 

(8)               (Id + G.P).C = G.N

 

And if we solve for the result R:

 

(9)               (Id + P.G).R = P.G.N

 

If the operator on the left is invertible, we can express R as a function of N:

 

(10)           R = (Id + P.G)-1.(P.G).N

 

and the final condition R = N can only be reached in finite time iff (if and only if) P.G vanishes. Since, in practice, G induces P (motor system), this normally reduces to G = 0 (the solicited graph turns silent) at time tend when the chain starts at time t = 0.

 

We no longer care whether each individual neuron composing the graph transmits the information E coming out of the comparator for sure; all we ask is that this information be transmitted, by one path or another, and produce the proper command on the process.

X-net was conceived for information to reach the receptor for sure whatever the road used.

Somewhat similarly here, paths are chosen so that the information in gives a command out and the whole regulation chain works properly. Remember these paths are not static, but dynamical: should the info get "stuck" somewhere, another path would be taken from that point. As a result, individual transmission abilities become secondary. What becomes relevant is the collective ability to transmit information throughout the whole graph. Errors may occur, they are even a characteristic feature of animal brains, but they are not a problem here: 1) any error occurring during the regulation process appears back in the difference E and is to be corrected at the next pass through G and 2) neuron networks integrate "code correction" in this ability of individual neurons to self-activate. So, if the incriminated neuron remains silent while it should be active, it can always self-correct by opening its calcium channels (unless it's damaged, of course).

Suppose now such damage occurs between two processes. Then plasticity applies. It's an amazing property of evolved systems that they are able either to use other paths to reach the same goal (self-adaptation) or to make new connections between healthy cells that will serve as as many new paths.

This is simply unreachable with inert materials entering prefixed programs.

Only when the damage is too large (involves too many neurons) or localized but touching critical areas will the brain be unable to satisfy the corresponding needs.

 

When we take for G the whole neocortex, the induced process P is called “consciousness”. We have a (large) set of needs in and a (large) set of goals-to-reach out.

Comas correspond to G = 0 at different scales. The larger the scale, the deeper the coma.

Cerebral death corresponds to G = 0 globally. Eqs (7-9) then give E = N (no need can be managed by the system on its own), C = 0 (control-command off), R = 0 (no reaction of the system). Let t = 0 be the instant where this situation occurs. We then have G(0) = 0.

Apart from artificial coma, all we can do for the time being is to plug in artificial substitutes to the natural command C to maintain vital processes. We can act upon P, but not G. So, we have to shunt G with an artificial regulating system, say A, and G becomes G' = G + A in the above equations. When the system works on its own, A = 0 and G' = G. When it no longer does, G = 0 and G' = A.

The question becomes really interesting when, after a delay T, the system restarts on its own, while nobody expected it. Quite funny…

Because the loop (3) is rather general. When there's no serious damage, G(0) = 0 looks like a "secure mode". This can be understood since the brain and the heart are the two most vital organs in the body, so the CNS will protect itself as best it can against dysfunctions.

 

What will make it go out of this secure mode, if it can no longer make any decision on its own, since all its graphs are off?...

 

The only way out is some external reactivation. Not necessarily external to the being, but at least external to its biological body. Either another Being or the same being in another state.

I see no other solution.

 

 

 

 

 
