doclabidouille

B123: (PARTIAL) CONCLUSION ON NC GEOMETRY

1 September 2015

 

I’m gonna stop there, for the time being, with non-commutative geometry, and turn to something else, much easier to grasp and also much clearer. Field theory in a NC geometry can yield extended properties of matter and interaction fields, but we quickly lose ourselves in technical details and, anyway, I doubt it can have applications to large scales and the present universe. Should it be useful for anything, it would probably concern the (very) early universe. Obviously, through cascades of symmetry breaking, it could then help us understand the organization of our observable universe at different stages. But this has to do with cosmology, not parapsychology.

I shall only give a general method for constructing the non-commutative version of the Riemann integral. The easiest way is to start from the definition of the first NC derivative of a NC-scalar function Fi(X) of a NC-vector variable X, formula (1) of the last bidouille. Integration (i.e. continuous summation) of dFi(X) leads to:

 

(1)               $F_i(X) = F_i(X_0) + \int_{X_0}^{X} dF_i(Y) = \int_{X_0}^{X} [\mathrm{Tr}(dY\,\partial)]\,F_i(Y)$

 

where the limits X0 and X have to be taken matrix-valued [elements of M4(R)]. In components:

 

(2)               $F_i(x_{lm}) = F_i(x_{0lm}) + \int_{x_0}^{x} dy_{jk}\,\partial_{kj}F_i(y_{lm}) = F_i(x_{0lm}) + \int_{x_0}^{x} dy_{jk}\,F'_{kji}(y_{lm})$

 

with $x_0$ and $x$ to be understood as $(x_0)_{lm}$ and $x_{lm}$, and $\partial_{kj} = \partial/\partial y_{kj}$ the derivative with respect to the integration variable $y_{kj}$.

Following this procedure, multiple integrals as well as the Lebesgue integral should not be difficult to extend.
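Formula (1) can be sketched numerically: for a hypothetical test function F(Y) = Tr(Y²), whose matrix gradient is known exactly, summing Tr(dY ∂)F along a straight path of 4x4 matrices from X0 to X recovers F(X) − F(X0). Everything below is purely illustrative (numpy, midpoint rule), not code from the text:

```python
import numpy as np

# Test function on M4(R) and its exact gradient, chosen so that the
# NC differential Tr(dY.∂)F can be evaluated in closed form.
def F(Y):
    return np.trace(Y @ Y)

def grad_F(Y):
    # G[j, k] = dF/dy[k, j], so that Tr(dY @ G) = sum_{k,j} dy[k,j] * dF/dy[k,j]
    return 2.0 * Y

rng = np.random.default_rng(0)
X0 = rng.standard_normal((4, 4))
X = rng.standard_normal((4, 4))

# straight-line path Y(s) = X0 + s (X - X0), s in [0, 1], cut into n steps
n = 2000
dY = (X - X0) / n
total = 0.0
for s in np.linspace(0.0, 1.0, n, endpoint=False):
    Y = X0 + (s + 0.5 / n) * (X - X0)   # midpoint of each step
    total += np.trace(dY @ grad_F(Y))   # accumulate Tr(dY.∂)F

print(total, F(X) - F(X0))  # the two values should agree closely
```

The midpoint rule is exact here because F is quadratic along the path, so the check is essentially to machine precision.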

 

 

 

B122: NC DYNAMICS - 3

12 August 2015

I was right not to insist any longer yesterday, as I faced a small but unimportant technical problem.

When doing geometry, the most frequent difficulty lies in the mental representation of things: as long as your mental image is not clear, you face difficulties.

I should have noticed this:

 

NC-SCALARS ON M4(R) ARE DIAGONAL MATRICES.

 

As simple as that, and it greatly simplifies things, as we can extend all constructions in commutative tensor theory (symmetric and skew-symmetric products, traces and invariants) without difficulty.

Whatever we do in M4(R), we should keep in mind that the space-times M4j are actually states of the space-time M4. As long as we don’t “lift the degeneracy” by any means, there seems to be a “single” space-time M4, more precisely, a 4D space-time in a single state. This is the so-called “degenerate state” of space-time. “Lifting the degeneracy” means we separate energy levels so that different states appear. These physical states can correspond to configurations. In the SU(3,1) unified gauge model [or even U(3,1), if we lift the restriction to unimodularity, which does not restrict generality], 4 such states or configurations are considered. Each of them is assigned a 4D space-time M4j (j = 1,2,3,4): it’s nothing else than M4 “in the j-th state or configuration”. “Reality levels”, some would say.

So, fixing a scalar quantity, say x, on M4 implies this scalar is somehow “degenerate”: it’s a mixture of purer states. If we separate the 4 states of M4, we find a 4-vector quantity xj: for each state j, xj is a C-scalar on M4j.

On the other hand, we can always extend any scalar quantity into a 4-vector. Let x1 be such a C-scalar on M41. It’s equivalent to the 4-vector of components (x1,0,0,0). Take x2, another C-scalar on M42: to it corresponds the 4-vector with components (0,x2,0,0). Similarly, we have x3 = (0,0,x3,0) on M43 and x4 = (0,0,0,x4) on M44. It amounts to exactly the same thing as saying the xj are “states” of x on the “states” M4j of M4.

“Gluing together” those four 4-vectors, we obtain a diagonal matrix (or tensor) on M4(R): this is what we call a “NC-scalar” on M4(R). Writing X = (xij)i,j=1,2,3,4 a matrix on M4(R), it’s always possible to identify a C-4-vector (xi)i=1,2,3,4 of M4 with the diagonal matrix D(X) = (xii)i=1,2,3,4, so that we have, in components, xi = xii (i = 1,2,3,4).
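The identification above can be sketched in a few lines of numpy; this is purely illustrative code, not the author’s, with the four “states” glued together exactly as described:

```python
import numpy as np

def to_nc_scalar(x):
    """C-4-vector (x_i) -> NC-scalar on M4(R): the diagonal matrix (x_ii)."""
    return np.diag(x)

def to_c_vector(X):
    """matrix X on M4(R) -> the 'degenerate' C-4-vector x_i = x_ii."""
    return np.diag(X).copy()

x = np.array([1.0, 2.0, 3.0, 4.0])

# "gluing together" the four 4-vectors (x1,0,0,0), (0,x2,0,0), (0,0,x3,0), (0,0,0,x4)
states = [np.diag([x[j] if i == j else 0.0 for i in range(4)]) for j in range(4)]
D = sum(states)

assert np.allclose(D, to_nc_scalar(x))   # the glued matrix is diag(x1,...,x4)
assert np.allclose(to_c_vector(D), x)    # round trip back to the C-4-vector
```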

 

And this considerably simplifies our task.

 

To begin with, consider a C-scalar function f of the C-scalar variable x on M4. The image y = f(x) is again a C-scalar on M4. All these scalars must now be seen as “degenerated” quantities of NC-scalars xi, yi and fi on M4(R), so that yj = fj(xi): commutatively, we now find a 4-plet of scalar functions fj, each depending on the 4-plet of scalar variables xi. Or else, a 4-vector field over M4. If we now identify all these 4-vectors with NC-scalars on M4(R), i.e. diagonal matrices, we equivalently find yjj = fjj(xii): this is a particular case of the much more general functional relation yij = fij(xkl) on M4(R), between NC-quantities.

 

ANY VECTOR FIELD OVER M4 IS A NC-SCALAR FIELD OF A NC-SCALAR VARIABLE OVER M4(R).

 

Consider now a C-tensor yijk of order 3 on M4. We can rewrite it as (yi)jk. On M4(R), yi is a NC-scalar, while (.)jk is a NC-vector. As a result, yijk transforms as a NC-vector on M4(R), just like yij. A single 4-component index plays no role in the transformation on M4(R).

Let’s take an example, to be clear.

Take a C-scalar a and a C-vector xi on M4. The tensor product of a and xi identifies with algebraic multiplication: a⊗xi = xi⊗a = axi = xia. A tensor of order 0 (i.e. a scalar) on M4 adds no order to a given tensor through the tensor product. What we have is a rescaling of xi by a factor a: if |a| > 1, it’s a dilatation; if |a| < 1, a contraction. In no way will we make a matrix out of such a product.

Well, the same holds in M4(R). The tensor (or matrix, whatever) product of a NC-scalar ai and a NC-vector xij on M4(R) not only is commutative, since ai identifies with the diagonal matrix (aii)i=1,2,3,4, but gives a tensor (or matrix) of the same order, namely $a_{ij}x_{jk} = a_{ii}x_{ik} = y_{ik}$ (no sum over i).
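The component formula can be checked directly: multiplying by the diagonal matrix (aii) is row-wise scaling, and the result is again a 4x4 NC-tensor of the same order. A minimal numpy sketch with arbitrary test data:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal(4)        # NC-scalar, given as a 4-vector
X = rng.standard_normal((4, 4))   # NC-vector on M4(R)

Y = np.diag(a) @ X                # matrix product with the diagonal matrix (a_ii)

# componentwise: y[i, k] = a[i] * x[i, k], i.e. a_ij x_jk = a_ii x_ik
assert np.allclose(Y, a[:, None] * X)
assert Y.shape == X.shape         # same order: the product is still a 4x4 NC-tensor
```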

 

THE NC-PRODUCT OF A NC-SCALAR WITH A NC-TENSOR OF ANY ORDER IS COMMUTATIVE AND GIVES A NC-TENSOR OF SAME ORDER.

 

The difference with the commutative situation is striking: a C-tensor Ti(1)…i(2p) of order 2p and a C-tensor Ui(1)…i(2p+1) of order 2p+1 both transform as a NC-tensor Wi(1)…i(p) of order p. If you had to transform back from a NC-tensor of order p, which would you choose: a C-tensor of order 2p or of order 2p+1? There’s a freedom there, equivalent to saying, in the commutative frame, that C-vectors xi and axi, with a ≠ 0, “belong to the same class”, or that xi “is defined up to a multiplicative factor a (or a scale a)”. In M4(R), a “scaling coefficient” is a 4-vector on M4.

 

THERE’S AN EQUIVALENCE CLASS IN M4(R) BETWEEN 2p-TENSORS AND (2p+1)-TENSORS ON M4 IN THE SENSE THAT THEY BOTH LEAD TO A p-TENSOR ON M4(R).

 

Remain careful with the sets you work in: a p-tensor on M4 is a C-tensor; on M4(R), it’s a NC-tensor.

 

We can now talk about non-commutative differentials and differential forms on M4(R). They help describe and understand local properties of bodies, motions or even frames. Global properties are described by integration theory.

 

Let Fi(xj) be a NC-scalar function of a NC-scalar variable xj on M4(R). The differential dxj of xj is a small variation around xj. So, it’s one more NC-scalar (dx)j, equivalent to the diagonal matrix (dx)jj whose non-zero components are all small variations around each of the xj = xjj. Concretely, we have a 4x4 real-valued matrix made of zeros off the diagonal and of dx1, dx2, dx3 and dx4 along the diagonal. However, d being a C-scalar operator, it becomes meaningless in M4(R), so dxj is actually not the differential of xj: what’s meaningful is the NC-scalar or C-vector (dx)j. This being said, we have, with a slight abuse of notation that should have no consequences once the context is made precise: Fi[xj + (dx)j] = Fi(xj + dxj) = Fi(xj) + (dF)i(xj). The quantity (dF)i = dFi is assumed to be a small variation of the function, of the same order as that of the variable. We’d like to express it in terms of the derivative of Fi. For that, we write the contributing terms in the same order as for the matrix product:

 

(1)               $dF_i = dx_{kj}\,\partial_{jk}F_i = [\mathrm{Tr}(dX\,\partial)]\,F_i$

 

This is the general expression for the first derivative of a NC-scalar function of a NC-vector variable on M4(R). In particular, when X is diagonal, we find:

 

(2)               $dF_i = \left(\sum_{j=1}^{4} dx_{jj}\,\partial_{jj}\right)F_i = \left(\sum_{j=1}^{4} dx_j\,\partial_j\right)F_i$

 

since only the diagonal terms $\partial_{jj}$ of the matrix $\partial_{jk} = \partial/\partial x_{jk}$ contribute. It follows that, for a NC-scalar function of a NC-scalar variable:

 

(3)               $\partial_j F_i(x_k) = \partial F_i(x_k)/\partial x_j = F'_{ji}(x_k)$

 

is no longer a NC-scalar, but a NC-vector function. This is quite easy to understand: once again, C-scalar variables and functions can be seen as “degenerate”, and so will be their derivatives at any order (i.e. as long as the function is differentiable); on the opposite side, NC-scalar functions and variables are 4-vector fields on M4. So, to each state M4j of M4 is now associated a derivative of Fi, and this is what (3) expresses: $\partial_j F_i(x_k) = \partial_j F_i(x_1,\ldots,x_4)$ is the derivative in M4j of the scalar function Fi in M4i. As we now have 4 states of M4, we find 4 states of any scalar function over M4 and 4 states for the derivative of each state of F, giving 4x4 = 16 derivative values at each point of M4 where F is differentiable.
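Commutatively, this count is just the Jacobian of a map R⁴ → R⁴. A small finite-difference sketch (the function F below is a hypothetical test case, not one from the text):

```python
import numpy as np

def F(x):
    # hypothetical NC-scalar function, i.e. commutatively a map R^4 -> R^4
    return np.array([x[0] * x[1], np.sin(x[2]), x[3] ** 2, x[0] + x[3]])

def jacobian(F, x, h=1e-6):
    # central differences: row j holds the derivatives ∂F_i/∂x_j
    J = np.zeros((4, 4))
    for j in range(4):
        e = np.zeros(4); e[j] = h
        J[j] = (F(x + e) - F(x - e)) / (2 * h)
    return J

x = np.array([0.5, -1.0, 0.3, 2.0])
J = jacobian(F, x)
assert J.shape == (4, 4)   # 4 states of F times 4 derivative states = 16 numbers
```

For instance J[0, 0] = ∂F_1/∂x_1 = x_2 and J[3, 2] = ∂F_3/∂x_4 = 2x_4 at the chosen point.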

Still more generally, we know a C-1-form on M4 is a C-scalar infinitesimal quantity:

 

(4)               $a = a_i(x_j)\,dx_i$

 

that remains invariant under coordinate transformations (a appears the same in all coordinate systems of M4). The coefficients $a_i(x_j)$ of this 1-form are not necessarily the derivatives $\partial_i f(x_j)$ of a C-scalar function $f(x_j)$. When they are, a is said to be “exact”: it’s simply the differential of f; otherwise, it’s “inexact”.

A NC-1-form on M4(R) will be a NC-scalar infinitesimal quantity:

 

(5)               $A_i = A_{ijk}(x_{lm})\,dx_{kj} = \mathrm{Tr}[A_i(X)\,dX] = \mathrm{Tr}[dX\,A_i(X)]$

 

Similarly, if $A_{ijk}(x_{lm}) = \partial_{jk}F_i(x_{lm})$, then $A_i = dF_i$ and $A_i$ will be said to be “exact”. If not, “inexact”. Assume now that X is diagonal: $X = (x_{ii})_{i=1,2,3,4}$. Then (5) gives:

 

$A_i = \sum_{j=1}^{4} A_{ijj}(x_{ll})\,dx_{jj} = \sum_{j=1}^{4} a_{ij}(x_l)\,dx_j$

 

Is it (4)? Yes, if we take into account that we now must have 4 C-1-forms, one on each M4i. As soon as we “degenerate”, Ai reduces to a single component and so do its coefficients aij(xl): that’s precisely (4) on M4.

Let’s move on. A C-2-form on M4 is a C-scalar invariant quantity:

 

(6)               $f = \tfrac{1}{2} f_{ij}(x_k)\,dx_i \wedge dx_j\ ,\quad f_{ij} = -f_{ji}$

 

When $f = da$ is the exterior derivative of a C-1-form a like (4), that is, when $f_{ij}(x_k) = \partial_i a_j(x_k) - \partial_j a_i(x_k)$, then f is closed: we have the Bianchi identities $\partial_i f_{jk} + \partial_j f_{ki} + \partial_k f_{ij} = 0$, which can also be written df = 0 independently of any basis, or $d(da) = d^2a = 0$.

A NC-2-form on M4(R) will write:

 

(7)               $F_i = \tfrac{1}{2} F_{ijklm}(x_{np})\,dx_{ml} \wedge dx_{kj}\ ,\quad F_{ijklm} = -F_{ilmjk}$

 

When $F_{ijklm} = \partial_{jk}A_{ilm} - \partial_{lm}A_{ijk}$, $F_i$ will be closed: the Bianchi identities are immediate. Taking X diagonal, (7) reduces to:

 

$F_i = \tfrac{1}{2} \sum_{j=1}^{4}\sum_{k=1}^{4} F_{ijjkk}(x_{ll})\,dx_{kk} \wedge dx_{jj} = \tfrac{1}{2} f_{ijk}(x_l)\,dx_k \wedge dx_j = -\tfrac{1}{2} f_{ijk}(x_l)\,dx_j \wedge dx_k$

 

Again, this is (6) in 4 states, with a change of sign due to our choice of ordering of the components in (7). This sign being global, it only changes the orientation of all of M4. As orientation is a choice of ours, it changes nothing in the physics of M4, only the convention we gave ourselves (if the change of sign were local, things would be completely different).

The quantity $dx_i \wedge dx_j$ is a surface element on M4, making a skew-symmetric coordinate 2-tensor $ds_{ij} = -ds_{ji}$: only 6 components out of 16, all in m² (2-forms are infinitesimal quantities of order 2). That’s a NC-vector on M4(R). This NC-vector corresponds to the 6 planes of M4: each component of $ds_{ij}$ lies in an M4-plane. Yesterday evening, I established a correspondence between this surface element on M4 and the $dx_{ij}$ on M4(R):

 

(8)               $dx_i \wedge dx_j = ds_{ij} = \tfrac{1}{2} g_{kl}(dx_{il}\,dx_{kj} - dx_{jl}\,dx_{ki}) = \tfrac{1}{2}\,\mathrm{Tr}[(dX)^2 - ({}^t dX)^2]_{ij}$

 

where $dX = (dx_{ij})_{i,j=1,2,3,4}$ and ${}^t dX = (dx_{ji})_{i,j=1,2,3,4}$ is the transpose matrix (obtained from dX by swapping rows and columns). It seems to work. On the right, we have a skew-symmetric tensor product of all the $dx_{ij}$, making a NC 2-tensor $dS_{iklj} = \tfrac{1}{2}(dx_{il}\,dx_{kj} - dx_{jl}\,dx_{ki})$ of which we take the (kl)-trace. On the left, we have a skew-symmetric tensor product of all the $dx_i = dx_{ii}$, making a C-2-tensor $ds_{ij} = dx_i \wedge dx_j$.

Notice in passing that the metric tensor gij on M4 is a constant symmetric C-2-tensor on M4 and therefore a NC-vector on M4(R), with 6 zeros.

 

We have all the basic ingredients for a generalization to n-forms. Higher orders follow the same procedure. I have also, for instance, established the equivalent of (8) for a volume element on M4; it is:

 

(9)               $dx_i \wedge dx_j \wedge dx_k = \frac{1}{3!}\, g_{lmn}[(dx_{in}\,dx_{mj} - dx_{jn}\,dx_{mi})\,dx_{kl} + (dx_{jn}\,dx_{mk} - dx_{kn}\,dx_{mj})\,dx_{il} + (dx_{kn}\,dx_{mi} - dx_{in}\,dx_{mk})\,dx_{jl}]$

 

that is, ordinary cyclic permutation. One last formula for the 4-volume element $dx_i \wedge dx_j \wedge dx_k \wedge dx_l$ and that’s all: we cannot go higher in M4.

We cannot go higher in M4(R) either, as far as NC-forms are concerned: NC 4-forms are the highest we can build. All higher-order NC-forms should be identically zero.

 

 

 

 

 

B121: NC DYNAMICS - 2

11 August 2015

Before going further with dynamics, we first need a bit more geometry. To keep the link with physics, we’re going to work in dimension d = 4. All the results established below still hold in dimension d.

We begin by noticing that, if xi (i = 1,2,3,4) is a point of M4 (endowed with the structure of an affine space-time), then xij = (xi)j is a point of M16. Indeed, we can always identify M4 with, say, the j = 4 component M44 of M16 = M41 x M42 x M43 x M44. The four M4j have the same dimension 3+1. As M16 can be rendered isomorphic to M4(R) (up to certain restrictions we will see examples of below), the set (xij)i,j=1,2,3,4 will represent a point of M4(R), endowed with an affine structure (as a 16D vector space-time). Take now xi = (0,0,0,x4): it’s equivalent to a (here time-like) scalar on M4. To this scalar will then correspond the 4-vector (x4)j = x4j of M16. We now have a two-way correspondence between M16 and M4(R). We therefore deduce that:
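The two-way correspondence between M16 and M4(R) is, at the level of raw data, just a reindexing. A minimal numpy sketch (the 0-based flattening convention I = 4i + j is an assumption of this illustration, not the author’s):

```python
import numpy as np

y = np.arange(16.0)      # a point of M16: 16 components y_I, I = 0..15
X = y.reshape(4, 4)      # the same data as a matrix (x_ij) of M4(R)

# round trip M4(R) -> M16 and the index dictionary I <-> (i, j)
assert np.allclose(X.reshape(16), y)
assert X[3, 2] == y[4 * 3 + 2]
```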

 

A NON-COMMUTATIVE SCALAR ON M4(R) IS FORMALLY EQUIVALENT TO A COMMUTATIVE 4-VECTOR (WITH REAL-VALUED COORDINATES).

 

The physical consequence of this is important. It means that there is no “scalar” in the non-commutative sense the way there is in the commutative one: every commutative scalar (C-scalar, for short) becomes one of the four components of a still-commutative 4-vector, which then sends us back to the appropriate non-commutative scalar (NC-scalar).

There is also a purely algebraic reason for this. Take two non-zero real numbers a and b: these are real-valued C-scalars. The two products ab and ba are again non-zero real numbers, so they belong to the same set as their factors. Moreover, whatever the values of a and b, you’ll always have ab = ba: the set of real numbers is commutative, and so are the vector and affine spaces built on it.

Take now two non-identically-zero real-valued NC-scalars xi and yj. Each of them is an element of R4. Since i and j are strictly positive integers, xi and yj are also functions of these indices: xi = x(i) and yj = y(j). So, we can also see them as two C-scalar functions of the indices. We will obviously keep on having xiyj = x(i)y(j) = yjxi = y(j)x(i), but we won’t have xiyj = xjyi unless x = y, i.e. unless the two C-scalar functions are identical.

As both xi and yj are NC-scalars, this property should justify the name “non-commutative”: the product of two NC-scalars makes an asymmetric matrix. While neither xiyj nor xjyi belongs to R4 (they are matrices), xi, yj and their products are all in M4(R).

 

Step 2. Take a NC-scalar xi. The 4-plet (xi)j = xij becomes a NC-4-vector on M4(R): xij is the i-th coordinate of a point on M4j.

 

A NC-VECTOR ON M4(R) IS FORMALLY EQUIVALENT TO A C-TENSOR OF ORDER 2 WITH REAL-VALUED COEFFICIENTS.

IT’S ALSO EQUIVALENT TO A REAL-VALUED C-VECTOR IN DIMENSION d² = 16.

 

Generalization is easy and we quickly get the following result:

 

ANY NC-TENSOR OF ORDER n ON M4(R) IS FORMALLY EQUIVALENT TO A C-TENSOR OF ORDER 2n OR TO A C-TENSOR OF ORDER n IN DIMENSION 16.

 

We should however be careful when carrying the correspondence between M16 and M4(R) over and trying to identify objects of the two space-times. Here is a typical example.

Let (D) be a C-line on M4. (D) has (inhomogeneous) equation aixi + b = 0, with ai and b real-valued coefficients. It’s an object of C-dimension 1.

Let now (D’) be a C-line on M16. Choosing new major indices I running from 1 to 16, the equation for (D’) can be written aIxI + b = 0. Then, identifying I with the pair of indices (ij), we would find aijxij + b = 0.

 

This is not the equation for a NC-line on M4(R).

 

The correct equation for a NC-line (D”) on M4(R) is:

 

(1)               $a_{ijk}x_{kj} + b_i = 0$

 

Reason no. 1: b is not a NC-scalar; no. 2: the matrix product of a and x is $a_{ijk}x_{kj}$, not $a_{ijk}x_{jk}$. As said above, xij = (xi)j is the i-th coordinate of a point on M4j, whereas xji = (xj)i is the j-th coordinate of the same point, but on M4i. So, there’s absolutely no reason why we should have xij = xji when i ≠ j. Commutatively, (1) doesn’t even give a 4-plet of C-lines on M16, since this 4-plet would read $a_{iJ}x_J + b_i = a_{ijk}x_{jk} + b_i = 0$.
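The difference between the two contractions is easy to exhibit numerically; a minimal einsum sketch with arbitrary test data, illustrating why (1) uses the matrix pairing $x_{kj}$:

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.standard_normal((4, 4, 4))   # generic coefficients: a_ijk != a_ikj
x = rng.standard_normal((4, 4))      # generic coordinates: x_jk != x_kj

nc_line = np.einsum('ijk,kj->i', a, x)   # matrix pairing a_ijk x_kj, as in (1)
naive   = np.einsum('ijk,jk->i', a, x)   # the "C-line" pairing a_ijk x_jk

assert not np.allclose(nc_line, naive)   # they differ for generic a and x

# they coincide precisely when a_ijk = a_ikj, as the IFF statement says:
a_sym = 0.5 * (a + a.transpose(0, 2, 1))
assert np.allclose(np.einsum('ijk,kj->i', a_sym, x),
                   np.einsum('ijk,jk->i', a_sym, x))
```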

 

A NC-LINE (D”) ON M4(R) HAS C-EQUIVALENTS IFF aijk = aikj IN (1). IN THIS CASE, (D”) CAN BE MADE EQUIVALENT TO A SET OF 4 C-LINES IN M16.

 

And still: 1) I wouldn’t try, for properties of M16 and of M4(R) are completely different and 2) it’s useless… It’s useless, because we don’t have a 16D space-time, but a 4D-space-time in 4 states: each M4i (i = 1,2,3,4) is a state of M4.

 

To compute distances or lengths, areas, volumes, etc., we need differential forms. Before talking about them, we first consider the coordinate matrix X = (xij)i,j=1,2,3,4. It’s a coordinate system on M4(R). Since it has C-dimension 16, it has NC-dimension 4. The set of real numbers R being a field of characteristic zero, the algebra M4(R) is naturally endowed with the structure of a real vector space and can then be associated with an affine space. This means that X represents a point on M4(R):

 

ANY POINT ON M4(R) HAS 16 C-COORDINATES AND 4 NC-COORDINATES.

 

We’ll continue tomorrow.

 

 

B120: NON-COMMUTATIVE DYNAMICS - 1

8 August 2015

Excellent. All ingredients I’m gonna need today are contained in B114 and the end of B116, where I wrote: “it’s light-years away from our practical considerations”. Well, I could have been a bit too pessimistic…

There’s a connection between what I’ve done in B114 and a much older work, dating from 2007, on the unification of fundamental interactions, that might have unsuspected impacts on our purpose.

Non-physicists should understand that most of the arguments used at our scales must find justifications at much lower scales. Thus, all the so-called “phenomenological theories” of physics find their justifications in quantum theory and the microscopic. So, setting the foundational problems aside, investigating the structures of the quantum is actually everything but a waste of time, and it reveals mechanisms that lead to macroscopic behaviours.

Now, back to this 2007 work. It was about building a unified model of the 4 known gauge interactions in a 16D space-time, with 12 space-like dimensions and 4 time-like ones. The guide, at the time, was the Kaluza model (without the Klein hypothesis). In this context, I showed how a system of 16 local coordinates could give back the correct number of gauge potentials and how they could couple to each other.

The connection with B116 appears when we take SU(3,1) as the isospin group. As I said, the presence of a 4-state particle sends us back to tensor coordinates xij in place of the vectors xi. As indices run from 1 to 4, there are indeed 16 coordinates.

There are two ways of building such a 16D space-time: either as the tensor product M4+⊗M4- of two 4D Minkowski space-times, or as the Euclidean product M41 x M42 x M43 x M44 of four 4D Minkowski space-times. In physics, the tensor product generally stands for non-interacting space-times, whereas the Euclidean product indicates interacting (coupled) space-times. So, still from the physical point of view, it’s mathematically equivalent to consider a 2-state space-time with independent components or a 4-state space-time with coupled components. If we take xi+ and xi- as local coordinates on M4+ and M4-, respectively, we should expect their product xi+xj- to stand for a local coordinate system on M4+⊗M4-. Now, this product is in m², not in meters. So, if we do that, we have a problem of physical units. To preserve them, we have no other solution than to introduce tensor coordinates xij (in meters) such that:

 

(1)               $x_i^{+} x_j^{-} = x_{ik}x_{kj}$

 

This gives a first indication of a 4D non-commutative space-time, as (1) can easily extend to squared distances xikxkj that no longer decompose into a product (“irreducible” tensor coordinate systems). In terms of space-times, we now consider space-times that are still 4D, but in a “non-commutative” sense, and that do not necessarily reduce to the tensor product of two commutative 4D space-times. Such space-times are vector spaces on the algebra M3,1(R) of pseudo-Euclidean 4x4 matrices with real coefficients. M3,1(R) is finite-dimensional, of dimension (3+1)² = 16. It’s precisely isomorphic to the 16D space-time above.

All these isomorphisms actually enable us to introduce the notion of non-commutative dimension without ambiguity. We are used to taking for the dimension of a space a scalar quantity d. This is a commutative definition of the dimension. So, when we say that a non-commutative space such as M3,1(R) “has dimension 16”, we actually make a correspondence between this space and a commutative space with the same number of dimensions (and therefore isomorphic to it): a space with local coordinates yI (I = 1,…,16). In mathematics, this can prove interesting. In physics, it does not, unless we find some justification for these 12 additional dimensions.

To avoid this difficulty, I prefer to introduce the notion of non-commutative dimension. I also think it’s better suited to the non-commutativity of M3,1(R). If we now say that M3,1(R) “has non-commutative dimension 3+1”, we then deduce that:

 

1 NON-COMMUTATIVE DIMENSION ≡ 4 COMMUTATIVE DIMENSIONS

 

(≡: formally equivalent to). Indeed, we have xij = (xi)j, so that going from xi to xij amounts to going from a scalar quantity to a vector one. This also holds for the dimension, which becomes a vector quantity. Generally, a vector dimension d = (d1,…,dn) is associated with a set of n vector spaces Vi, each of (commutative) dimension di. For n = 4, we recover the set of 4 Minkowski space-times M4i of the Euclidean product above. There, d1 = d2 = d3 = d4 = 3+1, and we can replace our 16D unified space-time with a non-commutative space-time M3,1(R) of non-commutative (= vector) dimension 3+1.

Physically, this has an important consequence, since a non-commutative line becomes formally equivalent to a commutative 4-volume:

 

NON-COMMUTATIVE LINE ≡ COMMUTATIVE SPACE-TIME VOLUME

 

In comparison, superstring theory proposes to replace a commutative line with a (still) commutative surface…

 

See? The physical justification of such frames, usable at all scales, lies, as usual, in relativistic quantum field theory. Even if this theory should hold only at the microscopic level, it introduces new notions that enable us to define new quantities and extend the commutative properties of macroscopic physics…

 

We can now turn back to the classical dynamics of macroscopic bodies. A trajectory in the non-commutative Euclidean space M3(R) is a function xij(tk). Indices still run from 1 to 4.

Shall I again justify this?

We use a Yang-Mills boson interacting model with gauge group SU(3,1). The particle current is a 3-tensor $p_{ijk} = \tfrac{1}{2} i\hbar(\psi_k\,\partial_i\psi_j^* - \psi_k^*\,\partial_i\psi_j)$. It’s a density. To it corresponds an energy-momentum tensor $P_{ijk} = p_{ijk}/\psi_l\psi_l^* = m v_{ijk}$, since all states represent the same particle of rest mass m. Therefore, $v_{ijk}$ is the velocity tensor in the classical sense, with $v_{ijk} = \partial x_{ij}/\partial t_k$. We have $x_{0j} = ct_j$, so that $v_{0jk} = c\,\delta_{jk}$.

Consequently, xij(tk) is no longer a commutative curve (a single time parameter), but a tensor field over a 4D Euclidean time hypervolume. So, it’s a non-commutative curve, developing in non-commutative time.

It’s easier to use the traditional definition of the surface element. Whatever the gauge group, the Lagrangian density of an interacting quantum system remains a scalar quantity, and so does the surface element ds². For SU(3,1), we thus have:

 

(2)               $ds^2 = dx_{ij}\,dx_{ji} = c^2 dt_j\,dt_j - dx_j\cdot dx_j = c^2 dt_j\,dt_j\,(1 - v_{jk}v_{jk}/c^2)$

 

with each of the dxj a 3-vector (j = 1,2,3,4) and $v_{jk} = dx_j/dt_k$ the corresponding velocity 3-vector. This makes vjk a matrix (or tensor, whatever) of 3-vectors. $dt_j\,dt_j$ is the Euclidean square of the 4-component time vector dtj. $v_{jk}v_{jk}$ is the trace (sum of diagonal terms) of the square of the matrix vjk. The metric used to raise and lower indices is the usual Minkowski metric tensor gij. (2) is a matrix generalization of $ds^2 = c^2 dt^2 - dx\cdot dx$ in commutative M4.

It follows that the free Galilean motion of a rigid body of mass m can be described through the Lagrange function:

 

(3)               $L_{kin} = \tfrac{1}{2}\, m\, v_{jk}v_{jk}$

 

It should not be difficult to convince yourself that the Lagrange equations of motion are:

 

(4)               $(\partial/\partial t_k)\,\partial L/\partial v_{jk} = \partial L/\partial x_j$

 

Applied to (3), this leads to:

 

(5)               $\partial p_{jk}/\partial t_k = 0\ ,\quad p_{jk} = m v_{jk} = m\,dx_j/dt_k$

 

For m = const, we get $\partial^2 x_j/\partial t_k \partial t_k = 0$, with general solution:

 

(6)               $x_j(t_k) = K_j/t_k t_k + \tfrac{1}{2} a_{jkl}\,t_k t_l + b_{jk}t_k + c_j\ ,\quad a_{jkk} = \mathrm{Tr}(a_j) = 0\quad (j = 1,2,3,4)$

 

where Kj, ajkl, bjk and cj are constants, with the aj traceless. As the square $t_k t_k$ is Euclidean, $K_j/t_k t_k$ is the Newtonian behaviour in Euclidean dimension 4. The three other terms describe confinement.
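That (6) solves the equation of motion can be verified numerically: 1/(t·t) is harmonic in 4 Euclidean dimensions (away from t = 0), the quadratic term contributes a_jkk = 0 by tracelessness, and the linear and constant terms vanish. A finite-difference sketch with arbitrary test constants:

```python
import numpy as np

rng = np.random.default_rng(3)
K = rng.standard_normal(4)
A = rng.standard_normal((4, 4, 4))
A = 0.5 * (A + A.transpose(0, 2, 1))           # a_jkl symmetric in (k, l)
for j in range(4):
    A[j] -= np.eye(4) * np.trace(A[j]) / 4.0   # make each a_j traceless
b = rng.standard_normal((4, 4))
c = rng.standard_normal(4)

def x(t):
    # the general solution (6)
    return (K / (t @ t)
            + 0.5 * np.einsum('jkl,k,l->j', A, t, t)
            + b @ t + c)

def laplacian(f, t, h=1e-3):
    # central-difference d²/dt_k dt_k, summed over the 4 time variables
    out = np.zeros(4)
    for k in range(4):
        e = np.zeros(4); e[k] = h
        out += (f(t + e) - 2.0 * f(t) + f(t - e)) / h**2
    return out

t = np.array([1.0, 0.5, -0.7, 2.0])   # any point away from t = 0
assert np.abs(laplacian(x, t)).max() < 1e-5
```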

Besides, a remark about this. Confinement does not seem to be a specific feature of QCD: it has been there since Maxwell. What we did was neglect the polynomial terms in the solution for Maxwell fields: we only kept the converging kernel. Confinement rather seems to be a general feature of field theory, and this explains why we should find it in the weak model as well.

Anyway, we can see in (6) that free motion is no longer uniform, as it is with a single time parameter: the Newtonian contribution makes it tend towards zero at time infinity, whereas the confinement term makes it diverge. In physical reality, there are necessarily time values for which an equilibrium between these two antagonistic contributions occurs. A compromise is found. The velocity matrix is:

 

(7)               $v_{jk}(t_l) = -2K_j t_k/(t_l t_l)^2 + a_{jkl}t_l + b_{jk}$

 

It’s clear there are time values at which this matrix vanishes before changing sign, meaning that, even free, the motion cannot go beyond a certain distance Xj, corresponding to $v_{jk}(t_l) = 0$. The acceleration is:

 

(8)               $a_{jkl}(t_m) = -2K_j(g_{kl}\,t_m t_m - 4t_k t_l)/(t_m t_m)^3 + a_{jkl}$
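Formula (7) can be cross-checked against a finite-difference derivative of (6); the same recipe applied once more checks (8). An illustrative sketch with arbitrary test constants, taking the Euclidean metric g_kl as the identity:

```python
import numpy as np

rng = np.random.default_rng(4)
K = rng.standard_normal(4)
A = rng.standard_normal((4, 4, 4))
A = 0.5 * (A + A.transpose(0, 2, 1))           # a_jkl symmetric in (k, l)
for j in range(4):
    A[j] -= np.eye(4) * np.trace(A[j]) / 4.0   # traceless, as (6) requires
b = rng.standard_normal((4, 4))

def x(t):
    # solution (6), the constant c_j dropped since it does not affect v
    return K / (t @ t) + 0.5 * np.einsum('jkl,k,l->j', A, t, t) + b @ t

def v_analytic(t):
    # formula (7): v_jk = -2 K_j t_k/(t_l t_l)^2 + a_jkl t_l + b_jk
    return (-2.0 * np.outer(K, t) / (t @ t) ** 2
            + np.einsum('jkl,l->jk', A, t) + b)

def v_numeric(t, h=1e-6):
    # central-difference v_jk = dx_j/dt_k
    V = np.zeros((4, 4))
    for k in range(4):
        e = np.zeros(4); e[k] = h
        V[:, k] = (x(t + e) - x(t - e)) / (2 * h)
    return V

t = np.array([0.8, -1.1, 0.4, 1.5])
assert np.allclose(v_analytic(t), v_numeric(t), atol=1e-5)
```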

 

But wait a minute. The situation is a bit more complicated than in the commutative case. Take bjk = 0 for simplicity. Then $v_{jk}(t_l) = 0$ happens when:

 

(9)               $a_{jkl} = 2K_j\,\delta_{kl}/(t_m t_m)^2$

 

And what if ajkl is not invertible?... Then (9) has no solution, meaning the free motion never stops. Perpetual motion, until something comes to slow the body down.

 

We’ll see the forced motion next time.

 

 

B119: SENT BACK TO (BAD) OLD QUANTUM MEASUREMENT PROBLEM...

25 July 2015

We now have what can be called without exaggeration a HUGE worry, the kind that cannot be solved in 48 hours, as we are touching the most delicate (and still controversial) point of the foundations of quantum mechanics: the definition of the wavefunction itself, through the observability problem.

I read Schrödinger and Feynman once again. It’s an excellent exercise to go back to the sources when you’re stuck. The fundamental property of measurement at microscopic scales is rather easy to state: if you voluntarily restrict yourself to the mere observation of the final impacts of particles on a screen, what you get there is an interference curve, with fringes; but as soon as you want to refine your understanding of the process, trying to determine the paths the particles have taken from their emitting source to the screen, the fringes disappear.

In other words, as long as you don’t observe particles, only the final result, these particles behave like waves; as soon as you observe their motion, they behave like corpuscles.

Most theoreticians deduced from these Young’s experiments that observation or, what amounts to the same, the presence of the observer, suffices to destroy interference, deeply modifying the behaviour of particles. Or, equivalently, that the observer himself interferes with the system in such a way that he destroys the initial interference. In what way? Nobody knows. We only talk of a “collapse” of the wavefunction, but we’re still unable to give a consistent mechanism behind it.

Many theoreticians of Schrödinger’s time did not share his opinion that the wavefunction could potentially represent any physical object in Nature, whatever its size. Some, like Bohr, Heisenberg or Born, despite being amongst the founders of quantum theory, were convinced it was nothing but a convenient mathematical (i.e. abstract) tool to calculate probabilities, without any deeper physical content. No “physical reality”. For these people, and many others after them, measurement was the only meaningful process, and the values obtained in final results the only “touchable” reality.

As for myself, I’m not fully convinced observation is the true problem: in all cases, we observe the impacts on the screen. So, we can just as well observe the interference fringes (or we wouldn’t talk about them) or the “smoothed classical” curve. The “sudden reduction” arises when we add a complementary observation “inside the box”, when we try to know which path a given particle of the beam could well have taken to reach the screen.

Put differently, as long as we stick to the final result, we don’t modify the essential nature of the object we’re experimenting on in any way; if we are more curious, things immediately reduce and we lose all the information about the physical reality of this object.

“Classical” physics asserted that the physical reality of substantial objects was strictly corpuscular: any substance was made of “corpuscles”. Waves had nothing substantial, they rather were processes between substances.

The rise of so-called “wave mechanics” deeply transformed this vision of Nature. Young’s experiments showed without ambiguity that, at least at microscopic levels, neither substantial matter nor even radiation could be said to be “corpuscles” or “waves”. On the contrary, they revealed that their true physical reality was neither: we can no longer talk of the electron as a “corpuscle of matter” if it starts to behave like a wave as soon as we don’t disturb its motion (direct observation); we can no longer talk of the photon as an “elementary electromagnetic wave (or radiation)” if it starts to behave like a corpuscle as soon as we observe it directly.

For Schrödinger and many others, the physical nature of microscopic objects now depended on what the observer did or didn’t do.

I cannot criticize this approach, as they tried to interpret as faithfully as possible what went straight against all our conceptions on objects and processes so far.

But the least we can say is that it has absolutely nothing “universal”…

I just cannot satisfy myself with a physics “depending on the observer’s will”. I’m no partisan of “hidden variables” either: violations of Bell’s inequalities have been clearly established.

On another side, although he left me with the feeling that his mind about it had evolved from his fundamental 1926 work to the 1950s, Schrödinger seemed to be deeply convinced that his concept of a “wavefunction” did not apply to the microscopic only, but to all scales. He was convinced that it did have a physical reality. But, at the same time, he was perfectly conscious from the start that it was highly unstable. So unstable, actually, that the smallest disturbance, or the first measurement on it, not only modified it, but destroyed it! In his lectures, he clearly says the wavefunction no longer exists after a measurement: it simply disappears and is replaced with a new one just after the measurement. He insists on the fact that it’s not a question of time evolution, that time has nothing to do with it, and that the only role time can play there is to make the situation even worse in the future!

Schrödinger was strongly influenced by the works on statistical physics of the second half of the 19th century, and especially by Gibbs. His fundamental equation of wave mechanics shows it: it has the structure of a diffusion equation with a complex diffusion coefficient. Many attempts, including mine, have been made to derive this equation formally. None of them is fully convincing so far. Contemporary statistical physics managed to explain why the transition from “classical” to “quantum” had to go through exponentiation, basing its argument on the “partition function” of “classical” statistical physics. But it still remains to explain where the amplitude of the quantum signal can possibly derive from…

We only say it is a “probability amplitude” because experimental results show it: it is purely heuristic. We still haven’t got any physical mechanism behind it to complete the transition on the phase.

Anyway, the whole present construction sounds anything but consistent. Take the problem of the previous bidouille. Whatever the objects now, even particles, we start with interference. This presupposes that we do not directly observe the system: we only observe the results it gives. Suppose we now observe it directly. The interference term should vanish. However, we now have a 3-body system: the first two bodies + the observer, right? They are all assumed to interfere, according to the principles of wave mechanics and quantum measurement theory. So, this gives us 3 wavefunctions (one more, the observer’s). That’s one more amplitude a3 and one more phase θ3 to combine with a12 and θ12. We find similar formulas for the resulting amplitude and phase. Yet, this should give in the end a1² + a2², since interference is destroyed.

What should honestly justify that both a3 and θ3 take such “suitable” values, as long as we observe the system, that the result is the vanishing of the interference between ψ1 and ψ2?...

Okay. Forget about a possible observer’s wavefunction acting. Then the measurement process should be such that, all along it, the phase shift θ1 - θ2 becomes equal to an odd multiple of π/2, namely (2n+1)π/2, n ∈ Z, everywhere inside the system. Again, why? How could observation modify the dynamics of the phase shift? In what way?
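To make the amplitude algebra concrete, here is a minimal numerical sketch (the function name and values are mine, assuming only the standard two-wave intensity formula |a1·e^(iθ1) + a2·e^(iθ2)|² = a1² + a2² + 2·a1·a2·cos(θ1 - θ2)):

```python
import numpy as np

def intensity(a1, theta1, a2, theta2):
    """Resulting intensity of two interfering waves:
    a1^2 + a2^2 + 2*a1*a2*cos(theta1 - theta2)."""
    return abs(a1 * np.exp(1j * theta1) + a2 * np.exp(1j * theta2)) ** 2

# Generic phase shift: the interference (cross) term is present.
print(intensity(1.0, 0.0, 1.0, 0.0))        # ≈ 4.0, fully constructive
# The cross term vanishes only when theta1 - theta2 = (2n+1)*pi/2:
print(intensity(1.0, 0.0, 1.0, np.pi / 2))  # ≈ 2.0 = a1² + a2²
```

This is exactly the point of the objection: nothing in the measurement process explains why the phase shift should drift to such special values.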

You won’t have lost sight of the fact that the wavefunction was defined as a probabilistic distribution in Euclidean space along time: ψ = ψ(x,t). So, it remains a signal under its conventional form. This was justified by the fact that all “useful” observations happen in ordinary space and evolve in ordinary time: what might or might not happen outside this frame is normally unreachable by the observer, and therefore considered “useless” or “meaningless”.

Quantum mechanics was conceived as a, if not the, physical theory of measurement and observation. The only quantities that matter are called “observables”. And when people extended its principles to space-time relativity, they agreed that the Galilean concept of Schrödinger’s wavefunction could not hold as such any longer and had to be modified to satisfy the transformation properties of the larger Lorentz rotation group and, above all, the finiteness of the speed of light: the “wavefunction” thus became more of a “state function” or some “field operator” acting on population states (or, equivalently, energy levels).

Somewhat ironically, Galilean wave mechanics derived from Planck’s work on oscillators, yet could not reach the goal of properly describing a system of oscillators confined in a box once time-relativistic effects become non-negligible…

Besides, Schrödinger remained reluctant to believe in a possible “collapse” of his wavefunction into discrete values (“quantum jumps”), even though he defended the results obtained by Planck…

To end this discussion, let us recall that Prigogine, long after Schrödinger, based his own arguments on deterministic chaos and on the possible transition from “classical” to “quantum” (and back) through chaos, giving much finer explanations of the structural instability of the wavefunction as an essentially local dynamical object.

We now represent ψ(x,t) not as a “wavefunction” or a “state function”, but rather as a trajectory in some “wave space”. However, we have merely displaced the difficulty: does this “wave space” have any physical reality, or is it merely one more mathematical tool?

 

Since de Broglie suggested associating a wave with any corpuscle, I now wonder why Schrödinger did not try to change frames and apply the principles of statistical physics to the new one, even if the final results are to be brought back to conventional 3-space. Just to see. Instead of that, he applied those principles almost “bluntly” to quantum waves, while staying in E3.

Let’s instead consider a wave space. This is a functional space over E3 and the real line R (for time). A “local coordinate” on this wave space is a pair [ψ(x,t), ψ*(x,t)], since quantum waves have to be complex-valued (as Feynman pointed out in his lectures, in opposition to the situation in classical physics, real-valued waves are not sufficient in quantum mechanics, we also need their imaginary parts, just as for the refraction index, whose imaginary component is needed to compute the reflected part). Consequently, any physical field f(x,t) on E3xR will give way to a “superfield” F[ψ(x,t), ψ*(x,t)] on the wave space. Such a “superfield” (which has nothing to do with supersymmetry, by the way) is clearly a functional over E3xR. Physically, it represents the transition from “corpuscular” (or “point-like”) to “wavy”.

There is no apparent objection to applying the principles of statistical physics to waves ψ(x,t). We just have to be careful about the dynamics involved: statistical physics was built for substantial media, made of corpuscles. Waves do not collide, they interfere. Precisely. So, instead of, say, a gas made of N corpuscles randomly colliding, we rather have a non-substantial medium made of N waves randomly interfering. These waves do not need to be “wavefunctions” or “wavepackets”: just as the points x of 3-space are elementary, the waves ψ serving as coordinates in the wave space should be taken as elementary as possible, i.e. as monochromatic plane waves. Thus, any “function” F[ψ(x,t), ψ*(x,t)] of these basic waves will be able to give more complex waves, such as polychromatic ones, wavepackets (compact waves), etc., according to the shape of F.
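As an illustration (all names are mine, the dispersion relation ω = k is an arbitrary assumption, and everything is taken in 1D for simplicity), elementary plane-wave coordinates and the simplest linear “superfield” building a polychromatic wave from them could be sketched as:

```python
import numpy as np

def plane_wave(k, w, a=1.0):
    # Elementary "coordinate" of the wave space: a monochromatic plane
    # wave phi(x, t) = a * exp(i*(k*x - w*t)), here in one dimension.
    return lambda x, t: a * np.exp(1j * (k * x - w * t))

def superfield(coeffs, waves):
    # A "superfield" F[phi, phi*]: here the simplest functional, a linear
    # combination of elementary waves, which already yields a
    # polychromatic wave (a crude wavepacket).
    return lambda x, t: sum(c * phi(x, t) for c, phi in zip(coeffs, waves))

ks = np.linspace(0.8, 1.2, 5)            # 5 nearby wavenumbers
waves = [plane_wave(k, k) for k in ks]   # dispersion w = k: an assumption
packet = superfield(np.ones(5) / 5, waves)
print(abs(packet(0.0, 0.0)))             # ≈ 1.0: all modes in phase there
```

Far from the origin, the five modes dephase and the amplitude drops: the “shape of F” is indeed what turns elementary coordinates into a compact wave.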

A system made of N free corpuscles in ordinary 3-space has 3N degrees of freedom; a system made of N free waves in wave space will have 2N degrees of freedom in that space (but obviously 2N infinities of them in E3xR, indicating that we are now dealing with continuous, no longer discrete, objects).

We can even say more: we can say that (ψ,ψ*) is the location of some discrete object in wave space (the equivalent of the corpuscle in E3), while [ψ(x,t), ψ*(x,t)] represents the corresponding continuous object in E3xR.

By changing frames, leaving E3 for a frame better adapted to waves in E3, we have “discretized” waves without performing any special physical process. Each wave can now be viewed there as an isolated entity, whereas it was seen as a continuous process in E3.

 

Can we solve this way the measurement problem and the “collapse” of the “wavefunction”?

Let us rename our local coordinates in wave space (φ,φ*). A “wavefunction” or “probabilistic wavepacket” in E3xR can be built in wave space as some combination ψ[φ(x,t), φ*(x,t)]. If that combination is linear, we have a superposition of monochromatic plane waves. We can even build it as ψ[φ(x,t), φ*(x,t),x,t]: such relations are local on E3xR. But let us first restrict ourselves to global ones. ψ[φ(x,t), φ*(x,t)] is the wavefunction of a system we don’t directly observe. As soon as we do, ψ will “degenerate”. What’s interesting with (φ,φ*) is that they always correspond to perfectly determined states with finite energy and momentum in E3xR. Should our measurement give us such a state, then ψ[φ(x,t), φ*(x,t)] should reduce to the corresponding “wavy coordinate” [φ(x,t), φ*(x,t)]. And this corresponds to ψ = δ, the Dirac distribution. More precisely, we should have:

 

ψ[φ(x,t), φ*(x,t)] = δ[φ(x,t) - φ0(x,t), φ*(x,t) - φ0*(x,t)]

 

where [φ0(x,t), φ0*(x,t)] is what we obtain.

We see better what may happen with local relations. Assume the result of the measurement is obtained at t = 0. Then, at all t < 0, ψ[φ(x,t), φ*(x,t),x,t] is some physical state we don’t observe anyway. At t = 0, this physical state is reduced to δ[φ(x,0) - φ0(x,0), φ*(x,0) - φ0*(x,0),x,0] = ψ[φ(x,0), φ*(x,0),x,0], and it remains like this until a new measurement is made.

Well, I don’t know whether this is a possible explanation or even a solution, but what I can see for the time being is that we no longer have any discontinuity in the measurement process: in Schrödinger’s (et al.) interpretation (in E3xR), the discontinuity was on the wavefunctions [ψ(x,t), ψ*(x,t)]. In wave space, we have no discontinuity on [φ(x,t), φ*(x,t)] at all, and ψ[φ(x,0), φ*(x,0),x,0] has no reason to show discontinuities, unless it has some very special behaviour in E3xR. The transition is rather continuous. As φ(x,t) is of the form a exp[±i(k·x - ωt)] with constant amplitude a (the most basic waves!), φ(x,0) is perfectly regular.
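A toy sketch of this picture (everything here is illustrative: a finite, discrete set of modes stands in for the wave space, and arbitrary Born-like weights stand in for ψ): the measurement replaces a distribution over the modes by a delta concentrated on the obtained mode, while the selected mode itself remains a perfectly regular plane wave in ordinary space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete stand-in for the wave space: a set of modes (k, w).
modes = [(k, k) for k in np.linspace(0.5, 1.5, 11)]

# Before measurement: psi is a distribution over the modes.
weights = rng.random(len(modes))
weights /= weights.sum()

# Measurement at t = 0: a mode phi0 is obtained; in wave space the
# distribution collapses onto a (Kronecker) delta on that mode...
obtained = rng.choice(len(modes), p=weights)
collapsed = np.zeros(len(modes))
collapsed[obtained] = 1.0

# ...while the obtained wave phi0(x, 0) = exp(i*k0*x) shows no
# discontinuity whatsoever in ordinary space: |phi0| = 1 everywhere.
k0, w0 = modes[obtained]
x = np.linspace(-5.0, 5.0, 101)
phi0 = np.exp(1j * k0 * x)
print(collapsed.sum(), np.allclose(abs(phi0), 1.0))  # 1.0 True
```

The discontinuity lives entirely in the distribution over wave space; the wave coordinates themselves stay continuous, which is the point made above.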

That’s what I see for the time being: still better than nothing…

 

As for interference, I’ll look into it next time.

 

 
