doclabidouille

B118: WHEN APPLYING WAVE INTERFERENCE TO BIOLOGY LEADS TO NEW SURPRISES...

22/07/2015

I don’t know if the drawing (3) in the last bidouille is visible; it’s not on this computer. Yet it has been correctly converted, as I could check last time. Should anyone have visualization problems, please let me know and I’ll ask the webmaster for further info.

Searching for some « nice » properties of wavepackets that could show possible applications to biology, I did find interesting results… but I also raised one more rabbit…

Just can’t believe this…

Okay. Let’s review some general properties first.

According to one of the basic postulates of quantum mechanics (it’s still a postulate…), the corpuscular energy of a given physical system equals the energy of its corresponding wavepacket:

 

(1)               mc²/[1 − v²(t)/c²]^1/2 = hf(t)

 

where m is the mass of the corpuscle at rest and f(t) the frequency of its wavepacket. This equality extends to the momentum so that:

 

(2)               mvⁱ(t)/[1 − v²(t)/c²]^1/2 = ħkⁱ(t)  ,  vⁱ(t) = [c , v(t)]

 

It will be enough to consider (1). The frequency of the signal in the reference frame at rest (proper frame) of the particle is:

 

(3)               f₀ = mc²/h

 

Unless m varies in time, f₀ is a constant. Still more generally, if θ(x,t) is the phase angle of the wavepacket ψ(x,t) = a(x,t)exp[iθ(x,t)], then kᵢ(x,t) = ∂θ(x,t)/∂xⁱ and f(x,t) = (1/2π)∂θ(x,t)/∂t is its frequency at point x, time t. Using (1), it can always be associated with a velocity field v(x,t). Remember v depends only on time for “perfect” rigid bodies. Any real body remains subject to deformations, however small, so that different parts of the body may not move at exactly the same speed.

Take now two cellular wavepackets ψ₁(x,t) = a₁(x,t)exp[iθ₁(x,t)] and ψ₂(x,t) = a₂(x,t)exp[iθ₂(x,t)] associated with two biological cells. It has been shown in previous bidouilles that there’s no reason why such wavepackets should not exist, or would vanish for some “suitable” reason, as they result from constructive interferences of less complex wavepackets (namely, those of proteins). When the two cells interact, their wavepackets interfere. We’re interested in this interference, to see what may or may not happen. We then have a resulting wavepacket:

 

(4)               ψ = a exp(iθ) = ψ₁ + ψ₂ = a₁exp(iθ₁) + a₂exp(iθ₂)

 

Expressing the amplitude a from the amplitudes of the two initial wavepackets and their phase shift is easy:

 

(5)               a² = a₁² + a₂² + 2a₁a₂cos(θ₁ − θ₂)

 

Important detail I forgot to mention: all amplitudes and frequencies are assumed non-negative, as we’re dealing with matter.

Expressing the resulting phase from the initial ones requires more mathematics. It turns out to be convenient to write this relation with both the half-sum and the half-difference of the phases. We first use the properties of the exponential function to put (4) in the form:

 

ψ = a₁exp[i(θ₁ + θ₂)/2]exp[i(θ₁ − θ₂)/2] + a₂exp[i(θ₁ + θ₂)/2]exp[−i(θ₁ − θ₂)/2]

= exp[i(θ₁ + θ₂)/2]{a₁exp[i(θ₁ − θ₂)/2] + a₂exp[−i(θ₁ − θ₂)/2]}

= {cos[(θ₁ + θ₂)/2] + i sin[(θ₁ + θ₂)/2]}{(a₁ + a₂)cos[(θ₁ − θ₂)/2] + i(a₁ − a₂)sin[(θ₁ − θ₂)/2]}

= (a₁ + a₂)cos[(θ₁ + θ₂)/2]cos[(θ₁ − θ₂)/2]{1 − a₁₂tan[(θ₁ + θ₂)/2]tan[(θ₁ − θ₂)/2]} +

+ i(a₁ + a₂)cos[(θ₁ + θ₂)/2]cos[(θ₁ − θ₂)/2]{tan[(θ₁ + θ₂)/2] + a₁₂tan[(θ₁ − θ₂)/2]}

 

From which we deduce:

 

(6)               tan(θ) = [tan(+) + a₁₂tan(−)]/[1 − a₁₂tan(+)tan(−)]

 

where a₁₂ = (a₁ − a₂)/(a₁ + a₂), and tan(+) and tan(−) are respectively short for tan[(θ₁ + θ₂)/2] and tan[(θ₁ − θ₂)/2].

The process is nothing new: it’s commonly used in modulation. Formula (6) resembles tan(a+b). It does reduce to the tangent of a sum of phase angles when a₁₂ is unity (i.e. when a₂ = 0).
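Since (5) and (6) are exact trigonometric identities, they can be spot-checked numerically. Here is a minimal Python sketch, with purely hypothetical amplitude and phase values:

```python
# Minimal numerical check of (5) and (6), with arbitrary (hypothetical) values.
import numpy as np

a1, a2 = 1.3, 0.7          # amplitudes (non-negative, as assumed in the text)
q1, q2 = 0.9, 2.4          # phase angles theta_1, theta_2 (radians)

# Direct superposition (4)
y = a1 * np.exp(1j * q1) + a2 * np.exp(1j * q2)

# Formula (5): resulting squared amplitude
a_sq = a1**2 + a2**2 + 2 * a1 * a2 * np.cos(q1 - q2)
print(np.isclose(abs(y)**2, a_sq))             # True

# Formula (6): resulting phase from half-sum and half-difference
a12 = (a1 - a2) / (a1 + a2)
tp, tm = np.tan((q1 + q2) / 2), np.tan((q1 - q2) / 2)
tan_q = (tp + a12 * tm) / (1 - a12 * tp * tm)
print(np.isclose(np.tan(np.angle(y)), tan_q))  # True
```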

In the general case, all phases and amplitudes depend on time and space variables. Never mind: we need to find the expression for the resulting frequency f. So, we have no other choice but to differentiate (6) with respect to time. Easy, but quite painstaking [noting that (6) just says θ = ½(θ₁ + θ₂) + arctan[a₁₂tan(−)] makes the differentiation shorter]. The result is:

 

(7)               f = ½(f₁ + f₂) + ½(f₁ − f₂)a₁₂[1 + tan²(−)]/[1 + a₁₂²tan²(−)] + a₁₂′tan(−)/[1 + a₁₂²tan²(−)]

 

with the prime standing for time differentiation.

Right. We’ll be back to formula (7) soon, but first have a look at (5). As −1 ≤ cos(θ₁ − θ₂) ≤ +1, we have (a₁ − a₂)² ≤ a² ≤ (a₁ + a₂)². The resulting amplitude is therefore minimum each time the phase shift θ₁ − θ₂ is an odd integer multiple of π:

 

WHEN THE TWO CELLULAR WAVEPACKETS HAVE OPPOSITE PHASES, THE RESULTING WAVEPACKET IS THE SMALLEST POSSIBLE.

 

It can even be zero if a₁ = a₂! We have here a simple possible explanation of the wavepacket reduction at large scales, requiring no arguments about dissipation. Actually, this is somewhat similar to spin opposition in Fermi pairs: the resulting spin is zero. Here, the idea is the same.

Conversely, a will be maximum each time θ₁ − θ₂ is an even integer multiple of π:

 

WHEN THE TWO CELLULAR WAVEPACKETS ARE IN PHASE, THE RESULTING WAVEPACKET IS THE HIGHEST POSSIBLE.

 

To reuse the spin analogy, it’s as if the two spins pointed in the same direction. So, it wouldn’t be Fermi anymore, but Bose. The square of the resulting amplitude is then even greater than the sum of the squares of the initial amplitudes.

In between, each time θ₁ − θ₂ is an odd integer multiple of π/2, there is no interference:

 

TWO WAVEPACKETS WITH PHASE SHIFT θ₁ − θ₂ = (2n+1)π/2, n ∈ ℤ, DO NOT INTERFERE. THE RESULTING AMPLITUDE IS JUST a² = a₁² + a₂².

 

It’s still greater than each of the initial amplitudes, but no greater than their sum. We do have an amplification, but not the best one.

We now turn to (7). When θ₁ − θ₂ is constant, the two wave 4-vectors are equal: k₁ⁱ = k₂ⁱ. So, the two frequencies are equal, f₁ = f₂, and the second contribution in (7) vanishes. If, simultaneously, a₁₂ is constant, the third contribution vanishes as well and the resulting frequency reduces to the mean frequency f_mean = ½(f₁ + f₂).

A surprise occurs as soon as θ₁ − θ₂ varies and a₁₂ is not zero (i.e. a₁ and a₂ are different), even if the initial amplitudes are constant. Then (7) reduces to:

 

(8)               f = ½(f₁ + f₂) + ½(f₁ − f₂)a₁₂[1 + tan²(−)]/[1 + a₁₂²tan²(−)]

 

which is not positive definite! As a consequence, we can find f = 0 and even f < 0! f = 0 happens for:

 

(9)               (f₁ + f₂)/(f₁ − f₂) = −a₁₂[1 + tan²(−)]/[1 + a₁₂²tan²(−)]

 

Can we have this everywhere, anytime? Yes, since f₁ and f₂ vary. So:

 

WHEN (9) OCCURS, THE RESULTING FREQUENCY MAY BE GLOBALLY ZERO.

 

Yet, we’re dealing with matter, with positive amplitudes and frequencies. The point is not really to find a zero frequency. The point lies in the fact that, if we go back to (1), it sends us back to a zero mass… Yet, absolutely no mass has been destroyed. We only find a wavepacket whose frequency corresponds to no matter… or the quantum postulate is wrong.

Up to today, this is not the case: this de Broglie postulate has been verified.
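To make the sign behaviour of (8) concrete, here is a small numerical illustration (all values hypothetical): two constant, slightly unequal amplitudes and two different constant frequencies. Near phase opposition, the instantaneous frequency of the resulting packet dives far below zero:

```python
# Numerical illustration of (8), with hypothetical values. For these numbers
# the analytic minimum is f1 - (f2 - f1)*a2/(a1 - a2) = -17.
import numpy as np

a1, a2 = 1.0, 0.9          # constant, slightly unequal amplitudes
f1, f2 = 1.0, 3.0          # constant frequencies (arbitrary units)

t = np.linspace(0.0, 1.0, 20_001)
y = a1 * np.exp(2j * np.pi * f1 * t) + a2 * np.exp(2j * np.pi * f2 * t)

phase = np.unwrap(np.angle(y))                 # continuous resulting phase
f_inst = np.gradient(phase, t) / (2 * np.pi)   # f = (1/2pi) d(phase)/dt

print(f_inst.min())   # about -17: far below zero, as (8) allows
```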

Anyway, we are led to the same result (f zero or even negative) in the general situation, where all data are variable. Some transformations show that f = 0 gives the following Riccati equation for b₁₂ = 1 + a₁₂tan(−):

 

(10)           b₁₂′ + f_mean b₁₂² − f₁b₁₂ + f₁ = 0

 

Even if this equation cannot be solved through quadratures, there’s no reason why it should only have trivial solutions. Besides, b₁₂ = const is a solution iff it is a root of f_mean b₁₂² − f₁b₁₂ + f₁ = 0, which only involves a₁₂tan(−) = const.
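As a sketch, (10) can be integrated numerically. Assuming, purely for illustration, constant f₁ and f₂: for positive frequencies the quadratic f_mean b₁₂² − f₁b₁₂ + f₁ has no real roots, so there is no constant solution; b₁₂(t) is genuinely time-dependent and escapes to −∞ in finite time, mirroring the divergence of tan(−):

```python
# Numerical integration of the Riccati equation (10), assuming (hypothetically)
# constant f1 and f2.
import numpy as np
from scipy.integrate import solve_ivp

f1, f2 = 1.0, 3.0
f_mean = 0.5 * (f1 + f2)

def riccati(t, b):
    # (10) rearranged: b12' = -f_mean*b12^2 + f1*b12 - f1
    return -f_mean * b**2 + f1 * b - f1

sol = solve_ivp(riccati, (0.0, 1.2), [0.5], max_step=0.01)
print(sol.y[0][0], sol.y[0][-1])   # from 0.5 down to about -1.6 at t = 1.2
```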

 

THE RESULTING FREQUENCY MAY BE ZERO OR EVEN NEGATIVE IN THE GENERAL CASE, WHERE AMPLITUDES AND PHASES VARY.

 

In the “least bad” situation, what could a resulting wavepacket with non-zero amplitude but zero frequency correspond to? It’s generated by matter, through matter interaction, and no longer corresponds to any material pair!!! 8(((((

Notice this remains true for the wave 4-vector…

 

Well, the only explanation I found so far is:

 

WHAT WE GET IS A PHYSICAL ENTITY (A WAVEPACKET) THAT NO LONGER REFERS BACK TO ANY MATERIAL SUPPORT…

 

The thing becomes even worse when f < 0: then the resulting wavepacket would refer back to an antimaterial support!!!

There’s no salvation from thermodynamics: should we take phase angles as the ratios of thermal energies to temperatures, the results would be exactly the same… we would only replace the mechanical frequency with the thermal one, that’s all.

 

Are we opening one more new door or is it just a particular property of two-cell interaction?

 

Let’s have a look at the N-cell interaction. The resulting amplitude satisfies:

 

(11)           a² = Σ_{n=1}^N aₙ² + 2Σ_{1≤n<p≤N} aₙaₚcos(θₙ − θₚ)

 

For the resulting phase angle, we have:

 

ψ = a exp(iθ) = Σ_{n=1}^N ψₙ = Σ_{n=1}^N aₙexp(iθₙ)

 

giving:

 

(12)           tan(θ) = [Σ_{n=1}^N aₙsin(θₙ)]/[Σ_{p=1}^N aₚcos(θₚ)]

 

Thus:

 

D²[1 + tan²(θ)]f = Σ_{n=1}^N Σ_{p=1}^N {[aₙfₙcos(θₙ) + aₙ′sin(θₙ)]aₚcos(θₚ) + [aₚfₚsin(θₚ) − aₚ′cos(θₚ)]aₙsin(θₙ)}

= Σ_{n=1}^N Σ_{p=1}^N {aₙaₚ[fₙcos(θₙ)cos(θₚ) + fₚsin(θₙ)sin(θₚ)] + (aₙaₚ)′sin(θₙ)cos(θₚ)}

= Σ_{n=1}^N Σ_{p=1}^N {aₙaₚ[½(fₙ + fₚ)cos(θₙ)cos(θₚ) + ½(fₙ − fₚ)cos(θₙ)cos(θₚ) + ½(fₙ + fₚ)sin(θₙ)sin(θₚ) − ½(fₙ − fₚ)sin(θₙ)sin(θₚ)] + (aₙaₚ)′sin(θₙ)cos(θₚ)}

= Σ_{n=1}^N Σ_{p=1}^N {aₙaₚ[½(fₙ + fₚ)cos(θₙ − θₚ) + ½(fₙ − fₚ)cos(θₙ + θₚ)] + (aₙaₚ)′sin(θₙ)cos(θₚ)}

with:

D = Σ_{p=1}^N aₚcos(θₚ)  ,  D² = Σ_{n=1}^N Σ_{p=1}^N aₙcos(θₙ)aₚcos(θₚ)

D²[1 + tan²(θ)] = D² + [Σ_{n=1}^N aₙsin(θₙ)]² = Σ_{n=1}^N Σ_{p=1}^N aₙaₚcos(θₙ − θₚ)
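Both (12) and the last identity above are exact, so a random numerical spot-check is cheap (the amplitudes and phases below are arbitrary):

```python
# Numerical spot-check of (12) and of D²[1 + tan²(theta)] = sum a_n a_p cos(theta_n - theta_p).
import numpy as np

rng = np.random.default_rng(0)
N = 5
a = rng.uniform(0.1, 2.0, N)        # hypothetical amplitudes a_n
q = rng.uniform(0.0, 2 * np.pi, N)  # hypothetical phases theta_n

y = np.sum(a * np.exp(1j * q))
tan_theta = np.sum(a * np.sin(q)) / np.sum(a * np.cos(q))   # formula (12)
print(np.isclose(np.tan(np.angle(y)), tan_theta))           # True

D = np.sum(a * np.cos(q))
lhs = D**2 * (1 + tan_theta**2)
rhs = np.sum(np.outer(a, a) * np.cos(q[:, None] - q[None, :]))
print(np.isclose(lhs, rhs))                                 # True
```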

 

Set f = 0. Even for all aₙ constant, we still find:

 

(13)     Σ_{n=1}^N Σ_{p=1}^N aₙaₚ[½(fₙ + fₚ)cos(θₙ − θₚ) + ½(fₙ − fₚ)cos(θₙ + θₚ)] =

= Σ_{n=1}^N aₙ²fₙ + Σ_{n≠p} aₙaₚ[½(fₙ + fₚ)cos(θₙ − θₚ) + ½(fₙ − fₚ)cos(θₙ + θₚ)] = 0

 

and we still have non-trivial solutions because of the frequency shifts fₙ − fₚ.

So, it’s not a particular property of two cells.

And it’s still something different from an energy-free soliton, for we would need a gyroscopic term in the Lagrangian density of the wavepacket, one that would depend on it and couple to its space-time derivatives.

 

No. Here we have “something” we can clearly identify as a physical entity, generated by cell interaction, but “free of matter” (f = 0) or able to deliver a negative energy (f < 0). Now, it does exist, since its amplitude a is zero only when a₁ = a₂ and θ₁ − θ₂ = (2n+1)π, a very specific situation.

 

 

B117: A CONTROL COMMAND CHAIN FOR THE CNS

17/07/2015

We’re at last back to neurobio, as I think I finally understood how the brain works. If what I’m going to talk about today is correct, then the central nervous system works in a completely different way from Turing machines, and the two just cannot be compared.

My central problem for long was: “how the hell can neuron networks work, according to what biologists teach us?” There was nothing coherent.

Back one more time. Electrical synapses are easy to understand: the nervous signal can propagate two-way, down or back up, and there’s a 100% chance it’s transmitted from one neuron to the next. Chemical synapses are the exact opposite: the nervous signal can only propagate one-way; it vanishes in the presynapse to the benefit of a scattering wave; neurotransmitters may or may not reach their receptors; and anyway, cherry on the cake, should the incoming resultant be lower than the threshold, leaving the receiving neuron silent, this last one can always self-activate by opening its Ca2+ channels (oscillation mode).

Question: what’s the need for a transmission, then?

If you transmit a signal, it’s usually made to be received…

If the network is so made that, whatever the information carried through it, a given neuron can “decide” to remain silent or activate, then there’s no need for a collective participation.

That’s the reason why I didn’t go further than the neuron function (B36). Besides, still according to biologists themselves, the cortex looks more like a “jungle of connections” than anything “structured”, in the sense of “dedicated”, which does not prevent specialized areas for as much.

Now, it was collectively agreed that chemical synapses were a significant evolution of nervous systems. So, there should be a much more thorough reason for that. Indeed, why would it be a “significant evolution” if the causal link was broken and if there was no dedicated circuitry at all at the global scale?

This is all absurd if we try to reason in terms of time sequences or even recursivity, and it confirms that the brain definitely does not work as a Turing machine, i.e.:

 

(1)               {set of given initial data} -> [convenient program] -> {set of results}

 

Right on the contrary, it works this way (as shown by bioelectrical data):

 

(2)               {set of results or goals to reach or satisfy} -> [proper graphs] <- {set of needs}

 

In (1), results are unknown, or programs would be useless. In (2), needs and goals are known in advance and the system has to find paths in its neuron graph to satisfy these goals. For a given goal to reach, there is not a unique graph: many possibilities exist that lead to the same result.

Apart from mathematical calculus (the results of which are not known in advance), all more “concrete” goals are based on the principle according to which the system must remain homeostatic. So, it’s all about control-command, a discipline I was not so bad at.

Let’s take an example: thirst.

The need is rehydration, as thirst comes from a lack of water.

The goal is to get back to normal hydration: that’s the result. Our system must remain homeostatic (i.e. regulated) with respect to water.

To satisfy this need and reach this goal, we can draw a control-command chain:

 

(3)               [control-command loop diagram: N -> comparator -> E -> G -> C -> P -> R, with R fed back to the comparator]

(N = need, E = error, G = graph, C = command, P = process, R = result)

 

The loop is generally not linear. G and P are operators. We have:

 

(4)               E = N – R

(5)               C = G.E

(6)               R = P.C

 

As for any other control-command system, we must find C = 0 as soon as E = 0, that is, when the goal is reached. The graph operator G acts as long as E > 0, i.e. the need remains greater than the result. As said above, G and P aren’t linear in general. The feedback on R guarantees regulation, so that the whole chain is homeostatic: when the system feels a biological need, the neuron graph is activated to control-command the corresponding process until the feeling vanishes. I’m thirsty (E = N), I drink (succession of orders and motions, E = N – R, R increases, E decreases) until I have enough (E = 0).
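As a sketch only, the loop (4)-(6) can be simulated in discrete time with toy choices: a scalar proportional “graph” gain G and a lagging process P. The numbers here are assumptions for illustration, not biology:

```python
# Toy discrete-time simulation of the regulation loop (4)-(6):
# E = N - R, C = G.E, R = P.C; the loop drives E to 0 (homeostasis).

N = 1.0          # the need (e.g. "rehydrate")
G = 0.8          # graph operator: proportional command gain (hypothetical)
R = 0.0          # result, starts at zero
history = []

for step in range(50):
    E = N - R            # (4) error = need minus result
    C = G * E            # (5) command issued by the neuron graph
    R = R + 0.5 * C      # (6) the process turns command into result, with lag
    history.append(E)

print(history[0], history[-1])   # E decays from 1.0 toward 0: goal reached
```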

In computer science, programming is based on two fundamental principles that have to be satisfied: proof of correctness (proof that the program does compute the required function) and proof of termination (proof that the program does not loop forever).

Here, these two principles remain (whatever the paths in the neuron graph, the graph operator must properly command the process so that the goal is reached within a finite time). In addition, we have the general homeostatic condition on the system (it must be regulated, so that a decrease in something is compensated for by an intake from the surrounding environment, and a saturation in something is reduced by a decrease in the corresponding substances).

There are different yet equivalent ways of solving (4-6), depending on which quantity we’re interested in. We can, for instance, solve it for the error E. Then, we get:

 

(7)               E + P.(G.E) = (Id + P.G).E = N

 

A priori, there’s absolutely no reason why the graph and the process ops should commute. If we solve for the command C, we get:

 

(8)               (Id + G.P).C = G.N

 

And if we solve for the result R:

 

(9)               (Id + P.G).R = P.G.N

 

If the operator on the left is invertible, we can express R as a function of N:

 

(10)           R = (Id + P.G)⁻¹.(P.G).N

 

and the final condition R = N can only be reached in finite time iff (if and only if) P.G vanishes. Since, in practice, G induces P (motor system), this normally reduces to G = 0 (the solicited graph turns silent) at some time t_end, the chain having started at time t = 0.
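Purely as an illustration of the operator algebra (7)-(10): if we momentarily pretend G and P are constant matrices (a linearity assumption the text rejects in general), the solved forms are mutually consistent, and the non-commutation of G and P is plain to see:

```python
# Matrix toy model of (7) and (10), with hypothetical operators G and P.
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(3, 3)) * 0.3   # toy graph operator
P = rng.normal(size=(3, 3)) * 0.3   # toy process operator
N = np.array([1.0, 0.5, -0.2])      # vector of needs

Id = np.eye(3)
R = np.linalg.solve(Id + P @ G, P @ G @ N)   # (10): R = (Id + P.G)^-1 .(P.G).N
E = np.linalg.solve(Id + P @ G, N)           # (7):  E = (Id + P.G)^-1 .N
print(np.allclose(E, N - R))                 # True: consistent with (4)
print(np.allclose(P @ G, G @ P))             # False: P.G and G.P don't commute
```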

 

We no longer care whether each neuron composing the graph transmits the information E coming out of the comparator for sure; all we ask is that this information be transmitted, by one path or another, and produce the proper command on the process.

X-net was conceived for information to reach the receptor for sure whatever the road used.

Somewhat similarly here, paths are chosen so that the information in gives a command out and the whole regulation chain works properly. Remember these paths are not static but dynamical: should the info get “stuck” somewhere, another path would be taken from that point. As a result, individual transmission abilities become secondary. What becomes relevant is the collective ability to transmit information throughout the whole graph. Errors may occur, they are even a characteristic feature of animal brains, but they aren’t a problem, considering this: 1) any error occurring during the regulation process appears back in the difference E and is corrected at the next pass through G, and 2) neuron networks integrate “code correction” in the ability of individual neurons to self-activate. So, if the incriminated neuron remains silent while it should be active, it can always self-correct by opening its calcium channels (unless it’s damaged, of course).

Suppose now such a damage occurs between two processes. Then plasticity applies. It’s an amazing property of evolved systems that they are able either to use other paths to reach the same goal (self-adaptation) or to make new connections between healthy cells, which serve as so many new paths.

This is simply unreachable with inert materials entering prefixed programs.

Only when the damage is too large (involves too many neurons) or localized but touching critical areas will the brain be unable to satisfy the corresponding needs.

 

When we take for G the whole neocortex, the induced process P is called “consciousness”. We have a (large) set of needs in and a (large) set of goals-to-reach out.

Comas correspond to G = 0 at different scales. The larger the scale, the deeper the coma.

Cerebral death corresponds to G = 0 globally. Eqs (7-9) then give E = N (no need can be managed by the system on its own), C = 0 (control-command off), R = 0 (no reaction of the system). Let t = 0 be the instant where this situation occurs. We then have G(0) = 0.

Apart from artificial coma, all we can do for the time being is to plug in artificial substitutes for the natural command C to maintain vital processes. We can act upon P, but not G. So, I have to shunt G with an artificial regulating system, say A, and G becomes G’ = G + A in the above equations. When the system works on its own, A = 0 and G’ = G. When it doesn’t anymore, G = 0 and G’ = A.

The question becomes really interesting when, after a delay T, the system restarts on its own, while nobody expected it. Quite funny…

Because the loop (3) is rather general. When there’s no serious damage, G(0) = 0 looks like a “secure mode”. This can be understood: the brain and the heart are the two most vital organs in the body, so the CNS will preserve itself the best it can against dysfunctions.

 

What will make it go out of this secure mode, if it can no longer make any decision on its own, since all its graphs are off?...

 

The only way out is some external reactivation. Not necessarily external to the being, but at least external to its biological body. Whether another Being or the same being in another state.

I see no other solution.

 

 

 

 

 

B116: CLASSICAL MOTIONS SPENDING NO ENERGY AT ALL

12/07/2015

No worries: I rewrote this article, for I found it much too complicated and badly structured. We will, of course, talk about solitons again, since they show the same features as the PSI bodies we’re working on: compact wavepackets. Before that, I’d like to talk about something I discovered when reviewing my refs on mathematics, about singular solutions of differential equations. I will restrict myself to 4D classical motion, as field models follow the same principle, as we’ve been showing all along these recent bidouilles.

Let us first recall the mathematical problem. Let x be a variable, y = f(x) a function of x and F(x,y,y′) = 0 a first-order differential equation, with y′ = dy/dx. We say a solution y_s = f_s(x) is a singular integral of F = 0 if y_s satisfies both F = 0 and ∂F/∂y′ = 0 at y′ = y_s′. Such a solution obviously satisfies the differential equation from the start, but it cannot be obtained by fixing any value of the integration constant, as is the case for any other regular solution of F = 0. To find y_s, we need to eliminate y_s′ from the pair of equations F = 0, ∂F/∂y′ = 0, then check which solutions obtained this way are consistent with F = 0. Only those will be considered singular integrals.
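A classic worked example of this recipe (not from the post: Clairaut’s equation, a standard textbook case) can be run through sympy. The regular solutions are the straight lines y = Cx − C²; the singular integral is their envelope:

```python
# Clairaut's equation F(x, y, y') = y - x y' + y'^2 = 0:
# eliminate y' between F = 0 and dF/dy' = 0, then check consistency with F = 0.
import sympy as sp

x, y, p, C = sp.symbols("x y p C")    # p stands for y'
F = y - x * p + p**2

# Regular solutions: the one-parameter family y = C x - C^2 (y' = C)
assert sp.simplify(F.subs({y: C * x - C**2, p: C})) == 0

# Singular integral: solve dF/dp = 0 for p, substitute back into F = 0
p_s = sp.solve(sp.diff(F, p), p)[0]   # p = x/2
y_s = sp.solve(F.subs(p, p_s), y)[0]  # y = x^2/4
print(y_s)   # x**2/4 -- the envelope, reached for no value of C
```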

There’s a straightforward and very general application of this to classical mechanics. Let t be the time variable, x(t) a function of time and v(t) = dx(t)/dt its velocity. We are interested in finding particular motions of an incident non-deformable solid body of mass at rest m under the influence of an external field that can either be a free wave or produced by a non-deformable solid source with mass at rest m’, distinct from the incident body. These particular motions must be those for which:

 

(1)               L[x(t),v(t),t] = −m(t)c²[1 − v²(t)/c²]^1/2 + m(t){G[x(t),t].v(t) − φ[x(t),t]} = 0

 

and, simultaneously:

 

(2)               ∂L/∂v(t) = m(t)v(t)/[1 − v²(t)/c²]^1/2 + m(t)G[x(t),t] = 0

 

that is, motions with constant action all along and zero generalized momentum. These motions are particularly interesting to search for, since Hamilton’s formalism immediately implies that:

 

(3)               H = v(t).∂L/∂v(t) − L = m(t)c²/[1 − v²(t)/c²]^1/2 + m(t)φ[x(t),t] = 0

 

everywhere along these trajectories. If such motions exist, this means they spend no energy at all. Physical intuition would suggest that such trajectories cannot exist, or only lead to fixed points, or lie outside physical domains. We’re going to see this is not at all the case.

From (2) and (3), we get:

 

(4)               G[x(t),t] = −v(t)/[1 − v²(t)/c²]^1/2

(5)               φ[x(t),t] = −c²/[1 − v²(t)/c²]^1/2

 

We have to solve for v(t), since what we are looking for is the motion of the incident body. x(t) is its position at time t in Euclidean 3-space; it is also the point where the components of the external field are observed at this time. Doing this, we assume that the observer stands on the incident body or is this body itself. Dividing (4) by (5) directly gives:

 

(6)               v(t) = dx(t)/dt = c²G[x(t),t]/φ[x(t),t]

 

whereas solving for v(t) in (5) gives

 

(7)               v(t) = c n {1 − c⁴/φ²[x(t),t]}^1/2  ,  n a unit 3-vector

 

and reporting this result in (4)

 

G[x(t),t] = n {1 − c⁴/φ²[x(t),t]}^1/2 φ[x(t),t]/c

G²[x(t),t] = {1 − c⁴/φ²[x(t),t]} φ²[x(t),t]/c² = φ²[x(t),t]/c² − c²

 

or, in 4D notations:

 

(8)               GᵢGⁱ = c²

 

Surprisingly enough, we fall back onto an « old » result (B89): the very same condition as for motion under an event horizon… (8) is actually the maximal condition for a G-field, or any velocity field, according to classical space-time relativity.
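As a quick symbolic cross-check of the algebra leading to (8), one can eliminate v between (4) and (5):

```python
# Eliminating v between (4) and (5) reproduces (8): phi^2/c^2 - G^2 = c^2.
import sympy as sp

v, c = sp.symbols("v c", positive=True)
gamma = 1 / sp.sqrt(1 - v**2 / c**2)

G_mag = -v * gamma         # (4), magnitude along the unit vector n
phi = -c**2 * gamma        # (5)

print(sp.simplify(phi**2 / c**2 - G_mag**2))   # c**2, i.e. G_i G^i = c^2
```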

We quickly verify that the Lagrange equations of motion (d/dt)∂L/∂v(t) = ∂L/∂x(t) bring nothing more, as they are identically satisfied. Our “singular” equations of motion are entirely contained in the system (6)-(8). In (6), the G-potentials are known, since they are solutions of a Maxwellian system of field equations. If there is a source, v′(t) is its velocity: there’s no reason why it should coincide with v(t). Anyway, knowing v′(t) and the source distribution m′(x,t), we deduce G(x,t) and φ(x,t) at any observation point x in E3 distinct from the origin (the centre of gravity of the source). Then we make x and x(t) coincide and we solve (6). If the source is point-like, accessible positions x are everywhere outside the source. If the source is dusty, the incident body can be found either outside or inside the source “cloud”, centre excepted. Inside the cloud, the field equations must be solved with a non-zero right-hand side.

 

THERE EXIST TOTALLY ENERGY-FREE 4D CLASSICAL MOTIONS WITH CONSTANT ACTION, NAMELY ALL SOLUTIONS OF (6) AND (8).

THESE MOTIONS ARE SINGULAR INTEGRALS OF THE LAGRANGE EQUATIONS OF MOTION, SO THAT CAUCHY’S UNIQUENESS THEOREM NO LONGER APPLIES. IN OTHER WORDS, NONE OF THESE SOLUTIONS CAN BE UNIQUELY DEFINED.

 

Not only are these solutions anything but trivial, they even form a large class.

Understand: against all odds, they have nothing exceptional…

Nowhere in my literature on theoretical mechanics did I ever hear of such possibilities. I suppose it’s mainly due to this conception we have that mechanical systems spending no energy at all are simply “metaphysical”. Well, this is apparently not the case, and we’re now quite accustomed to discovering “new behaviours” that contradict our conceptions of rationality.

Amongst all physically possible solutions of (6)-(8), one can find solitons. These are amplitude-bounded oscillatory motions:

 

(9)               x(t) = a(t)exp[ib(t)]  ,  a(t) = 0  for  t ≥ t_f

 

for some instant t_f [if t_f is infinite, then the asymptotic condition is a(∞) = 0]. Again, we’re more used to finding boundaries in space. But the variable here being time, compact waves or wavepackets have to be bounded in time (or time-limited). So what, after all?

Inserting (9) into (6) yields:

 

(10)           [a′(t) + ia(t)b′(t)]exp[ib(t)] = c²G[a(t),b(t),t]/φ[a(t),b(t),t]

 

where the prime is for the time derivative. It should be clear that this is only possible if the term on the right is itself an oscillating function:

 

(11)           G[a(t),b(t),t]/φ[a(t),b(t),t] = V[a(t),b(t),t]exp{iQ[a(t),b(t),t]}

 

Hence the system of coupled equations:

 

(12)           a′(t)cos b(t) − a(t)b′(t)sin b(t) = V[a(t),b(t),t]cos Q[a(t),b(t),t]

(13)           a′(t)sin b(t) + a(t)b′(t)cos b(t) = V[a(t),b(t),t]sin Q[a(t),b(t),t]

 

to which we can add:

 

(14)           [a′(t)]² + a²(t)[b′(t)]² = V²[a(t),b(t),t]

 

According to our starting hypothesis, at t ≥ t_f we must have x(t) = 0, that is, we must be at the origin. There, according to the Maxwell model, both G and φ are expected to be infinite. However, their ratio is not necessarily so. A simple argument easily shows that we can rather expect to have:

 

(15)           V[0,b(t),t] = 0  for  t ≥ t_f

 

Indeed, if we reach the origin at t = t_f and if we’re supposed to stay there, then both a(t) and a′(t) must vanish for t ≥ t_f, while we expect b′(t) (a frequency) to remain regular. If (15) holds, it shows that while G and φ can well diverge at the origin, their ratio remains regular there, equal to zero, only indicating that φ diverges quicker than G, a result consistent with (8).

Free G-waves are typical of the situation. They can easily give birth to energy-free, amplitude-bounded oscillatory motions of incident bodies under their influence. The same happens for EM fields and free waves, as G/φ_G = A/φ_EM.

 

Similar procedures can be applied to 4-parameter fields. However, we do need some gyroscopic contribution [second term in (1)], for if this term is absent then, as we can already see in (6), the only solution is v(t) = 0 and φ = −c², leading to a fixed position: nothing interesting…

In particular, the Maxwell model leads to no non-trivial such solutions:

 

ℒ_G = (c²/8πk)Wᵢⱼ(x)Wⁱʲ(x) − pᵢ(x)Gⁱ(x) = 0

∂ℒ_G/∂Wᵢⱼ(x) = (c²/4πk)Wⁱʲ(x) = 0

 

implies Wᵢⱼ(x) = 0, which in turn implies pᵢ(x)Gⁱ(x) = 0. Even the extended Maxwell:

 

ℒ_G = −(c²/8πk)f_pl²[1 − W²(x)/f_pl²]^1/2 F[G(x),x] = 0

∂ℒ_G/∂Wᵢⱼ(x) = (c²/4πk)Wⁱʲ(x)/[1 − W²(x)/f_pl²]^1/2 = 0

 

implies Wᵢⱼ(x) = 0, which in turn implies F[G(x),x] = 0. To find non-trivial solutions, we need a contribution of the form Yᵢⱼ[G(x),x]Wⁱʲ(x), with Yᵢⱼ skew-symmetric. Such contributions actually appear in semi-classical Yang–Mills models, assuming we keep for Wᵢⱼ(x) the linear part of the YM fields. The complete field being W′ᵢⱼ(x) = Wᵢⱼ(x) − i(m/ħ)[Aᵢ(x),Aⱼ(x)] (matrix Lie bracket), the kinematical term in W′²(x) gives W²(x), −2i(m/ħ)[Aᵢ(x),Aⱼ(x)]Wⁱʲ(x) and a quartic term −(m/ħ)²[Aᵢ(x),Aⱼ(x)][Aⁱ(x),Aʲ(x)].

 

(in components: W′ᵃᵢⱼ = Wᵃᵢⱼ − fᵃᵇᶜAᵇᵢAᶜⱼ => W′ᵃᵢⱼW′ᵃⁱʲ = WᵃᵢⱼWᵃⁱʲ − 2fᵃᵇᶜAᵇᵢAᶜⱼWᵃⁱʲ + fᵃᵇᶜfᵃᵈᵉAᵇᵢAᶜⱼAᵈⁱAᵉʲ, where the fᵃᵇᶜ = −fᵃᶜᵇ are the structure constants of the group)

 

This works, for instance, for the unified SU(3,1) gauge group (1575 structure constants). But it’s all light-years away from our practical considerations… :-)

 

 

B115: GENERATING AN ELECTRIC CHARGE FROM A MASS

28/06/2015

There is, however, a possible way to generate an electric source from a mass one. That’s what we should look for if we start from the principle that all known fundamental interactions derive from an original one, of a gravitational kind.

The starting point is to notice that ε₀ and k are universal coefficients defined in the vacuum: ε₀ is the electric permittivity of the vacuum, k plays the same role for gravity. So, let us rename it k₀ for convenience. In matter, these coefficients become ε and k. Let:

 

(1)               K₀ = (4πε₀k₀)^1/2  ,  K = (4πεk)^1/2

 

When K = K₀, we’re in the vacuum, i.e. outside matter. So, if the matter distribution is to be extended through the ratio K/K₀, it should be zero in the vacuum, i.e. for K = K₀. A translation K -> K − K₀ brings this back to K = 0. Introduce the more general mass distribution:

 

(2)               m(x, K/K₀ − 1) = m₀(x) f(K/K₀ − 1)

 

We should thus have f(0) = 0. The function f is unit-free. Around K = K₀, we can perform a perturbative expansion of (2) in powers of (K/K₀ − 1):

 

(3)               f(K/K₀ − 1) = Σ_{n=0}^∞ f^(n)(0)(K/K₀ − 1)^n/n!

 

assuming, for the time being, that f is smooth. Now separate the even powers from the odd ones:

 

(4)               f(K/K₀ − 1) = Σ_{n=0}^∞ f^(2n)(0)(K/K₀ − 1)^2n/(2n)! + Σ_{n=0}^∞ f^(2n+1)(0)(K/K₀ − 1)^2n+1/(2n+1)!

 

We have:

 

(5)               ½ [f(K/K₀ − 1) + f(1 − K/K₀)] = Σ_{n=0}^∞ f^(2n)(0)(K/K₀ − 1)^2n/(2n)!

(6)               ½ [f(K/K₀ − 1) − f(1 − K/K₀)] = Σ_{n=0}^∞ f^(2n+1)(0)(K/K₀ − 1)^2n+1/(2n+1)!

 

We assume that (5) leads to gravity and (6) to electromagnetism. When f has parity +1 (even), (6) is zero, representing electrically neutral massive bodies. When f has parity −1 (odd), (5) is zero, representing massless electrically charged bodies. The equivalence is given through:

 

(7)               m_G(x, K/K₀ − 1) = ½ m₀(x)[f(K/K₀ − 1) + f(1 − K/K₀)]

(8)               m_EM(x, K/K₀ − 1) = ½ m₀(x)[f(K/K₀ − 1) − f(1 − K/K₀)] = −ρ(x, K/K₀ − 1)/K₀

 

where ρ is the charge distribution (in matter, of course). The rest follows: m_G(x, K/K₀ − 1) generates a G-field in M; ρ(x, K/K₀ − 1), an EM field.
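A tiny sketch of the decomposition (5)-(8), with an entirely hypothetical distribution function f (any smooth f with f(0) = 0 will do) and a toy m₀:

```python
# Splitting a sample (hypothetical) distribution f into its even part
# (-> gravity, m_G) and odd part (-> electromagnetism, m_EM ~ -rho/K0).
import numpy as np

def f(u):
    # toy distribution, neither even nor odd, with f(0) = 0 as required
    return np.sinh(u) + u**2

u = np.linspace(-1.0, 1.0, 5)             # u stands for K/K0 - 1
m0 = 2.0                                  # toy rest-mass distribution value

m_G  = 0.5 * m0 * (f(u) + f(-u))          # (7): even part, weighting mass
m_EM = 0.5 * m0 * (f(u) - f(-u))          # (8): odd part

print(np.allclose(m_G + m_EM, m0 * f(u))) # the two parts rebuild m0*f exactly
```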

The idea behind this is to consider that mass and gravity appear as the symmetric contribution of a non-symmetric matter distribution (2), while charge and electromagnetism appear as its skew-symmetric part. Thus, at the beginning, we have a non-symmetric G-field giving a symmetric one we call “gravitation” and a skew-symmetric one we call “electromagnetism”. In the vacuum, of course, both m_G and m_EM are zero, since m is zero.

 

This enables us to stay in dimension 4 and use functionals on G-fields, while treating mass and charge distributions on an equal footing.

 

We could try to do the same with the two other, nuclear, interactions, in a semi-classical non-Abelian model, using SU(3,1) for instance, but my goal, our goal, is not the very early Universe; it’s neurophysics and parapsychology. So, I’ll “reduce” myself to electrogravity.

Its application to neurophysics becomes clearer when we recall that the neuron is electrically charged: at rest, its membrane is under voltage. There’s a non-equilibrium distribution of ions from one side of the membrane to the other. This is why using PSI models involving both G- and EM-fields is more adequate than our previous models involving G-fields only.

Our first attempt was to introduce a two-state mass distribution, source of a two-state G-field.

Our second attempt is a single mass distribution depending on an additional parameter (K), source of a single G-field from which electric charges and EM-fields can arise.

I do not claim at all that this last model is the one at work in Nature, only that it’s simpler than the first one. It also reveals a close link between symmetry properties and sources.

Whatever it is “in real life”, we can give a new and wider definition of a PSI field, saying:

 

WE WILL CALL “PSI” A PHYSICAL FIELD OVER MINKOWSKI SPACE-TIME M WHICH MATHEMATICALLY IS A FUNCTIONAL Ψ[G(x),A(x),x]. THE COEFFICIENTS OF THIS FUNCTIONAL, IN A DEVELOPMENT IN POWERS OF THE Gᵢ(x)s AND THE Aᵢ(x)s, ARE x-DEPENDENT EXPRESSIONS GIVEN BY QUANTUM FIELD THEORY. AS A CONSEQUENCE, WE COULD EVEN BE MORE PRECISE, SAYING IT’S A FUNCTIONAL Ψ[ψ(x),ψ*(x),G(x),A(x),x] OVER M, ALSO INVOLVING WAVEPACKETS.

 

As particular examples of such fields, we find non linear couplings between gravity and electromagnetism, non linear gravitational self-couplings and non linear electromagnetic self-couplings. Other couplings involve self-couplings of the wavepacket and couplings between the wavepacket and the two interaction fields.

All known quantum field theory is therefore contained in a single PSI field expression. Take the SU(3,1) isospace-time, you’ll get all couplings of the Standard Model so far.

 

All this deep theoretical work seems very far away from our concerns, but it serves to build, justify and reinforce arguments in favour of the existence of the PSI. What we are doing, from article to article, is building the most consistent physical theory of the PSI possible. To achieve this, we need to connect it to existing physics. That’s why, however painstaking it may be, we have no other choice, if we want to treat the subject seriously, than to talk of relativistic quantum field theory.

The living, i.e. the thermodynamically active, does not reveal the PSI, or it would have been detected long ago. That’s why we have to turn to quantum theory and see what we can or cannot get out of it.

The living neuron “only” gives birth to mental processes: this is purely biological.

What is likely to come “next” involves the dead neuron, where biology has nothing to learn anymore, except the dismantling of the biological cell.

On the contrary, the wavepacket, once formed, does not dismantle. It can be altered, even destroyed, by destructive interferences, but it does not dismantle.

The field Gᵢ(x) is useful to describe the body; the field Aᵢ(x), to describe the mind. An animal is an autonomous system made of a body, a mind and a coupling between them. If we want to be as complete as possible, even if we only schematize, we have to take Gᵢ(x), Aᵢ(x) and their couplings, as well as their sources and the couplings between them, into account in a description of the PSI.

RQFT shows that “dead” matter continues producing interaction fields. The most common model is that of condensates: quantum condensates replace “active” fields in “afterlife” processes. They aren’t the only ones: any wavepacket acts as a source of interaction fields. It’s systematic when the wavepacket is fermionic. When it’s bosonic, the condition is to be complex-valued, i.e. to have a variable phase. “True neutral” wavepackets produce no source, but in the existing catalogue of elementary particles, we know of no particles that would be their own antiparticles while not being the vectors of a fundamental interaction.

 

 

 

B114: MISCELLANEOUS ON UNIFICATION

24/06/2015

Not a lot to add today, rather miscellaneous.

I first searched for something interesting in a non-linear extension of the electroG model, such as those proposed by our PSI program, but I didn’t find any possible way to generate both an electric current and an electromagnetic potential from possible couplings of weighting matter and its G-potential.

The “electromagnetic mass” cannot be confused with the weighting one, because it behaves as the electric charge: whereas two weighting masses with the same sign attract each other, two EM masses with the same sign repel each other. There’s a mere equivalence between the electric charge and the EM mass, which does not change the physical behaviour of the sources for as much. For this reason, it would be hard to generate an electric charge from a weighting mass, even trying to extend the definition of this last one. The sole case of the electron speaks enough: we’ve seen how different the values of its weighting mass at rest and of its EM mass are. Things don’t work like this.

When talking about unification, the sources involved should be able to transform into one another through a symmetry group. Here, an electric charge should transform into a weighting mass and conversely. Or should it? What actually gives a transformation group are linear combinations of already existing quantities. For our present purpose, a combination like am+bq should give a mass m′, while cm+dq should give an electric charge q′. However, this is not at all what quantum theory shows.

In the “isospace” theory, we start with a given particle that can be found in a certain number of “states” or “configurations”, provided we assume some restrictions. In the first SU(2) model of the strong interactions, for instance, the particle was the “nucleon”, which could be found in two “states”: the neutron |n0>, electrically neutral, and the proton |p+>, positive. That these two states could refer to a single particle expressed the (electric) charge independence of the strong interactions. The restriction was then: “assuming we neglect electromagnetic interactions, much weaker, the strong nuclear interaction makes no difference between a neutron and a proton”. The number 2 in the symmetry group refers to the number of such states making a single particle, i.e. configurations for which the strong interaction acts just the same. The choice of the Special Unitary group SU is due to the particle wavefunction, which is obviously complex-valued (SU is a rotation group in a complex plane).

So, two states, a 2-state wavefunction ψ₁,₂(x) with complex conjugate ψ*₁,₂(x): as many components as a spin ½. Hence the name “isospin ½”, meaning “two states of charge” in place of “two states of intrinsic angular momentum”.

When working on mathematical groups, people are interested in discovering their properties and, in particular, their real “dimension”, i.e. the number of rotations with real-valued angles. It so appears that special unitary groups SU(n) have n²−1 such angles: we say their dimension (or number of generators) is n²−1. As a consequence, SU(2) possesses 3 rotation angles. The transformation matrix applies to ψ₁,₂(x) to transform it into another wavefunction ψ′₁,₂(x), obviously of the same kind. It does not act upon “strong charges”. What happens is that, amongst the 3 available generators, which are 2x2 matrices (actually the same as the Pauli matrices), one of them (usually called I3) is diagonal and gives the charge (or the charge operator). The two others transform one of the states into the other one, that is, the proton wavefunction into the neutron one and conversely.

The very same occurs for the larger symmetry group SU(3) of the new strong interaction theory. There, we have 3 fundamental states, u, d and s, representing three “quarks”. Again, we shouldn’t see these particles as different, but as different configurations of a single particle. SU(3) has 8 generators. It includes SU(2). The isospin is 1 (3 states). Each generator can be represented by a 3x3 matrix. Two of them appear diagonal, F3 (containing I3) and F8. F8 gives what is called the “hypercharge”. The charge Q, properly speaking, becomes a linear combination of F3 and F8 (with constant coefficients). It again applies to the wavefunction ψ₁,₂,₃(x) of the 3-state model. Two such symmetry groups have been built, one for quarks (“flavour dynamics”) and one for gluons (“color dynamics”), “gluing” quarks. However mathematically identical, these two groups shouldn’t be physically confused, as they refer to different properties inside the same frame: SU_q(3) relates to “quark flavour” (fermions), SU_c(3) to “gluon color” (bosons). Notice the isospin is the same for both. Another essential difference lies in the spin (½ for quarks, 1 for gluons).

There’s a symmetry group that has interested me for long: SU(3,1). It’s larger than SU(3) and, as with rotation groups in real space-times, it’s no longer Euclidean. It has (3+1)²−1 = 15 generators. But 8 of them are space-like, 1 is time-like and the 6 remaining ones are space-time-like. The mathematical decomposition of SU(3,1) is as follows:

 

(1)               SU(3,1) ≅ SU_S(3) x SU_ST(2) x SU_ST(2) x U_T(1)

 

(≅ = isomorphism; equivalence, if you prefer). 15 = 8 + 3 + 3 + 1. It’s extremely tempting to relate it to the symmetry groups of the four known interactions. How? SU_S(3) can be identified with the color group SU_c(3), no difficulty. U_T(1) could be associated with gravity. If we make these choices, then the symmetry will represent a 4-state particle with one time-like state and three space-like ones. The time-like state is to be associated with the mass operator M; the space-like states, with the charge operators. SU_S(3) has two diagonal matrices = 2 charge ops; SU_ST(2), 1 diagonal matrix = 1 charge op, doubled. The 4-state wavefunction is ψ₁,₂,₃,₄(x). ψ₄(x) is the mass state. So, SU_ST(2) x SU_ST(2) = [SU_ST(2)]² should be devoted to the unified electroweak interaction field. The present symmetry group is that given by the GSW model, SU_W(2) x U_EM(1): 3+1 = 4 generators only. We have 2 generators more. So, either they should be devoted to the weak nuclear field, or they should be part of a quantum extension of electrodynamics (extended QED). It’s not easy to decide, as [SU_ST(2)]² is both space- and time-like in isospace-time and so mixes charge and mass. This would imply the photon becoming massive, like the weak bosons. Now, this runs counter to the goal usually targeted: to obtain massless gauge bosons inside a wider symmetry…

Yeah, except that… we now have a mass operator… :-) and transformations of mass into charges and back. This should allow massive gauge bosons, while massless ones could still be found in the single non-Euclidean SU(3,1).

Let me explain better.

Relation (1) is a mathematical equivalence: the group on the left has the same number of generators (same dimension) as the Euclidean product of groups on the right.

Physically now, it rather describes a transition:

 

(2)               SU(3,1) -> SU_C(3) x SU_ST(2) x SU_ST(2) x U_G(1)

 

Each of the groups on the right is a subgroup of SU(3,1). As a result, the initial symmetry is reduced. Geometrically, we go from a space-time structure to a product of tori: SU_S(3) is an 8D sphere, SU_ST(2) a 3D sphere and U_T(1) a 1D sphere (circle). The whole product gives a 4-frequency torus S⁸xS³xS³xS¹, where S is the topological Riemann sphere. The Standard Model foresees SU_C(3) x SU_W(2) x U_EM(1) x U_G(1), assuming spin-1 gravity. So, a second transition should happen, involving the electroweak field alone:

 

(3)               SU_ST(2) x SU_ST(2) -> SU_W(2) x U_EM(1) x G

 

where G is a two-generator group. Let’s reason in terms of isospins: SU(2) -> 2 states -> isospin ½; U(1) -> 1 state -> isospin 0. On the left, we have a pair of isospins ½. This should give an isospin 1. For the isospin to be conserved in (3), we need G to describe an isospin ½. On the other side, we cannot have more generators (more symmetry) than we had before. So, 3+3 = 6 should transform into 3 + 1 + 2 and dim_R(G) should be 2. The only possibility is G = Spin(1), the Clifford group of (real) dimension 2. It is true that, group-theoretically, there is a close relationship between SO(3), the rotation group of E3, SU(2) and Spin(1). The point here is that we shouldn’t forget we’re not in ordinary space or space-time, but in isospace-time. Now, the Spin(2s) are the spin groups of ordinary space-time… Here, we’re dealing with rotations in isospace(-time). We can recover such an equivalence if we rename our Clifford groups Isospin(2s), keeping the same structure. Our second transition will then take the form:

 

(4)               SU_ST(2) x SU_ST(2) -> SU_W(2) x U_EM(1) x Isospin(1)

 

We aren’t safe for as much… because introducing Clifford structures in isospace means introducing Fermi–Dirac statistics… “isofermions”… Can we do this?

I think we can, because we already have an example of such “isofermions” in the color wavefunction of QCD. Color dynamics is safe if and only if it includes Pauli’s exclusion principle in its isospace. Why couldn’t we find the same with the weak field? And would this have any influence on chirality violation? I can’t say. What I can tell is that SU_W(2) enlarged into SU_W(2) x Isospin(1) now has 5 generators instead of 3, plus a skew-symmetry on the corresponding wavefunction (leptonic charges).

I finally remarked something: the ratio q_e/m_pl of the electric charge q_e = 1.60219x10⁻¹⁹ C to the Planck mass m_pl = 5.456019873x10⁻⁸ kg gives:

 

(5)               q_e/m_pl very close to (4πε₀k/861)^1/2

 

and 861 is about the ratio between the strong and the electromagnetic interaction (≤ 1000).

This gives a permittivity coefficient ε = ε₀/861 for the strong field, which would then be 861 times less conducting than the EM one. It’s consistent with the orders of magnitude, whether of energy thresholds (MeV -> GeV) or, equivalently, of ranges (10⁻¹⁵ m = 1 F -> 10⁻³ F).
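The numerical claim (5) is easy to verify with standard SI values; here k is Newton’s constant and m_pl the h-based Planck mass √(hc/G) quoted above:

```python
# Checking (5): q_e/m_pl against sqrt(4*pi*eps_0*k/861), in SI units.
import numpy as np

q_e  = 1.60219e-19         # C
m_pl = 5.456019873e-8      # kg, sqrt(h*c/G)
eps0 = 8.8541878e-12       # F/m
k    = 6.674e-11           # m^3 kg^-1 s^-2 (Newton's constant)

lhs = q_e / m_pl
rhs = np.sqrt(4 * np.pi * eps0 * k / 861)
print(lhs, rhs, lhs / rhs)   # ratio ~ 1: agreement well below the percent level
```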

To conclude all this: given mass and the three colors r, g, b, mixed in a pseudo-Euclidean Hermitian isospace-time, we could get the charge symmetries for the four known fundamental interactions through at least two transitions.

 

What’s the connection with the PSI?

 

Well, none if you “restrict” yourself to pure high-energy physics. If you want to see a connection with neurophysics and the PSI, then however you take the problem of fundamental forces in the Universe, as far as we (or I myself) understand things, there seem to exist at least two different forms of charges from the very beginning: mass and a certain type of charge. They don’t react the same: masses with the same sign attract and masses of opposite signs repel each other, whereas all other known charges behave like the electric ones: they attract when they have opposite signs and repel when they have the same sign. Space-time confinement is something different: it’s about the behaviour of the force field in space and time, not about the properties of its quanta. This is what SU(3,1) expresses: we can transform the mass state ψ₄ into a charge state ψ₁, ψ₂ or ψ₃, or any charge state into a ψ₄, yet the mass operator remains distinct from the charge ones. They don’t lie on the same kind of axis. In general, we will have combinations ψ′ᵢ(x) = Tᵢⱼψⱼ(x), mixing the 3 charges and the mass.

Just like for space and time in SO(3,1).

As a result, the gravitational potentials Gᵢ(x), despite being the most universal of all, appear insufficient to describe a massive and electrically charged event. We would need to introduce “new” gravitational-like coordinates (4πε₀k)^1/2 Aᵢ(x), which wouldn’t behave like a G-field anyway, but still as an EM field. It would give us 4 dimensions more… that would be 12… plus the two other nuclear interactions…

Unreal.

But we can go back to dimension 4, while introducing restrictions on the tangent space-times. It will amount to exactly the same, minus the question of the number of dimensions. Those restrictions are, up to now: v(t) ≤ c and Wᵢⱼ(x)Wⁱʲ(x) ≤ f_pl², to which we can add Fᵢⱼ(x)Fⁱʲ(x) ≤ 4πε₀k f_pl² ≈ 3.8–4x10⁶⁵ T², that is, |Fᵢⱼ(x)| ≤ 6x10³² T roughly: far enough for our needs… :-)

Nothing to complain about, then; but all these “physical constraints” actually open doors to brand new physical phenomena, where the PSI can find its place. If not as a function over 8D space-time, at least as a functional (i.e. an operator) over M⁴.

So, we will simplify life by writing M⁴, or even simply M, for Minkowski space-time, for I think I won’t have to introduce new frames.

However, I would never have discovered such restrictions on the gauge fields if I hadn’t tried the “8D experiment”. So, everything but a waste of time, it was.

 

 
