doclabidouille
B119: SENT BACK TO (BAD) OLD QUANTUM MEASUREMENT PROBLEM...

25/07/2015

We now have what can be called without exaggeration a HUGE worry, the kind that cannot be solved in 48 hours, as we are touching the most delicate (and still controversial) point of the foundations of quantum mechanics: the definition of the wavefunction itself, through the observability problem.

I read Schrödinger and Feynman once again. It’s an excellent exercise to go back to the sources when you’re stuck. The fundamental property of measurement at microscopic scales is rather easy to state: if you voluntarily restrict yourself to the mere observation of the final impacts of particles on a screen, what you get there is an interference curve, with fringes; but, as soon as you want to refine your understanding of the process, trying to determine the paths the particles have taken from their emitting source to the screen, the fringes disappear.

In other words, as long as you don’t observe the particles themselves, only the final result, these particles behave like waves; as soon as you observe their motion, they behave like corpuscles.

Most theoreticians deduced from those Young-type experiments that observation or, what amounts to the same, the presence of the observer, suffices to destroy interference, deeply modifying the behaviour of particles. Or, equivalently, that the observer himself interferes with the system in such a way that he destroys the initial interference pattern. In what way? Nobody knows. We only talk of a “collapse” of the wavefunction, but we’re still unable to give a consistent mechanism behind it.

Many theoreticians of Schrödinger’s time did not share his opinion that the wavefunction could potentially represent any physical object in Nature, whatever its size. Some, like Bohr, Heisenberg or Born, despite being amongst the founders of quantum theory, were convinced it was nothing but a convenient mathematical (i.e. abstract) tool to calculate probabilities, without any deeper physical content. No “physical reality”. For these people, and many others after them, measurement was the only meaningful process and the values obtained in final results, the only “touchable” reality.

As for myself, I’m not fully convinced observation is the true problem: in all cases, we observe the impacts on the screen. So, we can just as well observe the interference fringes (or we wouldn’t talk about them) or the “smoothed classical” curve. The “sudden reduction” arises when we add a complementary observation “inside the box”, when we try to know which path a given particle of the beam could have taken to reach the screen.

Put differently, as long as we stick to the final result, we don’t modify the essential nature of the object we’re experimenting on in any way; if we are more curious, things immediately reduce and we lose all the information about the physical reality of this object.

“Classical” physics asserted that the physical reality of substantial objects was strictly corpuscular: any substance was made of “corpuscles”. Waves had nothing substantial, they rather were processes between substances.

The rise of so-called “wave mechanics” deeply transformed this vision of Nature. Young’s experiments showed without ambiguity that, at least at microscopic levels, neither substantial matter nor even radiation could be said to be “corpuscles” or “waves”. On the contrary, they revealed that their true physical reality was neither: we can no longer talk of the electron as a “corpuscle of matter” if it starts to behave like a wave as soon as we don’t disturb its motion (direct observation); we can no longer talk of the photon as an “elementary electromagnetic wave (or radiation)” if it starts to behave like a corpuscle as soon as we observe it directly.

For Schrödinger and many others, the physical nature of microscopic objects now depended on what the observer did or didn’t do.

I cannot criticize this approach: they tried to interpret as faithfully as possible something that went straight against all our conceptions of objects and processes so far.

But the least we can say is that it has absolutely nothing “universal”…

I just cannot satisfy myself with a physics “depending on the observer’s will”. I’m not a partisan of “hidden variables” either: violations of Bell’s inequalities have been clearly established.

On another side, although he left me with the feeling that his mind had evolved between his fundamental work of 1926 and the 1950s, Schrödinger seemed to be deeply convinced that his concept of a “wavefunction” did not apply to the microscopic only, but to all scales. He was convinced that it did have a physical reality. But, at the same time, he was perfectly conscious from the start that it was highly unstable. So unstable, actually, that the smallest disturbance or the first measurement on it not only modified it, but destroyed it! In his lectures, he clearly says the wavefunction no longer exists after a measurement: it simply disappears and is replaced with a new one, just after the measurement. He insists on the fact that it’s not a question of time evolution, that time has nothing to do with this, and that the only role time can play there is to make the situation even worse in the future!

Schrödinger was strongly influenced by the works on statistical physics in the second half of the 19th century, and especially by Gibbs. His fundamental equation of wave mechanics shows it: it has the structure of a diffusion equation with a complex diffusion coefficient. Many attempts, including mine, have been made to formally derive this equation. None of them is fully convincing so far. Contemporary statistical physics managed to explain why the transition from “classical” to “quantum” had to go through exponentiation, basing its argumentation on the “partition function” of “classical” statistical physics. But it still remains to explain where the amplitude of the quantum signal can derive from…

We only say it’s a “probability amplitude” because experimental results show it: it’s only heuristic. We still haven’t got any physical mechanism behind it, to complete the transition on the phase.

Anyway, the whole present construction sounds anything but consistent. Take the problem of the previous bidouille. Whatever the objects now, even particles, we start with interference. This implies we do not directly observe the system, only the results it gives. Now we observe it directly. The interference term should vanish. However, we now make a 3-body system: the first two bodies + the observer, right? They are all assumed to interfere, according to the principles of wave mechanics and quantum measurement theory. So, this gives us 3 wavefunctions (one more, the observer’s). That’s one more amplitude a3 and one more phase θ3, to combine with a12 and θ12. We find similar formulas for the resulting amplitude and phase. Now, this should give in the end a1² + a2², since interference is destroyed.

What should honestly justify that both a3 and θ3 take such “suitable” values, as long as we observe the system, that the result is the vanishing of interference between ψ1 and ψ2?...

Okay. Forget about a possible observer’s wavefunction acting. Then the measurement process should be such that, all along it, the phase shift θ1 - θ2 becomes equal to an odd multiple of π/2, namely (2n+1)π/2, n ∈ Z, everywhere inside the system. Again, why? How could observation modify the dynamics of the phase shift? In what way?
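
As a quick numeric sanity check of the objection above, here is a minimal sketch (amplitudes and phases are arbitrary illustrations, not taken from anywhere) showing that adding a third, “observer” wavefunction with generic a3 and θ3 leaves the ψ1-ψ2 cross term intact:

```python
import numpy as np

# Three generic monochromatic components psi_n = a_n * exp(i*theta_n).
# Index 3 plays the "observer"; all values are arbitrary.
a = np.array([1.0, 0.8, 0.5])
th = np.array([0.3, 1.1, -0.7])

psi = np.sum(a * np.exp(1j * th))
interference = abs(psi)**2 - np.sum(a**2)           # total cross-term content
cross_12 = 2 * a[0] * a[1] * np.cos(th[0] - th[1])  # the psi1-psi2 term

print(round(interference, 6))  # generically non-zero
print(round(cross_12, 6))      # the psi1-psi2 interference survives
```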

You won’t have lost sight of the fact that the wavefunction was defined as a probabilistic distribution in Euclidean space along time: ψ = ψ(x,t). So, it remains a signal in its conventional form. This was justified by the fact that all “useful” observations happen in ordinary space and evolve in ordinary time: what might or might not happen outside this frame is normally unreachable to the observer, and therefore considered “useless” or “meaningless”.

Quantum mechanics was made as a, if not the, physical theory of measurement and observation. The only quantities that matter are called “observables”. And when people extended its principles to space-time relativity, they agreed that the Galilean concept of Schrödinger’s wavefunction couldn’t hold anymore as such and had to be modified to satisfy the transformation properties of the larger Lorentz rotation group and, above all, the finiteness of the speed of light: the “wavefunction” thus became more of a “state function” or some “field operator” acting on population states (or, equivalently, energy levels).

Somewhat ironically, Galilean wave mechanics derived from Planck’s work on oscillators and couldn’t reach the goal of properly describing a system of oscillators confined into a box when time-relativistic effects become non-negligible…

Besides, Schrödinger stayed reluctant to believe in a possible “collapse” of his wavefunction into discrete values (“quantum jumps”), even though he defended the results obtained by Planck…

To end this discussion, let us recall that Prigogine, long after Schrödinger, based his own arguments on deterministic chaos and the possible transition from “classical” to “quantum” (and back) through chaos, to give much finer explanations of the structural instability of the wavefunction as an essentially local dynamical object.

We now represent ψ(x,t) not as a “wavefunction” or a “state function”, but rather as a trajectory in some “wave space”. However, we have only displaced the difficulty: determining whether this “wave space” has any physical reality or is merely one more mathematical tool.

 

Since de Broglie suggested associating a wave with any corpuscle, I now wonder why Schrödinger did not try to change frames and apply the principles of statistical physics to the new one, even if the final results are to be found in conventional 3-space. Just to see. Instead of that, he applied those principles almost “bluntly” to quantum waves, while staying in E3.

Let’s instead consider a wave space. This is a functional space over E3 and the real line R (for time). A “local coordinate” on this wave space is a pair [ψ(x,t), ψ*(x,t)], since quantum waves have to be complex-valued (as Feynman pointed out in his lectures, opposite to the situation in classical physics, real-valued waves are not sufficient in quantum mechanics, we also need their imaginary parts – just as, for the refraction index, we need its imaginary component to calculate the reflection part). Consequently, any physical field f(x,t) on E3×R will give way to a “superfield” F[ψ(x,t), ψ*(x,t)] on the wave space. Such a “superfield” (which has nothing to do with supersymmetry, by the way) is clearly a functional over E3×R. Physically, this represents the transition from “corpuscular” (or “point-like”) to “wavy”.

There’s no apparent objection to applying the principles of statistical physics to waves ψ(x,t). We just have to be careful about the dynamics involved: statistical physics was built for substantial media, made of corpuscles. Waves do not collide, they interfere. Precisely. So, instead of, say, a gas made of N corpuscles randomly colliding, we rather have a non-substantial medium made of N waves randomly interfering. These waves do not need to be “wavefunctions” or “wavepackets”: as points x of 3-space are elementary, the waves ψ serving as coordinates in the wave space should rather be taken as elementary as possible, i.e. as monochromatic plane waves. Thus, any “function” F[ψ(x,t), ψ*(x,t)] of these basic waves will be able to give more complex waves, such as polychromatic ones, wavepackets (compact waves), etc., according to the shape of F.

Where a system made of N free corpuscles in ordinary 3-space had 3N degrees of freedom, a system made of N free waves in wave space will have 2N degrees of freedom in this space (but obviously 2N infinities in E3×R, indicating we’re now dealing with continuous and no longer discrete objects).

We can even say more: we can say that (ψ,ψ*) is the location of some discrete object in wave space (the equivalent of the corpuscle in E3), while [ψ(x,t), ψ*(x,t)] represents a corresponding continuous object in E3×R.

By changing frames, leaving E3 for a frame better adapted to waves in E3, we have “discretized” waves without invoking any special physical process. Each wave there can now be viewed as an isolated entity, whereas it was seen as a continuous process in E3.

 

Can we solve this way the measurement problem and the “collapse” of the “wavefunction”?

Let us rename our local coordinates in wave space (φ,φ*). A “wavefunction” or “probabilistic wavepacket” in E3×R can be built in wave space as some combination ψ[φ(x,t), φ*(x,t)]. If that combination is linear, we have a superposition of monochromatic plane waves. We can even build it as ψ[φ(x,t), φ*(x,t),x,t]: such last relations are local on E3×R. But let us first restrict ourselves to global ones. ψ[φ(x,t), φ*(x,t)] is the wavefunction of a system we don’t directly observe. As soon as we do, ψ will “degenerate”. What’s interesting with (φ,φ*) is that they always correspond to perfectly determined states with finite energy and momentum in E3×R. Should our measurement give us such a state, then ψ[φ(x,t), φ*(x,t)] should reduce to the corresponding “wavy coordinate” [φ(x,t), φ*(x,t)]. And this corresponds to ψ = δ, the Dirac distribution. More precisely, we should have:

 

ψ[φ(x,t), φ*(x,t)] = δ[φ(x,t) - φ0(x,t), φ*(x,t) - φ0*(x,t)]

 

where [φ0(x,t), φ0*(x,t)] is what we obtain.

We see better what may happen with local relations. Assume the result of the measurement occurs at t = 0. Then, at all t < 0, ψ[φ(x,t), φ*(x,t),x,t] is some physical state we don’t observe anyway. At t = 0, this physical state is reduced to ψ[φ(x,0), φ*(x,0),x,0] = δ[φ(x,0) - φ0(x,0), φ*(x,0) - φ0*(x,0)] and it remains like this until a new measurement is made.

Well, I don’t know if this is a possible explanation or even a solution, but what I can see for the time being is that we no longer have any discontinuity in the measurement process: in Schrödinger’s (et al.) interpretation (in E3×R), the discontinuity was on the wavefunctions [ψ(x,t), ψ*(x,t)]. In wave space, we have no discontinuity on [φ(x,t), φ*(x,t)] at all, and ψ[φ(x,0), φ*(x,0),x,0] has no reason to show discontinuities, unless it has some very special behaviour in E3×R. The transition is rather continuous. As φ(x,t) is of the form a·exp[±i(k·x - ωt)] with constant amplitude a (the most basic waves!), φ(x,0) is perfectly regular.

That’s what I see for the time being: still better than nothing…

 

As for interference, I’ll check next time.

 

 

B118: WHEN TRYING TO APPLY WAVE INTERFERENCES TO BIOLOGY LEADS TO NEW SURPRISES...

22/07/2015

I don’t know if drawing (3) in the last bidouille is visible; it’s not on this computer. Yet, it was correctly converted, as I could check last time. Should anyone have visualization problems, please let me know, and I’ll ask the webmaster for further info.

Searching for some “nice” properties of wavepackets that could show possible applications to biology, I did find interesting results… but I also raised one more rabbit…

Just can’t believe this…

Okay. Let’s review some general properties first.

According to one of the basic postulates of quantum mechanics (it’s still a postulate…), the corpuscular energy of a given physical system equals the energy of its corresponding wavepacket:

 

(1)               mc²/[1 - v²(t)/c²]^(1/2) = hf(t)

 

where m is the mass of the corpuscle at rest and f(t) the frequency of its wavepacket. This equality extends to the momentum so that:

 

(2)               mvi(t)/[1 - v²(t)/c²]^(1/2) = ħki(t)  ,  vi(t) = [c , v(t)]

 

It will be enough to consider (1). The frequency of the signal in the reference frame at rest (proper frame) of the particle is:

 

(3)               f0 = mc²/h

 

Unless m varies in time, f0 is a constant. Still more generally, if θ(x,t) is the phase angle of the wavepacket ψ(x,t) = a(x,t)exp[iθ(x,t)], then ki(x,t) = ∂θ(x,t)/∂xi and f(x,t) = (1/2π)∂θ(x,t)/∂t is its frequency at point x, time t. Using (1), it can always be associated with a velocity field v(x,t). Remember v depends only on time for “perfect” rigid bodies. Any real body remains subject to deformations, however small, so that different parts of the body may not move at exactly the same speed.

Now take two cellular wavepackets ψ1(x,t) = a1(x,t)exp[iθ1(x,t)] and ψ2(x,t) = a2(x,t)exp[iθ2(x,t)] associated with two biological cells. It has been shown in previous bidouilles that there’s no reason why such wavepackets should not exist, or would vanish for some “suitable” reasons, as they result from constructive interference of less complex wavepackets (namely, those of proteins). When the two cells interact, their wavepackets interfere. We’re interested in this interference, to see what may or may not happen. We thus have a resulting wavepacket:

 

(4)               ψ = a·exp(iθ) = ψ1 + ψ2 = a1exp(iθ1) + a2exp(iθ2)

 

Expressing the amplitude a from the amplitudes of the two initial wavepackets and their phase shift is easy:

 

(5)               a² = a1² + a2² + 2a1a2cos(θ1 - θ2)

 

Important detail I forgot to mention: all amplitudes and frequencies are assumed non-negative, as we’re dealing with matter.

Expressing the resulting phase from the initial ones requires more mathematics. It turns out to be convenient to write this relation with both the half-sum and the half-difference of the phases. We first use the property of the exponential function to put (4) in the form:

 

ψ = a1exp[i(θ1 + θ2)/2]exp[i(θ1 - θ2)/2] + a2exp[i(θ1 + θ2)/2]exp[-i(θ1 - θ2)/2]

= exp[i(θ1 + θ2)/2]{a1exp[i(θ1 - θ2)/2] + a2exp[-i(θ1 - θ2)/2]}

= {cos[(θ1 + θ2)/2] + isin[(θ1 + θ2)/2]}{(a1 + a2)cos[(θ1 - θ2)/2] + i(a1 - a2)sin[(θ1 - θ2)/2]}

= (a1 + a2)cos[(θ1 + θ2)/2]cos[(θ1 - θ2)/2]{1 - a12tan[(θ1 + θ2)/2]tan[(θ1 - θ2)/2]} +

+ i(a1 + a2)cos[(θ1 + θ2)/2]cos[(θ1 - θ2)/2]{tan[(θ1 + θ2)/2] + a12tan[(θ1 - θ2)/2]}

 

From what we deduce that:

 

(6)               tan(θ) = [tan(+) + a12tan(-)]/[1 - a12tan(+)tan(-)]

 

where a12 = (a1 - a2)/(a1 + a2), and tan(+) and tan(-) are respectively short for tan[(θ1 + θ2)/2] and tan[(θ1 - θ2)/2].

The process is nothing new: it’s commonly used in modulation. Formula (6) resembles tan(a+b). It does reduce to the tangent of a sum of phase angles when a12 is unity (i.e. a2 = 0).
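
A minimal numeric check of (5) and (6), with arbitrary illustrative values for the amplitudes and phases:

```python
import numpy as np

a1, a2 = 1.3, 0.7       # arbitrary amplitudes
th1, th2 = 0.9, -0.4    # arbitrary phases

# Direct superposition (4)
psi = a1 * np.exp(1j * th1) + a2 * np.exp(1j * th2)

# Formula (5) for the squared resulting amplitude
a_sq = a1**2 + a2**2 + 2 * a1 * a2 * np.cos(th1 - th2)
print(np.isclose(abs(psi)**2, a_sq))                      # True

# Formula (6) for the resulting phase
a12 = (a1 - a2) / (a1 + a2)
tp, tm = np.tan((th1 + th2) / 2), np.tan((th1 - th2) / 2)
print(np.isclose(np.tan(np.angle(psi)),
                 (tp + a12 * tm) / (1 - a12 * tp * tm)))  # True
```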

In the general case, all phases and amplitudes depend on the time and space variables. Never mind: we need to find the expression for the resulting frequency f. So, we have no other choice but to differentiate (6) with respect to time. Easy, but quite painstaking. The result is:

 

(7)               f = ½ (f1 + f2) + ½ (f1 - f2)a12[1 + tan²(-)]/[1 + a12²tan²(-)] + a12’tan(-)/[1 + a12²tan²(-)]

 

with the prime standing for time derivation.

Right. We’ll be back to formula (7) soon, but first have a look at (5). As -1 ≤ cos(θ1 - θ2) ≤ +1, we get (a1 - a2)² ≤ a² ≤ (a1 + a2)². The resulting amplitude is therefore minimal each time the phase shift θ1 - θ2 is an odd integer multiple of π:

 

WHEN THE TWO CELLULAR WAVEPACKETS HAVE OPPOSITE PHASES, THE RESULTING WAVEPACKET IS THE SMALLEST POSSIBLE.

 

It can even be zero if a1 = a2! We have here a simple possible explanation of wavepacket reduction at large scales, requiring no arguments about dissipation. Actually, this is somewhat similar to spin opposition in Fermi pairs: the resulting spin is zero. Here, the idea is the same.

On the opposite, a will be maximal each time θ1 - θ2 is an even integer multiple of π:

 

WHEN THE TWO CELLULAR WAVEPACKETS ARE IN PHASE, THE RESULTING WAVEPACKET IS THE HIGHEST POSSIBLE.

 

To reuse the spin analogy, it’s as if the two spins pointed in the same direction. So, it wouldn’t be Fermi anymore, but Bose. The square of the resulting amplitude is then even greater than the sum of the squares of the initial amplitudes.

In between, each time θ1 - θ2 is an odd integer multiple of π/2, there will be no interference:

 

TWO WAVEPACKETS WITH PHASE SHIFT θ1 - θ2 = (2n+1)π/2, n ∈ Z, DO NOT INTERFERE. THE RESULTING AMPLITUDE IS JUST a² = a1² + a2².

 

It’s still greater than each of the initial amplitudes, but no greater than their sum. We do have an amplification, but not the best one.

We now turn to (7). When θ1 - θ2 is constant, the two wave 4-vectors are equal: k1i = k2i. So, the two frequencies are equal: f1 = f2, and the second contribution in (7) vanishes. If, simultaneously, a12 is constant, the third contribution vanishes as well and the resulting frequency reduces to the mean frequency fmoy = ½ (f1 + f2).

A surprise occurs as soon as θ1 - θ2 varies and a12 is not zero (i.e. a1 and a2 are different), even if the initial amplitudes are constant. Then (7) reduces to:

 

(8)               f = ½ (f1 + f2) + ½ (f1 - f2)a12[1 + tan²(-)]/[1 + a12²tan²(-)]

 

which is not positive definite! As a consequence, we can find f = 0 and even f < 0! f = 0 happens for:

 

(9)               (f1 + f2)/(f1 - f2) = -a12[1 + tan²(-)]/[1 + a12²tan²(-)]

 

Can we have this everywhere, anytime? Yes, since f1 and f2 vary. So:

 

WHEN (9) OCCURS, THE RESULTING FREQUENCY MAY BE GLOBALLY ZERO.

 

Yet, we’re dealing with matter, positive amplitudes and frequencies. The point is not really to find a zero frequency. The point lies in the fact that, if we go back to (1), it will send us back to a zero mass… Yet absolutely no mass has been destroyed. We only find a wavepacket whose frequency corresponds to no matter… or the quantum postulate is wrong.

Up to today, this is not the case: this de Broglie postulate has been verified.
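
To make the worry concrete, here is a hedged numeric exploration of (8), with purely illustrative constant amplitudes and frequencies, showing the resulting frequency crossing zero as the phase shift varies:

```python
import numpy as np

f1, f2 = 1.0, 3.0               # illustrative wavepacket frequencies
a1, a2 = 1.0, 0.5               # constant, unequal amplitudes
a12 = (a1 - a2) / (a1 + a2)

half_shift = np.linspace(-1.4, 1.4, 7)   # samples of (theta1 - theta2)/2
tm2 = np.tan(half_shift)**2
f = 0.5*(f1 + f2) + 0.5*(f1 - f2)*a12*(1 + tm2)/(1 + a12**2 * tm2)
print(np.round(f, 3))   # the endpoint values are negative: f = 0 is crossed
```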

Anyway, we are led to the same result (f zero or even negative) in the general situation, where all data are variable. Some transformations show that f = 0 gives the following Riccati equation for b12 = 1 + a12tan(-):

 

(10)           b12’ + fmoy b12² - f1 b12 + f1 = 0

 

Even if this equation cannot be solved by quadratures, there’s no reason why it should only have trivial solutions. Besides, b12 = const. is a solution iff it’s a root of fmoy b12² - f1 b12 + f1 = 0, which only involves a12tan(-) = const.
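
For what it’s worth, the Riccati equation (10) is easy to integrate numerically. A minimal sketch with assumed constant frequencies (values purely illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

f1, f2 = 1.0, 3.0                 # assumed constant frequencies
f_moy = 0.5 * (f1 + f2)

# (10): b12' = -(f_moy*b12**2 - f1*b12 + f1)
def riccati(t, b):
    return -(f_moy * b[0]**2 - f1 * b[0] + f1)

sol = solve_ivp(riccati, (0.0, 1.0), [0.2], max_step=0.01)
print(np.round(sol.y[0][-3:], 3))
# For positive f1, f2 the quadratic has no real roots, so b12 decreases
# monotonically and diverges in finite time: the tan-like behaviour
# expected of b12 = 1 + a12*tan[(theta1 - theta2)/2].
```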

 

THE RESULTING FREQUENCY MAY BE ZERO OR EVEN NEGATIVE IN THE GENERAL CASE, WHERE AMPLITUDES AND PHASES VARY.

 

In the “least bad” situation, what could a resulting wavepacket with non-zero amplitude but zero frequency correspond to? It’s generated by matter, through matter interaction, and it no longer corresponds to any material pair!!! 8(((((

Notice this remains true for the wave 4-vector…

 

Well, the only explanation I found so far is:

 

WHAT WE GET IS A PHYSICAL ENTITY (A WAVEPACKET) THAT NO LONGER SENDS BACK TO ANY MATERIAL SUPPORT…

 

The thing becomes even worse when f < 0: then the resulting wavepacket would send back to an antimaterial support!!!

There’s no salvation from thermodynamics: should we take phase angles as the ratios of thermal energies to temperatures, the results would be exactly the same… we would only replace the mechanical frequency with the thermal one, that’s all.

 

Are we opening one more new door or is it just a particular property of two-cell interaction?

 

Let’s have a look at the N-cell interaction. The resulting amplitude satisfies:

 

(11)           a² = Σn=1..N an² + 2 Σn<p anapcos(θn - θp)

 

For the resulting phase angle, we have:

 

ψ = a·exp(iθ) = Σn=1..N ψn = Σn=1..N anexp(iθn)

 

giving:

 

(12)           tan(θ) = [Σn=1..N ansin(θn)]/[Σp=1..N apcos(θp)]

 

Thus:

 

D²[1 + tan²(θ)]f = Σn=1..N Σp=1..N {[anfncos(θn) + an’sin(θn)]apcos(θp) + [apfpsin(θp) - ap’cos(θp)]ansin(θn)}

= Σn=1..N Σp=1..N {anap[fncos(θn)cos(θp) + fpsin(θn)sin(θp)] + (anap)’sin(θn)cos(θp)}

= Σn=1..N Σp=1..N {anap[½ (fn + fp)cos(θn)cos(θp) + ½ (fn - fp)cos(θn)cos(θp) + ½ (fn + fp)sin(θn)sin(θp) - ½ (fn - fp)sin(θn)sin(θp)] + (anap)’sin(θn)cos(θp)}

= Σn=1..N Σp=1..N {anap[½ (fn + fp)cos(θn - θp) + ½ (fn - fp)cos(θn + θp)] + (anap)’sin(θn)cos(θp)}

with

D = Σp=1..N apcos(θp)  ,  D² = Σn=1..N Σp=1..N ancos(θn)apcos(θp)

D²[1 + tan²(θ)] = D² + [Σn=1..N ansin(θn)]² = Σn=1..N Σp=1..N anapcos(θn - θp)

 

Set f = 0. Even for all an constant, we still find:

 

(13)     Σn=1..N Σp=1..N anap[½ (fn + fp)cos(θn - θp) + ½ (fn - fp)cos(θn + θp)] =

= Σn=1..N an²fn + Σn≠p anap[½ (fn + fp)cos(θn - θp) + ½ (fn - fp)cos(θn + θp)] = 0

 

and we still have non-trivial solutions because of the frequency shifts fn - fp.

So, it’s not a particular property of two cells.

And it’s still something different from an energy-free soliton, for we would need a gyroscopic term in the Lagrangian density of the wavepacket, one that would depend on it and couple to its space-time derivatives.

 

No. Here we have “something” we can clearly identify as a physical entity, generated by cell interaction, but “free of matter” (f = 0) or able to deliver a negative energy (f < 0). And it does exist, since its amplitude a is zero only when a1 = a2 and θ1 - θ2 = (2n+1)π, a very specific situation.

 

 

B117: A CONTROL COMMAND CHAIN FOR THE CNS

17/07/2015

We’re at last back to neurobiology, as I think I finally understood how the brain works. If what I’m going to talk about today is correct, then the central nervous system works in a completely different way from Turing machines, and the two just cannot be compared.

My central problem for long was: “how the hell can neuron networks work, according to what biologists teach us?” There was nothing coherent.

Back one more time. Electrical synapses are easy to understand: the nervous signal can propagate two-way, down or back up, and there’s a 100% chance it’s transmitted from one neuron to another. Chemical synapses are the exact opposite: the nervous signal can only propagate one-way, it vanishes in the presynapse to the benefit of a scattering wave, neurotransmitters may or may not reach their receptors and anyway, cherry on the cake, should the incoming resultant be lower than the threshold, leaving the receiving neuron silent, this last one can always self-activate by opening its Ca2+ channels (oscillation mode).

Question: what’s the need for a transmission, then?

If you transmit a signal, it’s usually made to be received…

If the network is built such that, whatever the information carried through it, a given neuron can “decide” to remain silent or to activate, then there’s no need for a collective participation.

That’s the reason why I didn’t go further than the neuron function (B36). Besides, still according to biologists themselves, the cortex looks more like a “jungle of connections” than anything “structured”, in the sense of “dedicated”, which does not prevent specialized areas for all that.

Now, it is collectively agreed that chemical synapses were a significant evolution of nervous systems. So, there should be a much more thorough reason for that. Indeed, why would it be a “significant evolution” if the causal link was broken and if there was no dedicated circuitry at all at the global scale?

This is all absurd if we try to reason in terms of time sequences or even recursivity and it confirms that the brain does definitely not work as a Turing machine, i.e.:

 

(1)               {set of given initial data} -> [convenient program] -> {set of results}

 

Right on the contrary, it works this way (as shown by bioelectrical data):

 

(2)               {set of results or goals to reach or satisfy} -> [proper graphs] <- {set of needs}

 

In (1), the results are unknown, or programs would be useless. In (2), needs and goals are known in advance and the system has to find paths in its neuron graph to satisfy these goals. For a given goal, there is no unique graph: many possibilities exist that lead to the same result.

Apart from mathematical calculus (the results of which are not known in advance), all more “concrete” goals are based on the principle according to which the system must remain homeostatic. So, it’s all about control-command, a discipline I was not so bad at.

Let’s take an example: thirst.

The need is rehydration, as thirst comes from a lack of water.

The goal is to get back to normal hydration: that’s the result. Our system must remain homeostatic (i.e. regulated) with respect to water.

To satisfy this need and reach this goal, we can draw a control-command chain:

 

(3)               [block diagram: the need N enters a comparator, which outputs the error E = N - R to the graph operator G; G issues the command C to the process P; P produces the result R, which is fed back to the comparator]

 

(N = need, E = error, G = graph, C = command, P = process, R = result)

 

The loop is generally not linear. G and P are operators. We have:

 

(4)               E = N – R

(5)               C = G.E

(6)               R = P.C

 

As for any other control-command system, we must find C = 0 as soon as E = 0, that is, when the goal is reached. The graph operator G acts as long as E > 0, i.e. the need remains greater than the result. As said above, G and P aren’t linear in general. The feedback on R guarantees regulation, so that the whole chain is homeostatic: when the system feels a biological need, the neuron graph is activated to control-command the corresponding process until the feeling vanishes. I’m thirsty (E = N), I drink (a succession of orders and motions, E = N - R, R increases, E decreases) until I’ve had enough (E = 0).

In computer science, programming is based on two fundamental principles that have to be satisfied: proof of correctness (proof that the program does compute the required function) and proof of termination (proof that the program does stop and does not loop forever).

Here, these two principles remain (whatever the paths in the neuron graph, the graph operator must properly command the process so that the goal is reached within a finite time). In addition, we have the general homeostatic condition on the system (the system must be regulated so that a decrease in something is compensated for by an intake from the surrounding environment, or a saturation in something is reduced by a decrease in the corresponding substances).
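
To fix ideas, here is a minimal simulation of loop (3)-(6), assuming scalar signals and simple proportional operators G and P (gains chosen arbitrarily; the text insists the real operators are non-linear):

```python
N = 1.0                     # the need (e.g. the rehydration target)
G_gain, P_gain = 2.0, 0.8   # illustrative gains standing in for G and P

R = 0.0                     # result starts at zero
for _ in range(30):
    E = N - R               # (4) comparator: error
    C = G_gain * E          # (5) graph operator issues a command
    R += 0.1 * P_gain * C   # (6) the process slowly drives the result up

print(round(E, 4))          # error has decayed toward 0: goal reached
```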

There are different yet equivalent ways of solving (4-6), depending on which quantity we’re interested in. We can, for instance, solve it for the error E. Then, we get:

 

(7)               E + P.(G.E) = (Id + P.G).E = N

 

A priori, there’s absolutely no reason why the graph and the process ops should commute. If we solve for the command C, we get:

 

(8)               (Id + G.P).C = G.N

 

And if we solve for the result R:

 

(9)               (Id + P.G).R = P.G.N

 

If the operator on the left is invertible, we can express R as a function of N:

 

(10)           R = (Id + P.G)⁻¹.(P.G).N

 

and the final condition R = N can only be reached in finite time iff (if and only if) P.G vanishes. Since, in practice, G induces P (motor system), this normally reduces to G = 0 (the solicited graph turns silent) at some time tend, when the chain starts at time t = 0.
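
A hedged linear-algebra illustration of (9)-(10), modelling G and P as small random matrices (purely illustrative, since the real operators are non-linear), which also shows that P.G and G.P need not commute:

```python
import numpy as np

rng = np.random.default_rng(0)
G = 0.5 * rng.standard_normal((3, 3))   # stand-in graph operator
P = 0.5 * rng.standard_normal((3, 3))   # stand-in process operator
N = np.array([1.0, 0.5, 0.2])           # vector of needs

PG = P @ G
R = np.linalg.solve(np.eye(3) + PG, PG @ N)   # (10)
print(np.round(R, 3))                   # the result driven by the needs
print(np.allclose(P @ G, G @ P))        # generally False: no commutation
```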

 

We no longer care whether each neuron composing the graph reliably transmits the information E coming out of the comparator; all we ask is that this information be transmitted, by one path or another, and produce the proper command on the process.

X-net was conceived so that information reaches the receiver for sure, whatever the route used.

Somewhat similarly here, paths are chosen so that the information coming in gives a command going out and the whole regulation chain works properly. Remember these paths are not static but dynamical: should the info get “stuck” somewhere, another path would be taken from that point. As a result, individual transmission abilities become secondary. What becomes relevant is the collective ability to transmit information throughout the whole graph. Errors may occur, they are even a characteristic feature of animal brains, but they aren’t a problem here, considering this: 1) any error occurring during the regulation process appears back in the difference E and is to be corrected at the next pass through G, and 2) neuron networks integrate “code correction” in this ability of individual neurons to self-activate. So, if the incriminated neuron remains silent while it should be active, it can always self-correct by opening its calcium channels (unless it’s damaged, of course).

Suppose now such damage occurs between two processes. Then plasticity applies. It’s an amazing property of evolved systems that they are able either to use other paths to reach the same goal (self-adaptation) or to make new connections between healthy cells that will serve as so many new paths.

This is simply unreachable with inert materials entering prefixed programs.

Only when the damage is too large (involving too many neurons), or localized but touching critical areas, will the brain be unable to satisfy the corresponding needs.

 

When we take for G the whole neocortex, the induced process P is called “consciousness”. We have a (large) set of needs in and a (large) set of goals-to-reach out.

Comas correspond to G = 0 at different scales. The larger the scale, the deeper the coma.

Cerebral death corresponds to G = 0 globally. Eqs (7-9) then give E = N (no need can be managed by the system on its own), C = 0 (control-command off), R = 0 (no reaction of the system). Let t = 0 be the instant where this situation occurs. We then have G(0) = 0.

Apart from artificial coma, all we can do for the time being is to plug in artificial substitutes for the natural command C to maintain vital processes. We can act upon P, but not G. So, we have to shunt G with an artificial regulating system, say A, and G becomes G’ = G + A in the above equations. When the system works on its own, A = 0 and G’ = G. When it no longer does, G = 0 and G’ = A.

The question becomes really interesting when, after a delay T, the system restarts on its own, while nobody expected it. Quite funny…

Because loop (3) is rather general. When there’s no serious damage, G(0) = 0 looks like a “secure mode”. This can be understood since the brain and the heart are the two most vital organs in the body, so that the CNS will preserve itself as best it can against dysfunctions.

 

What will make it go out of this secure mode, if it can no longer make any decision on its own, since all its graphs are off?...

 

The only way out is some external reactivation. Not necessarily external to the being, but at least external to its biological body. Either another Being, or the same being in another state.

I see no other solution.

 

 

 

 

 

B116: CLASSICAL MOTIONS SPENDING NO ENERGY AT ALL

12/07/2015

No worries: I’m rewriting this article, for I found it much too complicated and badly structured. We will, of course, talk about solitons again, since they show the same features as the PSI bodies we’re working on: compact wavepackets. Before that, I’d like to talk about something I discovered when reviewing my references on the mathematics of singular solutions of differential equations. I will restrict myself to 4D classical motion, as field models follow the same principle, as we’ve been showing all along these recent bidouilles.

Let us first recall the mathematical problem. Let x be a variable, y = f(x) a function of x and F(x,y,y’) = 0 a first-order differential equation, with y’ = dy/dx. We say a solution ys = fs(x) is a singular integral of F = 0 if ys satisfies both F = 0 and ∂F/∂y’ = 0 at y’ = ys’. Such a solution obviously satisfies the differential equation from the start, but it cannot be obtained by fixing any value of the integration constant, as is the case for any other regular solution of F = 0. To find ys, we need to eliminate ys’ from the pair of equations F = 0, ∂F/∂y’ = 0, then check which of the solutions obtained this way are consistent with F = 0: only those will be considered singular integrals.
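
As a worked example (mine, not from the text), Clairaut’s equation y = x·y’ + (y’)² has the regular solutions y = cx + c² and a singular integral obtained exactly as described, by eliminating y’ between F = 0 and ∂F/∂y’ = 0:

```python
import sympy as sp

x, y, p = sp.symbols('x y p')    # p stands for y'
F = x*p + p**2 - y               # Clairaut: F(x, y, y') = 0

p_s = sp.solve(sp.diff(F, p), p)[0]   # dF/dp = x + 2p = 0  ->  p = -x/2
y_s = sp.solve(F.subs(p, p_s), y)[0]  # eliminate p
print(y_s)                            # -x**2/4: the singular integral

# Check it still satisfies F = 0 with p = dy_s/dx:
print(sp.simplify(F.subs({p: sp.diff(y_s, x), y: y_s})))   # 0
```

The envelope y = -x²/4 is not obtained for any value of the constant c, which is the whole point.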

There’s a straightforward and very general application of this to classical mechanics. Let t be the time variable, x(t) a function of time and v(t) = dx(t)/dt its velocity. We are interested in finding particular motions of an incident non-deformable solid body of rest mass m under the influence of an external field that can either be a free wave or be produced by a non-deformable solid source with rest mass m’, distinct from the incident body. These particular motions must be those for which:

 

(1)               L[x(t),v(t),t] = -m(t)c²[1 - v²(t)/c²]^(1/2) + m(t){G[x(t),t].v(t) - φ[x(t),t]} = 0

 

and, simultaneously:

 

(2)               ∂L/∂v(t) = m(t)v(t)/[1 - v²(t)/c²]^(1/2) + m(t)G[x(t),t] = 0

 

that is, motions with constant action all the time and zero generalized momentum. These motions become particularly interesting to search for since Hamilton’s formalism immediately implies that:

 

(3)               H = v(t).∂L/∂v(t) - L = m(t)c²/[1 - v²(t)/c²]^(1/2) + m(t)φ[x(t),t] = 0

 

everywhere along these trajectories [H = 0 follows directly from L = 0 and ∂L/∂v(t) = 0]. If such motions exist, this means they spend no energy at all. Physical intuition would suggest that such trajectories cannot exist, or only lead to fixed points, or lie outside physical domains. We’re going to see this is not at all the case.

From (2) and (3), we get:

 

(4)               G[x(t),t] = -v(t)/[1 - v²(t)/c²]^(1/2)

(5)               φ[x(t),t] = -c²/[1 - v²(t)/c²]^(1/2)

 

We have to solve for v(t), since what we are looking for is the motion of the incident body. x(t) is its position at time t in Euclidean 3-space; it is also the point where the components of the external field are observed at this time. Doing this, we assume that the observer stands on the incident body or is this body itself. Dividing (4) by (5) directly gives:

 

(6)               v(t) = dx(t)/dt = c²G[x(t),t]/φ[x(t),t]

 

whereas solving for v(t) in (5) gives

 

(7)               v(t) = cn{1 - c⁴/φ²[x(t),t]}^(1/2)   (n the unit vector along the motion)

 

and reporting this result in (4)

 

G[x(t),t] = n{1 - c⁴/φ²[x(t),t]}^(1/2)φ[x(t),t]/c

G²[x(t),t] = {1 - c⁴/φ²[x(t),t]}φ²[x(t),t]/c² = φ²[x(t),t]/c² - c²

 

or, in 4D notations:

 

(8)               GiGi = c²

 

Surprisingly enough, we fall back onto an “old” result, B89: the very same condition as for motion under an event horizon… (8) is actually the maximal condition for a G-field, or any velocity field, according to classical space-time relativity.
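
A one-line symbolic check that (4) and (5) indeed imply (8):

```python
import sympy as sp

v, c = sp.symbols('v c', positive=True)
gamma = 1 / sp.sqrt(1 - v**2 / c**2)
G, phi = -v * gamma, -c**2 * gamma        # (4) and (5)
print(sp.simplify(phi**2 / c**2 - G**2))  # c**2, i.e. condition (8)
```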

We quickly verify that the Lagrange equations of motion (d/dt)∂L/∂v(t) = ∂L/∂x(t) bring nothing more, as they are identically satisfied. Our “singular” equations of motion are entirely contained in the system (6)-(8). In (6), the G-potentials are known, since they are solutions of a Maxwellian system of field equations. If there is a source, v’(t) is its velocity: there’s no reason why it should coincide with v(t). Anyway, knowing v’(t) and the source distribution m’(x,t), we deduce G(x,t) and φ(x,t) at any observation point x in E3 distinct from the origin (the centre of gravity of the source). Then, we make x and x(t) coincide and we solve (6). If the source is point-like, the accessible x are everywhere outside the source. If the source is dust-like, the incident body can be found either outside or inside the source “cloud”, centre excepted. Inside the cloud, the field equations must be solved with a non-zero right-hand side.

 

THERE EXIST TOTALLY ENERGY-FREE 4D CLASSICAL MOTIONS WITH CONSTANT ACTION, NAMELY ALL SOLUTIONS OF (6) AND (8).

THESE MOTIONS ARE SINGULAR INTEGRALS OF THE LAGRANGE EQUATIONS OF MOTION, SO THAT CAUCHY’S UNIQUENESS THEOREM NO LONGER APPLIES. IN OTHER WORDS, NONE OF THESE SOLUTIONS CAN BE UNIQUELY DEFINED.

 

Not only are these solutions anything but trivial, they even form a large class.

Understand: against all odds, they have nothing exceptional…

Nowhere in my literature on theoretical mechanics did I ever hear of such possibilities. I suppose it’s mainly due to the conception we have that mechanical systems spending no energy at all are simply “metaphysical”. Well, this is apparently not the case, and we’re now quite accustomed to discovering “new behaviours” that contradict our conceptions of rationality.

Amongst all physically possible solutions of (6)-(8), one can find solitons. These are amplitude-bounded oscillatory motions:

 

(9)               x(t) = a(t)exp[ib(t)]  ,  a(t) = 0  for  t ≥ tf

 

for some instant tf [if tf is infinite, then the asymptotic condition is a(∞) = 0]. Again, we’re more used to finding boundaries in space. But, the variable here being time, compact waves or wavepackets have to be bounded in time (or time-limited). So what, after all?

Inserting (9) into (6) yields:

 

(10)           [a’(t) + ia(t)b’(t)]exp[ib(t)] = c²G[a(t),b(t),t]/φ[a(t),b(t),t]

 

where the prime is for the time derivative. It should be clear that this is only possible if the term on the right is itself an oscillating function:

 

(11)           G[a(t),b(t),t]/φ[a(t),b(t),t] = V[a(t),b(t),t]exp{iQ[a(t),b(t),t]}

 

Hence the system of coupled equations:

 

(12)           a’(t)cosb(t) - a(t)b’(t)sinb(t) = V[a(t),b(t),t]cosQ[a(t),b(t),t]

(13)           a’(t)sinb(t) + a(t)b’(t)cosb(t) = V[a(t),b(t),t]sinQ[a(t),b(t),t]

 

to what we can add:

 

(14)           [a’(t)]²  + a²(t)[b’(t)]² = V²[a(t),b(t),t]

 

According to our starting hypothesis, at t ≥ tf we must have x(t) = 0, that is, we must be at the origin. There, according to the Maxwell model, both G and φ are expected to be infinite. However, their ratio is not necessarily so. A simple reasoning easily shows that we can rather expect to have:

 

(15)           V[0,b(t),t] = 0  for  t ≥ tf

 

Indeed, if we reach the origin at t = tf and if we’re supposed to stay there, then both a(t) and a’(t) must vanish for t ≥ tf, while we expect b’(t) (a frequency) to remain regular. If (15) holds, it shows that while G and φ can well diverge at the origin, their ratio remains regular there, equal to zero, only indicating that φ diverges faster than G, a result consistent with (8).

Free G-waves are typical of the situation. They can easily give birth to energy-free, amplitude-bounded oscillatory motions of incident bodies under their influence. The same happens for EM fields and free waves, as G/φG = A/φEM.

 

Similar procedures can be carried out with 4-parameter fields. However, we do need some gyroscopic contribution [the second term in (1)], for if this term is absent then, as we can already see in (6), the only solution is v(t) = 0 and φ = -c², leading to a fixed position: nothing interesting…

In particular, the Maxwell model leads to no non-trivial solutions of this kind:

 

ℒG = (c²/8πk)Wij(x)Wij(x) - pi(x)Gi(x) = 0

∂ℒG/∂Wij(x) = (c²/4πk)Wij(x) = 0

 

implies Wij(x) = 0, which in turn implies pi(x)Gi(x) = 0. Even the extended Maxwell:

 

ℒG = -(c²/8πk)φpl²[1 - W²(x)/φpl²]^(1/2)F[G(x),x] = 0

∂ℒG/∂Wij(x) = (c²/4πk)Wij(x)/[1 - W²(x)/φpl²]^(1/2)  = 0

 

implies Wij(x) = 0 which in turn implies F[G(x),x] = 0. To find non trivial solutions, we need a contribution of the form Yij[G(x),x]Wij(x), with Yij skew-symmetric. Such contributions actually appear in semi-classical Yang-Mills models, assuming we keep for Wij(x) the linear part of the YM fields. The complete field being W’ij(x) = Wij(x) – i(m/ħ)[Ai(x),Aj(x)] (matrix Lie bracket), the kinematical term in W’²(x) gives W²(x), -2i(m/ħ)[Ai(x),Aj(x)]Wij(x) and a quartic term -(m/ħ)²[Ai(x),Aj(x)][Ai(x),Aj(x)].

 

(in components: W’ija = Wija - fabcAibAjc  =>  W’ijaW’ija = WijaWija - 2fabcAibAjcWija + fabcfadeAibAjcAidAje, where the fabc = -facb are the structure constants of the group)

 

This works, for instance, for the unified SU(3,1) gauge group (1575 structure constants). But it’s all light-years away from our practical considerations… :)

 

 

B115: GENERATING AN ELECTRIC CHARGE FROM A MASS

28/06/2015

There’s however a possible way to generate an electric source from a mass source. That’s what we should look for if we start from the principle that all known fundamental interactions derive from an original one, of a gravitational kind.

The starting point is to notice that ε0 and k are universal coefficients defined in the vacuum: ε0 is the electric permittivity of the vacuum, k is the analogue for gravity. So, let us rename it k0 for convenience. In matter, these coefficients become ε and k. Let:

 

(1)               K0 = (4πε0k0)^(1/2)  ,  K = (4πεk)^(1/2)

 

When K = K0, we’re in the vacuum, i.e. outside matter. So, if the matter distribution is to be extended through the ratio K/K0, it should be zero in the vacuum, i.e. for K = K0. A translation K -> K - K0 brings this back to K = 0. Introduce the more general mass distribution:

 

(2)               m(x,K/K0 - 1) = m0(x)f(K/K0 - 1)

 

We should thus have f(0) = 0. The function f is unit-free. Around K = K0, we can perform a perturbative expansion of (2) in powers of (K/K0 - 1):

 

(3)               f(K/K0 - 1) = Σn=0..∞ f^(n)(0)(K/K0 - 1)^n/n!

 

Assuming, for the time being, that f is smooth. Separate the even powers from the odd ones:

 

(4)               f(K/K0 - 1) = Σn=0..∞ f^(2n)(0)(K/K0 - 1)^(2n)/(2n)! + Σn=0..∞ f^(2n+1)(0)(K/K0 - 1)^(2n+1)/(2n+1)!

 

We have:

 

(5)               ½ [f(K/K0 - 1) + f(1 - K/K0)] = Σn=0..∞ f^(2n)(0)(K/K0 - 1)^(2n)/(2n)!

(6)               ½ [f(K/K0 - 1) - f(1 - K/K0)] = Σn=0..∞ f^(2n+1)(0)(K/K0 - 1)^(2n+1)/(2n+1)!

 

We assume that (5) leads to gravity and (6) to electromagnetism. When f has parity +1, (6) is zero, representing electrically neutral massive bodies. When f has parity -1, (5) is zero, representing massless electrically charged bodies. The equivalence is given through:

 

(7)               mG(x,K/K0 - 1) = ½ m0(x)[f(K/K0 - 1) + f(1 - K/K0)]

(8)               mEM(x,K/K0 - 1) = ½ m0(x)[f(K/K0 - 1) - f(1 - K/K0)] = -ρ(x,K/K0 - 1)/K0

 

where ρ is the charge distribution (in matter, of course). The rest follows: mG(x,K/K0 - 1) generates a G-field in M; ρ(x,K/K0 - 1), an EM-field.

The idea behind this is to consider that mass and gravity appear as the symmetric contribution of a non-symmetric matter distribution (2), while charge and electromagnetism appear as its skew-symmetric part. Thus, at the beginning, we have a non-symmetric G-field splitting into a symmetric part we call “gravitation” and a skew-symmetric one we call “electromagnetism”. In the vacuum, of course, both mG and mEM are zero, since m is zero.
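
A minimal numeric sketch of the parity split (5)-(8), with an arbitrary illustrative f satisfying f(0) = 0:

```python
import numpy as np

def f(u):                              # u = K/K0 - 1; f(0) = 0,
    return np.expm1(u) * np.cos(u)     # neither even nor odd

u = np.linspace(-0.5, 0.5, 5)
f_G  = 0.5 * (f(u) + f(-u))            # even part -> m_G,  eq. (7)
f_EM = 0.5 * (f(u) - f(-u))            # odd part  -> m_EM, eq. (8)

print(np.allclose(f_G + f_EM, f(u)))   # True: the split is exact
print(np.allclose(f_G, f_G[::-1]))     # True: even under u -> -u
print(np.allclose(f_EM, -f_EM[::-1]))  # True: odd under u -> -u
```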

 

This enables us to stay in dimension 4 and use functionals on G-fields, while treating mass and charge distributions on an equal footing.

 

We could try to do the same with the two other, nuclear, interactions, in a semi-classical non-Abelian model, using SU(3,1) for instance, but my goal, our goal, is not the very early Universe, it’s neurophysics and parapsychology. So, I’ll “reduce” myself to electrogravity.

Its application to neurophysics becomes clearer when we recall that the neuron is electrically charged: at rest, its membrane is under voltage. There’s a non-equilibrium distribution of ions from one side of the membrane to the other. This is why using PSI models involving both G- and EM-fields is more adequate than our previous models involving G-fields only.

Our first attempt was to introduce a two-state mass distribution, source of a two-state G-field.

Our second attempt is a single mass distribution depending on an additional parameter (K), source of a single G-field from which electric charges and EM-fields can arise.

I do not pretend at all that this last model is the one at work in Nature, only that it’s simpler than the first one. It also reveals a close link between symmetry properties and sources.

Whatever happens “in real life”, we can give a new and wider definition of a PSI-field, saying:

 

WE WILL CALL “PSI” A PHYSICAL FIELD OVER MINKOWSKI SPACE-TIME M WHICH MATHEMATICALLY IS A FUNCTIONAL Ψ[G(x),A(x),x]. THE COEFFICIENTS OF THIS FUNCTIONAL, IN AN EXPANSION IN POWERS OF THE Gi(x) AND THE Ai(x), ARE x-DEPENDENT EXPRESSIONS GIVEN BY QUANTUM FIELD THEORY. AS A CONSEQUENCE, WE COULD EVEN BE MORE PRECISE, SAYING IT’S A FUNCTIONAL Ψ[ψ(x),ψ*(x),G(x),A(x),x] OVER M, ALSO INVOLVING WAVEPACKETS.

 

As particular examples of such fields, we find non-linear couplings between gravity and electromagnetism, non-linear gravitational self-couplings and non-linear electromagnetic self-couplings. Other couplings involve self-couplings of the wavepacket and couplings between the wavepacket and the two interaction fields.

All known quantum field theory is therefore contained in a single PSI field expression. Take the SU(3,1) isospace-time and you’ll get all the couplings of the Standard Model so far.

 

All this deep theoretical work seems very far away from our concerns, but it serves to build, justify and reinforce arguments in favour of the existence of the PSI. What we are doing, from article to article, is building the most consistent physical theory of the PSI possible. To achieve this, we need to connect it to existing physics. That’s why, however painstaking it may be, we have no other choice, if we want to treat the subject seriously, than to talk of relativistic quantum field theory.

The living, i.e. the thermodynamically active, does not reveal the PSI, or it would have been revealed long ago. That’s why we have to turn to quantum theory and see what we can or cannot get out of it.

The living neuron “only” gives birth to mental processes: this is purely biological.

What is likely to come “next” involves the dead neuron, where biology has nothing to learn anymore, except the dismantling of the biological cell.

On the contrary, the wavepacket, once formed, does not dismantle. It can be altered, even destroyed, by negative interferences, but it does not dismantle.

The field Gi(x) is useful to describe the body; the field Ai(x), to describe the mind. An animal is an autonomous system made of a body, a mind and a coupling between them. If we want to be as complete as possible, even if we only schematize, we have to take Gi(x), Ai(x) and their couplings, as well as their sources and the couplings between them, into account in a description of the PSI.

RQFT shows that “dead” matter continues producing interaction fields. The most common model is that of condensates: quantum condensates replace “active” fields in “afterlife” processes. They aren’t the only ones: any wavepacket acts as a source of interacting fields. It’s systematic when the wavepacket is fermionic. When it’s bosonic, the condition is that it be complex-valued, i.e. have a variable phase. “True neutral” wavepackets produce no source, but in the existing catalogue of elementary particles, we know of no such particles that would be their own antiparticles while not being the vectors of a fundamental interaction.

 

 

 
