https://www.scienceforums.net/topic/46143-stirling-turbine/?tab=comments#comment-1148735

But it is also something of a fresh start, so I'm starting a new thread here.

To get right into it, I purchased several identical Stirling engines from https://www.stirlinghobbyshop.com/ (that is not a promotion, just a fact).

What I liked about these was that they came in kit form, so I could make various modifications.

The first modification was simply to cut down on heat loss: I replaced the steel bolts that came with the engine with nylon bolts, added insulation where necessary, and used a vacuum-insulated flask, basically to minimize all heat transfers as much as possible other than heat transfer through the working fluid (air) inside the engine.

This was the first test, which I had been thinking about for years. I first mentioned the idea on the Stirling Engine forum back in the year 2010:

https://stirlingengineforum.com/viewtopic.php?f=1&t=478

I wrote (edited for length):

Quote: Stirling Engine Thermodynamics

I've been reading quite a bit about thermodynamics lately, especially in regard to the fact that when a gas is "made to do work" it loses heat...

Now, formerly I had been under the impression that a Stirling engine functions by means of a temperature differential... the air travels back and forth from one end of the chamber to the other and picks up or loses heat in that way...

But I'm becoming aware that there is also apparently something a little more subtle going on. When the air in the chamber heats up, expands, and then does work against the piston, the heat does not only travel to the "heat sink" at the cold end of the chamber; some of the heat is actually converted into work. In other words, what cools the hot expanding air back down is not only contact with the cold end of the chamber: heat is also lost on account of the gas being made to do work against the piston.

What I'm wondering is just how much heat is actually being absorbed in this way, i.e. converted into work, as opposed to being absorbed by the heat sink (the cold end of the chamber at ambient temperature).

If more heat is extracted as work than what actually reaches the heat sink, then theoretically, insulating the cold end of the displacer chamber against the external ambient temperatures would improve engine efficiency.

Nobody agreed with this.

Everyone tried to educate me about how a Stirling engine works. The sink is colder than the engine. Heat flows from the engine to the sink.

Nobody seemed interested in running such an experiment.

Now, my observation of Stirling engines over the years, mostly on YouTube, seemed to show that at least some Stirling engines converted ALL the heat, not transferring any to the sink.

This was mainly due to seeing an engine that someone discovered could run quite well without a flywheel.

There are several other examples that can be found.

My reasoning was that the engine was moving too fast to "sink heat" by conduction, yet the piston was returning "on its own", without the stored momentum of a flywheel to push it back down the cylinder.

It seemed to me that if ALL the heat added in one cycle were not used up (converted to work), then the piston would not be able to return against the remaining hot expanding gas without help from the flywheel.

My reading of old thermodynamics books, and of books on liquefying gases and such, also had me convinced that when a gas expands and does work driving a piston in a cylinder, very cold temperatures can result, potentially much colder than ambient.

So the first thing I did when I got one of my kits together was to run this experiment:

The engine was running on scalding hot water from the tea kettle, but without the steel bolts to conduct heat, the top of the engine felt room temperature.

The heat could be going either way.

I had really expected that probably everybody was right and insulating the sink would quickly bring the engine to a stop.

But it did not stop.

After it recovered from the insulation rubbing on the flywheel, it actually ran, by my friend's stopwatch, about 18 RPM faster, and in this condition, with the sink insulated, it also ran about an hour longer than it had previously run without the insulation.

Certainly I could not be the only one in the past century to notice these things, or do some such experiment, right?

So I did more experiments with the engine using ice.

It took 33 hours before the engine stopped running as the ice had all melted.

I repeated the experiment with an identical vacuum flask holding a solid cylinder of ice, but this time without starting the engine; the ice had all melted five hours sooner.

Apparently, with the engine running, actively converting the incoming ambient heat to "work", less heat reached the ice, so the ice took five hours longer to melt, and the engine continued running for those additional five hours. In fact, I got so tired of the engine not stopping that I added a piece of aluminum on top, from an old appliance electric outlet box, to draw down more heat.

Now, both of these outcomes were predicted based on basic thermodynamic principles, but the moderators on the other forum said such a result would be "perpetual motion" and "a violation of the second law of thermodynamics", locked the thread, and banned me from the forum for "invoking perpetual motion".

I tried to point out that these were stock model Stirling engines and that I was not advocating perpetual motion; I just ran the experiments because, as far as I could find, no one else ever had before.

And obviously these TOY engines were nowhere near 100% efficient and in all cases __did stop running eventually__, but

I was banned anyway.

More detailed descriptions are uploaded to a YouTube playlist:

The basic theory I'm developing to explain these results is that the Carnot mathematical limit on heat engine efficiency between two reservoirs rests on the fact that if the engine were to produce cold lower in temperature than the sink, heat would flow back into the engine from the sink, which directly limits how cold the engine can get and therefore its maximum theoretical efficiency.

But in reality, the engine can expand the gas to the point where it may be colder than the "sink", in which case insulating the sink, thereby blocking backward heat infiltration into the engine, can allow it to run better, faster, and longer.

The engine will still use up the heat, the temperatures will equalize, and the engine will stop running, at least in the case of added heat.

With cold applied, below ambient, the heat comes from the atmosphere, heated by the sun, and that heat may take a little longer to exhaust. But no insulation is perfect, so the ice still melts.
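For reference, the Carnot limit discussed here is the standard formula η = 1 − Tc/Th (temperatures in kelvin). A minimal sketch, assuming illustrative temperatures for the tea-kettle water and the ambient air (not measured values from these experiments):

```python
# Carnot efficiency limit for a heat engine between two reservoirs:
# eta = 1 - T_cold / T_hot, with temperatures in kelvin.
# The temperatures below are illustrative assumptions, not measurements.
def carnot_efficiency(t_hot_k, t_cold_k):
    return 1.0 - t_cold_k / t_hot_k

t_hot = 368.0   # ~95 C, scalding water from a tea kettle
t_cold = 293.0  # ~20 C, ambient room temperature

print(f"Carnot limit: {carnot_efficiency(t_hot, t_cold):.1%}")  # about 20%
```

So even at the theoretical maximum, most of the input heat must be rejected to the sink for a temperature difference of this size.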

Could the second law of thermodynamics be a result of incomplete observations, of neglecting to conduct such simple experiments?

I haven't come to any hard conclusions at this point. I have many additional modifications to make and many more experiments to run, but the results are such that at this point I would already like some comments and feedback.

Please!


It looks as if it is not possible.

However, it seems there is a way to derive spacetime from a system without time and dynamics.

I wrote an article on how it can be done.

The link to the article is: https://vixra.org/abs/1812.0157

Or the direct link to the PDF document: https://vixra.org/pdf/1812.0157v5.pdf

In the article, I propose the following model:

1. Time and dynamics are absent at the fundamental level. There is no motion, no energy, nothing related to time and dynamics at the fundamental level.

2. At the fundamental level there is a Euclidean space with at least 4 dimensions. (And yes, I know about the impossibility of deriving a hypersurface with a Lorentz metric in Euclidean space. There is a solution to this in the theory.)

3. All dimensions are equal; there is no preferred direction.

4. Reiterating what was written before: time and dynamics are completely absent at the fundamental level. There is nothing like a time dimension, etc.

5. There is some field or fields at the fundamental level. The field(s) are defined at each point of the fundamental space and take values in the set of real numbers (scalar fields). (The scalar fields described in QFT textbooks have different properties than these fields, so the statement about insufficient degrees of freedom is not applicable here; but let's put that aside.) There is no time or dynamics, so the fields also have no dynamics. This also means full determinism. I will call these fields fundamental ones. I suppose that the fundamental fields are smooth and are described by certain partial differential equations. Each fundamental field is independent of the other fundamental fields; that is, no other fields appear in the equations describing any fundamental field.

6. Quite obviously, it is not possible to add an observer to the model in the traditional way: an observer always requires time for its existence. The absence of time means something else is needed in order to add observers. Instead of a time dimension, I use a space dimension; the details are in the article. All space dimensions, as I already wrote, are equal, with no preferred direction. An observer is able to observe changes because I postulate that changes across consecutive 3-D hyperplanes in the fundamental space can lead to the appearance of an observer. [This is the hardest point of the model to understand.]

7. Because an observer appears as a result of changes of the field(s) across consecutive 3-D hyperplanes in the fundamental space (I reiterate: there are no changes in the fundamental space, but the state of the projections of the fundamental field(s) onto consecutive hyperplanes can change), the observer does not exist objectively. Even more, the Universe does not exist objectively: it exists only when there is some observer observing it. Without an observer, spacetime in the model is just a mathematical abstraction.

So I propose subjective idealism as the foundation of my theory. The fundamental space, with the field(s) defined on it, exists objectively. But because an observer cannot exist without time and dynamics, the space and fields exist in a quite nontraditional way, without any possibility of direct observation. Their presence can be verified only indirectly, based on how well the theory fits observations.

As one can notice, there is no relativism in the model. There is no aether in the model. There is no motion in the model. There is no gravity in the model.

What I claim to have done in the article, within the scope of the theory:

1. Derived the anthropic principle. Yes, derived, not postulated.

2. Derived the principle of causality.

3. Derived the equations of special relativity.

4. Derived the principle of locality.

5. Found what gravity is.

6. Derived the equations of general relativity. And I derived them in such a way that there is a clear explanation of why the gravity part is absent from the energy-mass tensor.

And all of the above is done in a model without time, without dynamics, without the principle of locality, and without gravity.

So I remove many phenomena from the list of fundamental ones.

The claims, as can be seen, are quite big. I am interested in testing the theory: are the results correctly derived, and are there any obvious weaknesses?


```
a special rule is necessary:
If the mind is locked "inward" and the "outward" is not directly effective,
it can perceive itself without giving contradictory information to the outside.
The mind-body coupling (in us humans, probably via the nervous system)
connects layer infinity with layer k.
The mind can probably take up information from layer k directly,
without changing it.
Conversely, for the mind to give information to the body,
a complex model is required that does not violate the layer hierarchies:
when there is a quantum effect in the nervous system and
there are, e.g., three possible target quanta that could be chosen physically at random,
then the mind can specifically change that choice,
but may only choose options
that would have been physically possible.
So the mind from layer infinity can intervene in layers k and k + 1
without risking contradictions from its informational advantage at layer infinity,
because what the body does at its instigation,
the body could have done "naturally" (accidentally) without a mind.
A "free" mind without a body would have two problems:
1. It could not "experience" anything, since the infinity layer cannot perceive itself without self-isolation, so it would have no feelings or perceptions; it would be "deaf".
2. It could not do anything and would be "silent", because it could act neither at layer infinity nor at layer k.
```

```
Elsewhere, I speculated that gravitation is the interaction of the mind.
So a mind could influence other minds (and the bodies attached to them) gravitationally, via spacetime curvature: it would be "heavy".
Dark matter could be such "free or pure minds".
Since gravity acts on all physical objects,
I assume that these are linked to mind objects.
I don't know how a consciousness arises from such pre-conscious mind objects.
In the case of reproduction, this could happen via the parents' minds,
but the first consciousness must have come from somewhere.
It is interesting that, according to my theory, consciousness needs a similar
division between inside and outside as the living cell.
I approached the whole thing from the (unpopular) dualistic perspective,
but that is how I experience my mind and body.
The layer logic is a bit of a stretch there,
but it helps with a surprising number of questions
and has few advocates besides me ...
Link to layer logic:
https://www.researchgate.net/post/Is_this_a_new_valid_logic_And_what_does_layer_logic_mean
In German (with more details):
https://www.philosophie-raum.de/index.php/Thread/28199-Stufenlogik-Trestone-reloaded-Vortrag-APC/
Does this help to understand "mind" - and what more is to be added?
Yours,
Trestone
```


if we take a normal helium atom

-1 for electron

0 neutron

and +1 for proton

and then convert it to a magnetic field strength calculation for attraction or gravity

-25 electron

±50 neutron

+ 100 proton

The neutron is both attracted to and repelled by the proton, and the nuclear force takes place on both small and large scales of the universe.

The neutralizing force of the neutron removes the magnetic attraction of the proton, making the proton start out at a 49-to-1 field strength and the electron at -24 to -1, but after -1 it becomes +1. This sets a moment of singularity for the electron, which now rolls around the magnetic field, and spin is created as the particles are attracted on the dark side (the part of the electron farthest away) and spin toward the positive proton. This makes perpetual motion and spin. I can make almost all the un-unified forces explainable. I last posted all 23 pages, but I'm dyslexic, so it was not acceptable. I hope this brief summation is more acceptable, and my apologies if not. I've attached a diagram of a more complex atom at work.

Scott A Miles, not a doctor.

According to the special relativity theory equation:

Suppose that R1 and R2 are both inertial reference systems whose relative velocity is u. An object is at rest relative to R1, and the length of the object is L0 in the X-axis direction; in reference system R2, the measured length of the object is L. Then the relationship between L and L0 is L = L0·√(1 − u²/c²).

The following experiment is designed in the inertial reference system R1:

In the above picture, M1, M2, M3 and M4 are four total-reflection mirrors whose cross-sections are isosceles right triangles. A photon μ strikes the surface of M2 at an incidence angle of 45 degrees and is reflected among the four mirrors. Assuming energy is conserved during the reflection process, the photon μ will stay in the cage of the four mirrors and bounce back and forth forever.

Now consider another inertial reference system R2, which is moving at a constant speed in the direction of the X axis relative to R1. Observed from R2, according to the second equation, the four mirrors M1, M2, M3 and M4 shrink in direction X and remain unchanged in direction Y, as illustrated in the following picture:

Because the mirrors are deformed in R2, the photon μ will not be parallel with Y after being reflected by M2. Thus the photon μ will escape from the gap between M1, M2, M3 and M4 after several reflections.

So in the two reference systems above, two different results will be observed for the same thing, which conflicts with the fact that everything has its uniqueness.
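For concreteness, the contraction the thought experiment relies on is the standard special-relativity formula L = L0·√(1 − u²/c²). A minimal sketch, with the rest length and the speed u chosen as illustrative assumptions:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def contracted_length(l0, u):
    # Lorentz length contraction along the direction of relative motion:
    # L = L0 * sqrt(1 - u^2/c^2)
    return l0 * math.sqrt(1.0 - (u / C) ** 2)

# Illustrative values: a 1 m mirror edge seen from a frame moving at 0.6c
print(contracted_length(1.0, 0.6 * C))  # 0.8
```

Only lengths along X contract; lengths along Y are unchanged, which is what deforms the isosceles cross-sections in R2.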

The essence of Maxwell's equations and the principle of the constancy of the speed of light

The Maxwell equations are as follows:

The experimental basis for equation 3.1 is Coulomb's law; for equation 3.2, the Biot-Savart law. Equation 3.3 is deduced from Faraday's law of electromagnetic induction, and equation 3.4 from the Ampère circuital theorem and Maxwell's hypothesis.

In solving the equations, Maxwell did not mention the reference system. But Coulomb's law and the Ampère circuital theorem are conclusions based on experiments in the reference frame of the Earth. So Maxwell's equations cannot be regarded as having nothing to do with the reference system. It is not scientific to discuss Maxwell's equations apart from the reference system.

Now a new theory will be put forward to explain the constancy of the speed of light and the experimental result of Michelson and Morley.

The experiments done by great scientists such as Coulomb, Ampère and Faraday involve two objects that remain mutually stationary. We define this as reference frame 1. If we observe reference frame 1 from another reference frame 2 in relative motion, the experimental reference frame 1 has a certain velocity, but the two objects in the experiment remain relatively stationary, so I have reason to conclude:

The speed of the electromagnetic wave is constant relative to its emission source.

Electromagnetic waves come from the emission source. Maxwell's equations do not mention the reference system, and the experimentally measured speed of electromagnetic waves accords with the solution of Maxwell's equations. So it can be inferred that the speed of an electromagnetic wave is constant relative to its emission source. Scientists are always relatively static to the emission source when testing the velocity of electromagnetic waves, so the test result is always, and must be, a fixed value.

From this theory we can deduce that the experimental data and conclusions of Coulomb, Ampère and Faraday are all correct, because in their experiments the electric charge, the magnet, the conductor and the coil are static relative to the reference frame of the Earth, so the speed of the electromagnetic wave is constant. The position of the static charge and its electric field is fixed; the position of the magnet and its magnetic field is also fixed. But if the observer is moving relative to the experimental reference frame of the emission source, the measured velocity of the electromagnetic wave will not be constant; it will comply with the velocity superposition principle. Thus the Doppler effect of electromagnetic waves can be observed.

Now let's go back to the Michelson-Morley experiment. To explain its result, we need to rely on theory one: the speed of the reflected light relative to the reflector is constant. In the Michelson-Morley experiment, the emitting light source, the fringe interference plate and the translucent glass in the middle are relatively static. That is to say, the geometric space formed by the objects in the experiment is constant with respect to the wavelength λ of the light. Thus, no matter how the experimental system is turned, and no matter whether it moves with the Earth, no shift of the interference pattern will be observed.

In the above figure, besides the light source S, the pellicle mirror M and the reflecting mirrors M1/M2 can also be regarded as light sources. So the speeds of the light reflected or emitted by them are the constant c (the velocity of light).

The most important experiment supporting the theory of relativity is that a charged particle can never be accelerated to the speed of light by a linear accelerator or cyclotron: the higher the speed of the charged particle, the more difficult it is to accelerate. Thus the theory of relativity was put forward:

According to Hubble's law, the universe is expanding, and the farther a star is from the Earth, the larger its radial recession rate. Suppose there are two stars far away that are moving away from the Earth at the velocity of light, and the two stars are near each other. Then, seen from the Earth, the masses of the two stars are infinite during their high-speed recession; their gravity would be infinite too, and ultimately the two stars would merge. Conversely, seen from the double-star reference frame, which recedes at the speed of light, the Earth and the Sun should merge under the attractive force of infinite mass, which is definitely not true.

The essence of force is interaction. Without interaction, we can never perceive the existence of objects, let alone measure forces. In the conventional view, the interaction between objects is unchangeable. Now a new theory is put forward:

The interaction force between objects changes with the relative radial velocity of the objects.

g(u) is the transformation factor, where u is the radial velocity between the objects. The following transformation equation relating F and F0 can be estimated based on the equation below (this is only my guess):

In the above formula, F0 is the applied force when the relative speed to the accelerating field is zero. "+" stands for backward motion relative to the accelerating field; "-" stands for forward motion relative to the accelerating field.

Now we know that a particle can't be accelerated beyond velocity c by an electric-field accelerator on the Earth, while in another reference system velocity addition still holds, which is to say that in a different reference system the speed of a moving object can exceed c. That is why it is not necessary to replace the Galilean transformation with the Lorentz transformation, or to add a reference system to Maxwell's equations.

Note that in the theory of relativity, force is also covariant. But that deduction is based on the change of mass; the transformation formula for force is derived from the momentum equation. In the theory of relativity many variables are covariant, while in this theory the mass, the time and the length are constant.


This error is not fatal to his theory.

An erroneous conclusion is that near a large mass the speed of time increases. This would mean that near a large mass the decay rate of radioactive elements should increase, whereas experiments show the opposite. So, before demanding new experiments, understand the reason.

Consider an example. A neutron flies away from the observer at speed v; because of the Doppler effect, the length of its de Broglie wave increases, its natural frequency decreases, and its decay constant increases according to the formula.

Now consider the same situation, but let the neutron fly not away from the observer but toward the observer. In this case its de Broglie wavelength will decrease due to the Doppler effect, and its natural frequency will increase, but its decay constant will still increase according to the above formula. This means that the Doppler frequency shift and the time dilation of a moving object are not the same thing.
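The distinction drawn here can be made concrete with the standard formulas: the relativistic Doppler factor depends on the direction of motion, while the time-dilation factor γ = 1/√(1 − β²) does not. A minimal sketch with an illustrative β:

```python
import math

def doppler_ratio(beta, receding=True):
    # Relativistic Doppler effect: ratio of observed to emitted frequency.
    # The sign of the radial velocity matters.
    if receding:
        return math.sqrt((1.0 - beta) / (1.0 + beta))
    return math.sqrt((1.0 + beta) / (1.0 - beta))

def gamma(beta):
    # Time-dilation factor; the same whether the source approaches or recedes.
    return 1.0 / math.sqrt(1.0 - beta ** 2)

beta = 0.6  # illustrative speed, as a fraction of c
print(doppler_ratio(beta, receding=True))   # 0.5: frequency halved
print(doppler_ratio(beta, receding=False))  # 2.0: frequency doubled
print(gamma(beta))                          # 1.25 in both cases
```

The asymmetry of the Doppler factor versus the symmetry of γ is exactly why the two effects must not be conflated.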

And the fact is that a photon, in addition to its frequency, has another important characteristic: the natural width of its spectral line, which is inversely proportional to the time of its emission. And it is this photon emission time that determines the rate of the flow of time.

What is the speed of time in a physical system? It is the speed of all processes in that system. The atoms and particles that make up the physical system interact with each other through the emission/absorption of interaction-carrying particles, both real and virtual, mainly photons. The emission time of these particles is finite, and it determines the speed of all processes in the physical system. The frequencies of these particles, which equal f = E/h, do not matter for the speed of physical processes.

An analogy with radio engineering is relevant here. In an amplitude-modulated radio signal, the information is carried by its envelope. For this information, the value of the carrier frequency does not matter.

Thus, near a large mass, both the emission time of a photon and its frequency increase simultaneously. This means that the natural relative width of the spectral line of the emitted photons decreases. And, therefore, clocks that operate as quantum frequency standards should run with GREATER RELATIVE ACCURACY near a large mass.

This effect suggests an experimental test of the idea. Place a pair of clocks, operating as quantum frequency standards, at the foot of Mount Everest, and over a sufficient time determine the standard deviation of their measured time intervals. Then move them to the top of Mount Everest for the same length of time, determine the standard deviation again, and compare the results.

If the relative accuracy of the clocks is higher at the bottom, this will be evidence in favor of my version of the nature of gravitational time dilation, and thus in favor of Yanchilin's quantum theory of gravity.
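For scale, the standard weak-field general-relativity prediction for the fractional frequency shift between two heights is Δf/f ≈ gΔh/c². The post disputes the interpretation, but any Everest experiment would need to resolve an effect of roughly this size. A sketch, with the height difference as an assumed illustrative value:

```python
# Standard weak-field GR estimate of the gravitational frequency shift
# between the foot and summit of Mount Everest: df/f ~ g*dh/c^2.
# DH is an assumed illustrative height difference, not a surveyed value.
G_SURFACE = 9.81           # m/s^2, surface gravity
DH = 3500.0                # m, assumed foot-to-summit height difference
C = 299_792_458.0          # m/s, speed of light

shift = G_SURFACE * DH / C ** 2
print(f"df/f ~ {shift:.1e}")  # on the order of 4e-13
```

Resolving a fractional shift of this order over the proposed measurement intervals is well within the reach of modern atomic frequency standards.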


**Does nature have foreknowledge of observers' motions and actions? A scientific proof of God based on quantum phenomena**

I will start with one of the puzzles of quantum mechanics. In the "which-way" and quantum-erasure experiments, how does a distant light source 'know' whether or not the polarizers are there, so that it can 'aim' the photons at only one slit or at both slits, forming a Gaussian pattern or an interference pattern, respectively?

According to the current paradigm, the emission of quantum particles such as photons and electrons is a completely random process. In this post, a new paradigm is proposed, as follows.

Just as the point on the ground where a ball will land is predetermined by its initial condition (initial velocity) at the instant the ball is kicked, the point on the detecting screen of a double-slit experiment where a photon (or electron) will be detected is predetermined at the instant of photon emission, by the initial conditions of the photon. This is based on a new insight about the internal structure and dynamics of elementary particles such as electrons and photons.

Imagine a mechanical version of the double-slit experiment. Suppose there is a wall in which two holes are made and, behind it at some appropriate distance, another wall serving as the 'detector screen'. The holes are designed so that the ball can exit at different angles ('diffraction'). A boy or girl repeatedly kicks a ball towards the holes. Suppose the kicker can precisely aim the ball at any given point on the 'detector' wall. This would be a miracle, because it requires extreme fine-tuning of the initial condition of the ball (its initial velocity). The kicker can then repeatedly kick the ball towards either of the holes and form an interference pattern. Note that the ball always passes through only one hole or the other; it cannot pass through both at the same time. As another example, imagine a super-intelligent football player who can precisely aim the ball at any given point on the net by deflecting it off either of the posts. The football player can form an interference pattern on the net by repeatedly kicking the ball.

The new insight is that the interference patterns are formed not because the ball 'interfered' with itself after passing through the two holes, but simply because the kicker is super intelligent and can precisely aim the ball at any given point on the wall. This means that it is not even necessary for the wall to have two holes: the kicker can form an interference pattern using only one hole. Not only that: the kicker can form a Gaussian pattern, an interference pattern, or any arbitrary pattern, regardless of whether one or both holes are open, regardless of the distance between the holes, and regardless of the distance of the 'detecting' wall from the holes.

My argument is this: to say that photons emitted at random from a light source can form an interference pattern is the same as saying that the kicker formed an interference pattern on the wall by kicking the ball randomly, i.e. without any fine-tuning of the initial conditions. Obviously, forming an interference pattern with the ball requires such nearly infinite fine-tuning of its initial condition that it would take a miracle. The conclusion is that a photon (electron) in the double-slit experiment is emitted with nearly infinite fine-tuning of its initial conditions, to aim it precisely at a specific point on the screen and form an interference pattern or a Gaussian pattern.

The question is: who is fine-tuning the photons (electrons) during emission? The emitting atoms? Or the emitting atoms conspiring with the detector screen? These have far too little intelligence to carry out the task of infinitely fine-tuning the initial conditions of a photon. Obviously, the fine-tuning requires infinite intelligence. God is fine-tuning every emitted photon (electron).

Imagine a physicist doing a double-slit experiment using light from a galaxy one billion light-years away. Now, we know that an interference pattern is formed when both slits are open, and a Gaussian pattern is formed when only one slit is open. How is this possible with light from a galaxy one billion light-years away? The answer: one billion years ago, God foresaw that a physicist would do a double-slit experiment at a specific point and time in the universe, and sent photons for his experiment. Imagine aiming a photon from one billion light-years away at a specific point on the detector screen to create an interference pattern! God had/has foreknowledge of whether one or both slits will be open, and aimed the photons accordingly.

Just as the super-intelligent kicker can direct the ball to any given point on the 'detector' wall, so can God. God can form a Gaussian pattern, an interference pattern, or any arbitrary pattern, regardless of whether one or both holes are open, regardless of the distance between the holes, and regardless of the distance of the detecting screen from the slits. The question, then, is: why do we always observe an interference pattern when both slits are open and a Gaussian pattern when only one slit is open? Why does the interference pattern consistently depend on the distance between the slits and on the distance of the detecting screen from the slits? The answer is that God just wanted it to be that way, and we call these the laws of nature (optics). God does not act in arbitrary ways, and He always respects the laws He created. But occasionally He may 'violate' those laws with purpose, and we call these miracles. It would be a miracle if an interference pattern were formed with only one slit open.

Perhaps physicists might claim to understand the Thomas Young double-slit experiment without the need for God's interference, e.g. via probability, the wave function, and wave-function collapse. For now I will not get into a discussion of the puzzles created by these interpretations. But there is one experiment that defies all logic and for which there can be no scientific explanation as we know science: the "which-way" and quantum-erasure experiment. How can a distant light source know whether or not there is a polarizer, so that it can aim the photons at only one of the slits or at both slits? Does the source of the entangled photons have eyes, and is it intelligent? The only way out of this puzzle is that God can see/foresee whether the polarizers are or will be there, and aims the photons accordingly. The "which-way" and quantum-erasure experiment, together with other quantum phenomena, is overwhelming evidence of a supernatural, intelligent being.

What about quantum entanglement ? Suppose that two entangled photons A and B, one with X-polarization and the other with Y-polarization are sent in opposite directions in space. The detectors are placed light years away. Suppose that photon A was detected as X-polarized. Then, instantly, photon B’s polarization will be fixed to be Y. The problem is: how did the photons communicate instantly?

The quantum entanglement puzzle is a problem created by quantum theory itself and there is actually no such puzzle. The polarizations of the photons are determined at the instant of emission and there is no need of ‘communication’ between photons light years apart. The ‘communication’ happens at the instant of emission of the entangled photons.
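A minimal sketch of the fixed-at-emission picture described above (names and structure are my own): each pair leaves the source with opposite polarizations already decided, so the distant measurements agree without any signal passing between them.

```python
import random

def emit_pair():
    """One entangled pair: polarizations fixed at the instant of emission."""
    a = random.choice(["X", "Y"])
    return a, ("Y" if a == "X" else "X")

pairs = [emit_pair() for _ in range(10_000)]
print(all(a != b for a, b in pairs))  # → True: outcomes are always opposite
```

This reproduces the perfect anti-correlation case; whether such a pre-assigned model can also reproduce the statistics measured at intermediate analyzer angles is the separate question addressed by Bell-test experiments.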

The grand question is: why do quantum phenomena point to God in such an overwhelming way? I think this is because God had/has a grand plan. He wants humanity to discover Him not only through religion and faith, but also through nature and science.

- What is curvature of spacetime? It's a variation of relative lengths (metrics).

- Would a physical system be invariant if we scaled it up or down? (And why?)

- What if we consider general relativity as a field in space that defines the scale of matter? The "graviton" would transmit scale variation between two regions of space.

- Matter would always tend to accelerate towards where the scale is smaller (picture a circle around which the scale varies: just as with curvature, some part of the circle would contain "more" space if the scale there is smaller). So a particle at the center of the circle would have a greater chance of changing state in that direction (that would be the gravitational force).

- Energy/mass would define the scale around it: at large scales it would be the gravitational field; at microscopic scales it would define particles as topological singularities (knots of spacetime, where the dimension is higher than in the surrounding space, somewhat like string theory, except the space would be of higher dimension only locally, as the effect of extreme local scale variation).

- If the scale is relative, that would allow the laws of physics to be invariant under homothetic transformation.

- This would allow the whole universe to be fully invariant under homothetic transformation (which is required for us to have no information about what's outside of it).

21 minutes ago, swansont said:

Bob is stationary in his frame. You always need to realize what frame you are in when analyzing.

Isn't everything in motion? The Earth spins on its axis, the Earth orbits the Sun, the Sun orbits the galactic core, and the galaxy itself is in motion set by the Big Bang; it is not known whether the universe itself may be moving or vibrating as well. So to be stationary one would need to understand all frames of motion exactly and counteract each one exactly, and since galactic motion can only be estimated, being stationary from a human perspective is just not possible.

So, I question it.

It’s difficult for me to accept, as I understand the concept. Of course it could be my understanding.

I read that, for what seems the most part, particles only exist for a short time. Also, amazingly, they are presented as popping in and out of existence. Then there is my understanding of Heisenberg's uncertainty principle. Note, I am not questioning it. I am accepting it as true to a point, and that point is that you can't measure both at the same time. To me that seems awkward if not amazing.

So, I question it.

Then there was the concept of the ether. That was shown to be not true. I am not saying that it is, but the fact that it was shown to be not true seems amazing. No, I'm not going to question it, except to ask: if the universe is full of continuous particles, why was it so easily shown that the ether does not exist?

Then there is solid to somewhat-solid matter. Why do I exist? No, I am not asking a philosophical question. Why am I cohesive? Do all particles pop in and out of existence? I am aware that it doesn't quite happen that way, though I do not exactly understand how it does happen. In a sense I'm parroting another amazing piece of rhetoric. I am, however, wondering, given my somewhat limited understanding of quantum physics, whether I am solid, or anything else for that matter.

My mind can be changed, even taught, but I am not going to simply accept what seems incomprehensible. So, I think about some apparently amazing presentations and think there must be some other explanation.

With Heisenberg, the presentation (pick a book or video, it really doesn't matter) comes across as: you cannot measure both as related to an "it". You can measure its momentum, you can measure its position, but when it comes down to the "it", you can't do both.

Realistically, I accept particle-wave duality, but have trouble accepting that a particle can travel through two slits at the same time; I can accept that a wave can. I also understand from different diagrams that some don't agree on how a wave propagates, which can really muddy up a thought. It seems to get more difficult, at least for me, to mentally picture such a thing spherically. So, I prefer the presentation where you are looking down on, presumably, a wave-capable medium that upon disruption attempts to propagate in the allowed directions. The produced wave/s go through both slits.

I have trouble with analogies, but opposing waves can peak in various places. Like particles popping in and out of existence. The wave is an analogy of energy propagation through a/various fields of energy/energies. The medium, well for lack of a best definition, is simply vast.

Back to Heisenberg and the inability to measure both an "it's" momentum and its position. I'm suggesting that is because it is not an "it" but rather two peaks created by the act of measuring. The wave peaks when it interacts with the measuring devices. In essence, the wave peaks every time you take a measurement.

I don’t think the particle is amazingly going through both slits. What is seen is/are peaks of a propagating wave/s where it interacts with opposition. For a moment of time a particle is created and observed. Among other things the photon is not displaying a gravitational attraction or reaction to an intense gravitational field. The energy waves created by the distant star are going around. The observed photon does not exist until the wave/s interacts with the observer.

I can’t think of any reason why these thoughts might be seen as an attempt to dismantle physics. To me they just seem to be a rational way for someone who is not an expert to grasp the reality of a few things often presented as amazing, but true!

(Among other things, the photon is not displaying a gravitational attraction or reaction to an intense gravitational field.) This part I am still thinking about. I have another thought that requires gravity, possibly coming from particles that pop in and out of existence due to opposing energy fields, but that is another thought.


Any help would be greatly appreciated, thank you!

A little more info: the way I create the feeling is hard to explain, but I'll do my best. It's like trying to expand your heart... like stretching it outwards, or like expanding your chest but not actually doing it. It makes me fidget a little and have minor muscle jolts around my neck and shoulders after the first 3 seconds. If I prolong the feeling then the minor jerks reach my fingers/hands and my shoulders.

Early in this video, we see what the physics programmed into Universe Sandbox 2 says about the vast amounts of energy contained within very narrow laser beams: a typical laser-pointer beam narrowed to about 0.1 nanometers would cause small parts of the Earth's surface to be vaporized.

Then it occurred to me that we don't need to consume entire planets in order to make thermal black holes; we can just catch a beam in a circle of magnifying glass that bends the laser light around and around until it's less than 0.1 nanometers wide. This method for bending and narrowing beams is demonstrated in these videos:

Now, there is one issue with getting more output than input from energy this way: the thermal micro black hole will spew out no more energy than we put in to make the lasers that made it, if it were small enough to evaporate safely in a lab.

The solution was to use that energy to make more beams; this way we can use magnetic nano-rings to harness, in an electric field, the composite angular momenta of these micro black holes and get back, over time, more energy than it took to make the first laser. The issue with that is that the energy emitted during total evaporation will be in the form of rays, and as you saw in the last video, it was stated that there is no way to turn a ray into a beam.

Unless you have ultra-pure crystalline materials and can harness an effect known as ballistic resonance, as explained in this link:

It was demonstrated that "mechanical oscillations can be excited due to internal thermal resources of the system", with said internal resources being the crystal itself. Such excitation means the "amplitude of mechanical vibrations can grow without external influence" (that amplitude being the maximum amplitude of the blue shift we see in the photo they provided): "for example, the heat can flow from cold to hot. This behavior of nanosystems leads to new physical effects, such as ballistic resonance," and this in effect will give you the laser you need to get started, because, as explained in the link, this hot beam "first almost decayed, but then revived and reached nearly the initial level. The system came to its initial state, and the cycle repeated itself." There are two points in a micro black hole, known as a dipole, where the energy released is a semi-beam, and the oscillatory excitation from ballistic resonance should chisel the dipoles into two lasers.


What would happen if we made the prefrontal cortex 2x thicker? Or the temporal lobe? Or gave him a 2x bigger hippocampus?

Genetic code quantum analysis.pdf

Within the mechanism of the genetic code, and therefore among the twenty amino acids, Glycine is distinguished by its absence of a radical. Its radical is reduced to a simple hydrogen atom which, in a way, simply closes the "base" structure common to every amino acid. The quantum study of this *glycined base*, which is identified with Glycine, reveals singular arithmetic arrangements of its different components.

**New quantum chart**

This quantum study of the genetic code is an opportunity to propose a new type of table describing the quantum organization of atoms. In this chart, illustrated in Figure 5, the different quantum shells and subshells are presented in the form of chevrons. At the top end of each chevron are indicated the names of the different shells and subshells; at the left end of the chevrons, the numbers of orbitals and electrons of the different quantum shells and subshells are indicated. At each chevron vertex is the orbital where the quantum number *m* = 0. The orbitals with positive quantum number *m* are positioned progressively towards the top from the chevron vertices, and the orbitals with negative quantum number *m* are positioned progressively towards the outside left of the chevron vertices.

In the appendix, the same type of table is presented, describing the quantum organization of the shells and subshells up to the 5th shell (*O*) and the 15th subshell (*5g*). This innovative presentation, which describes the quantum structure of the atomic elements more explicitly, will be used in various tables of this quantum study of the constituents of the genetic code.
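The orbital and electron counts indicated at the left of the chevrons follow the standard n, l, m bookkeeping; a minimal sketch (shell letters assumed K through O for n = 1 to 5):

```python
# shell n has subshells l = 0..n-1; each subshell holds 2l+1 orbitals
# (m = -l..+l) and two electrons per orbital
for n, shell in zip(range(1, 6), "KLMNO"):
    orbitals = sum(2 * l + 1 for l in range(n))  # = n^2
    print(f"shell {shell} (n={n}): {orbitals} orbitals, {2 * orbitals} electrons")
```

The 5th shell (O) closes at 25 orbitals and 50 electrons, and its l = 4 subshell is the 5g mentioned above.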

In other words, "c" in "E = mc²" should be calculated as "c = f(G)". Why?

Well, we know that c is a "special speed" which defines the difficulty of moving at that speed.

But we also know that galaxies move away faster than c, and we know that G defines the difficulty of moving.

Connecting all of that:

Galaxies are able to go faster than c because they have a lower G, which means a higher c.

Cosmic expansion may also be caused partially by an "outer sphere" in addition to this.

That is, the lower G gets, the higher c gets, so in this way we can fix relativity without requiring space expansion.

This would also mean that the universe is finite, which suggests that if an object's "big bang" is big enough in terms of its share of the total material,

then gravity is a lot weaker, which enables it to accelerate, also considering that the geometry of an explosion disperses mass in all directions, which means the force that slows galaxies down is also dispersed.

This says that "c" should not be a constant but should be calculated inversely to a proportion of G, because c is the speed at which you gain weight or interact a lot with G.

In other words, "c" is the speed of light, but that is only true locally, not from local group to local group. In other words:

c = escape velocity of the local group
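The identification "c = escape velocity of the local group" can be checked against the Newtonian escape-velocity formula; the mass and radius below are rough literature estimates I am assuming for illustration:

```python
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30        # solar mass, kg
M_LG = 2.0e12 * M_SUN   # rough Local Group mass estimate (assumed)
R_LG = 4.6e22           # ~1.5 Mpc in metres (assumed)

# Newtonian escape velocity: v_esc = sqrt(2 G M / r)
v_esc = math.sqrt(2 * G * M_LG / R_LG)
print(f"{v_esc:.3e} m/s")  # ~1e5 m/s, i.e. ~100 km/s
```

With these inputs the result comes out near 10^5 m/s, which can be compared with c ≈ 3×10^8 m/s.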

This is just part of my complete view of the universe

More here:

**url deleted**

I tried to use the best words I could think of, but there may be some small inconsistencies because English is not my native language.

I would love to clarify any doubt.

More than 2,500 years ago, humans discovered magnetic interaction and made use of it. 200 years ago (July 21, 1820 A.D.), H. C. Oersted declared that the magnetic field is related to the electric field. Over the next 75 years, Ampere, Faraday, Lorentz and others summed up various rules of magnetic interaction; their achievements all hint that the magnetic field is related to motion.

In fact, Oersted, Ampere, Faraday and Lorentz had discovered all the basic ingredients of magnetic interaction (1. the electric field, 2. motion), but they did not pay enough attention to this and did not organize it systematically. The scientific climate changed later, becoming a theological religion covered with the banner of science; basic scientific research in many fields has gone astray, and magnetism research is one of them. It has needed to be pulled along by industry to make partial progress, and many misunderstandings and mysteries have persisted for a long time. In the last 120 years the world has changed greatly, but magnetic field research has made no great progress.

It is not very difficult to crack the nature of magnetic interaction. The common magnetic interaction is formed by the superposition of a large number of micro-interactions, so to crack the nature of magnetism one only needs to go into the micro world and work steadily.

Around 2000 A.D., some excellent magnetic field researchers and achievements appeared among amateurs. In 2017 A.D., the basic nature of full-region magnetic interaction was cracked; in April 2018, "Theoretical Analysis of the Field Principle - Magnetic Field (1st piece)" was published, and the nature of magnetic interaction is no longer a mystery to this world.

Magnetic interaction can be divided into two levels: 1. the field level of magnetic interaction, 2. the superposition level of magnetic interaction. The field-level magnetic interaction is the deeper one, related to the more basic magnetic properties. The superposition-level magnetic interaction is formed by the superposition of a large number of basic magnetic interactions; the common magnetic interactions, such as the magnetic interaction of a magnet, are all of the superposition level.

The field level magnetic interaction

In-depth research shows clearly that the magnetic field and the electric field are unified: the magnetic field is just one appearance of the electric field, and there is no independent magnetic field in the world.

To say it simply, magnetic interaction is electric interaction affected by motion. Motion has a weak effect on electric interaction: the electric interaction between two electrons increases with their relative velocity (the Lorentz force already said as much), and the electric interaction is weakest when the two electrons are relatively static (this is the electrostatic interaction, but not the Coulomb interaction).

Compared with the electrostatic interaction, the ratio of the increase of the electric interaction is 1:10^11 when the relative velocity is 1 m/s,

and it can be 1:10^4 for two electrons in thermal motion.

These changes have an obvious effect on the world, and that effect is exactly magnetic interaction.

To say it accurately, the magnetic field is not a pure field; it is the combination of electric field and motion.
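For comparison, the standard magnetostatic estimate for two charges moving in parallel is that the magnetic force is roughly (v/c)² times the electric force; a minimal sketch (my own, offered so the quoted ratios can be checked):

```python
c = 299_792_458.0  # speed of light, m/s

def magnetic_to_electric_ratio(v: float) -> float:
    """Textbook estimate: for two charges moving in parallel at speed v,
    the magnetic force is ~ (v/c)^2 times the electric force."""
    return (v / c) ** 2

print(magnetic_to_electric_ratio(1.0))    # drift at 1 m/s        → ~1.1e-17
print(magnetic_to_electric_ratio(1.0e5))  # thermal electron speeds → ~1.1e-7
```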

The superposition level magnetic interaction:

The common magnetic interactions are all formed by the superposition of a large number of basic magnetic interactions; superposition can cover up some basic magnetic properties and generate new ones.

The magnetic interaction of a live wire is mainly due to the electric interaction of the free electrons that form the current (current electrons), and is the superposition of the electric interactions of all the charges;

the magnetic interaction of a magnet is mainly due to the electric interaction of the orbital electrons with the same orbital direction, and is the superposition of the electric interactions of all the magnet's atoms.

No matter how complex it is, magnetic interaction fits electric interaction perfectly, without any independent magnetic interaction at all, because the magnetic field is just the electric field; however complex it is, it is only one interaction of the electric field.

In fact, back then, Oersted's experiment did not only show that the magnetic field is related to the electric field; it also meant that they are the same field.

Magnetic interaction is a physical phenomenon that blends many disciplines; it is related to many things, so it can be affected by many things and has many appearances.

For example: the magnetic interaction of a live wire is related to the current in the metal;

the magnetic interaction of a magnet is related to the iron alloy;

the magnetic interactions of superconductors, the paramagnetic action, the diamagnetic effect and so on are all related to the character of the field source.

To crack a particular magnetic effect, one must first master all the related technologies; holding the nature of magnetic interaction alone is not enough. The superposition of magnetic interactions may take countless forms, so even with the nature of magnetic interaction in hand, the road of magnetic interaction research is still very long. But magnetic interaction has been mastered; how long can the other problems persist?

For details, see

LINK REMOVED

On 1/29/2020 at 2:50 AM, lucien216 said:

"It is the same event, but different amounts of time have elapsed since then as measured by different observers. "

Hhhhmm. This is what I wanted to hear. So you're saying that the universe could be, say, 30 billion years old for some observers? So the age of the universe is only relative to us then?

On 1/29/2020 at 3:12 AM, Strange said:Yes, in principle.

In fact, our view of the universe is pretty average so it would be hard for any observer to have seen a significantly greater age for the universe than us.

Quite plausible actually, not just in principle. In the inertial frame X of some planet in a galaxy 27 billion light years from here (distance measured in frame X), the universe is currently (simultaneous with us now) about 30 billion years old.

On 1/29/2020 at 3:54 AM, studiot said:It is more subtle than this.

The 'Universe' for such an observer would be quite different from the 'Universe' we can see.

...

We do not know of any observer (star etc) going at sufficient relative speed to us to observe 30 billion years.

Yes, the universe for such an observer would be quite different from what we see here since for one thing it appears to be over twice as old. Much more mature galaxies and such.

Yes, we do know of galaxies moving at sufficient speed for this. The one I mention above would have a redshift of about z=1.3 as viewed from here, and the record holder is over z=11.
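Treating the quoted redshift as a special-relativistic Doppler shift (an approximation; cosmological redshift is not strictly a Doppler effect) gives the corresponding recession speed and time-dilation factor; the sketch is mine:

```python
import math

def doppler_beta(z: float) -> float:
    """Recession speed v/c from redshift z via the relativistic Doppler
    relation (1 + z)^2 = (1 + beta) / (1 - beta)."""
    s = (1 + z) ** 2
    return (s - 1) / (s + 1)

beta = doppler_beta(1.3)
gamma = 1 / math.sqrt(1 - beta**2)  # time-dilation factor
print(round(beta, 3), round(gamma, 3))  # → 0.682 1.367
```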

Yes, I realize I'm replying to posts from January, before I registered I think.

Surfing YouTube I found this video, and I would like your opinions, folks.

Please, watch it and let me know what you think.

I used to be skeptical about such evidence; here there are both scientific and mystical kinds.

What do you think? Is it real evidence, or can every part of it be called into question?

Does it prove anything?

**video removed**

The experiment consists of launching two spacecraft from Earth: one flies towards the Sun, the other away from the Sun. Each craft is equipped with equipment that will accurately measure the value of the magnetic constant and transmit the results of the measurements to Earth. If the value of the magnetic constant does not change, then GRT is correct. If the measured value of the magnetic constant decreases on the vehicle flying towards the Sun and increases on the other vehicle, then Yanchilin's formula is correct.

At the link are results of astrophysical measurements which can be interpreted as showing that, in the vicinity of massive bodies, the magnetic constant decreases: https://science.sciencemag.org/content/358/6368/1299

In this thread: https://www.scienceforums.net/topic/122453-an-attempt-to-approach-a-notion-of-solubility-in-cosmology-to-explain-the-cosmological-constant/,

I proposed a mathematical solution to the cosmological constant problem. However, I had not found a physical explanation. Failing that, I found a generalization of this solution to the whole universe, which validates a hypothesis that had been made in that solution.

This is, it seems to me, a confirmation (and may perhaps help in understanding the problem of the cosmological constant).

The energy density of the quantum vacuum in Planck units is:

[math]A=\frac{m_p c^2}{l_p^3}=\hbar\,(l_p^{-2})^2\,c[/math]

I, on the other hand, found this unknown hypothetical quantum energy density of the cosmological constant:

[math]B=\frac{1}{(8\pi)^2}\,\hbar\,(\Lambda_{m^{-2}})^2\,c[/math]

and demonstrated that the energy density associated with the cosmological constant is

[math]C=\sqrt{\hbar\,(l_p^{-2})^2\,c}\;\sqrt{\frac{1}{(8\pi)^2}\,\hbar\,(\Lambda_{m^{-2}})^2\,c}=\sqrt{A}\,\sqrt{B}[/math]

Let's consider [math]H_0[/math] the Hubble parameter (or Hubble constant) in [math]s^{-1}[/math].

We want a quantity of dimension [math]L^{-2}[/math] to replace [math]\Lambda_{m^{-2}}[/math].

So we'll write [math]H_0^2 c^{-2}[/math] instead of [math]\Lambda_{m^{-2}}[/math] to get [math]B'=\frac{1}{(8\pi)^2}\,\hbar\,(H_0^2/c^2)^2\,c[/math], "an energy density of Planck's universe for [math]H_0[/math]".

Let's consider

[math]\rho_c=\frac{3 c^2 H_0^2}{8\pi G}[/math], the critical energy density of the universe for [math]H_0[/math].

We have

[math] 3 \sqrt{A} \sqrt{B'}=\rho_c[/math]
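A quick numerical check of this relation, using the dimensionally consistent replacement [math]\Lambda \to H_0^2/c^2[/math] (the Hubble value below, ~68 km/s/Mpc, is an assumption):

```python
import math

hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2
H0 = 2.2e-18             # Hubble constant in 1/s (~68 km/s/Mpc, assumed)

l_p = math.sqrt(hbar * G / c**3)                          # Planck length
A = hbar * (l_p**-2) ** 2 * c                             # Planck energy density
Bp = (1 / (8 * math.pi) ** 2) * hbar * (H0**2 / c**2) ** 2 * c
rho_c = 3 * c**2 * H0**2 / (8 * math.pi * G)              # critical energy density

print(3 * math.sqrt(A * Bp) / rho_c)  # → 1.0 (the relation is an algebraic identity)
```

Since [math]l_p^2=\hbar G/c^3[/math], the ratio is exactly 1 independently of the value chosen for [math]H_0[/math].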

The method of dimensional analysis, applied to quantum mechanics and general relativity data, works once again...

There will be a cycle of mutation after every 9 generations. Evolution is a slow process and it moves on, but we can't see it directly because the change taking place is very slow; if we look at the generation 10 steps older than us, we can see it. The mutation (evolution) takes place gradually and can be seen at regular intervals (a 10-generation gap).

The law behind it: there is a gene difference between father and son, i.e. the gene similarity difference is 0.1, and the similarity difference between the genes we get from father and mother is 0.1. When genes are transferred from one generation to the next there is a change of 0.1 per generation, which confirms that with each generation a small change happens in the genetic material. This change is passed on to the next generation, so that after the 9th generation a small mutation has taken place.

This rule is only applicable when we compare ourselves with an older generation; the change will be minute.
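Taking the stated 0.1-per-generation rule at face value (this is the author's hypothesis, not established genetics), the arithmetic of the 10-generation cycle looks like:

```python
# accumulate the claimed 0.1 genetic difference per generation
diff = 0.0
for generation in range(1, 11):
    diff += 0.1
    print(f"generation {generation}: accumulated difference {diff:.1f}")
# the running total reaches 1.0 by generation 10 - the claimed cycle length
```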


What happens?

Here, graphically speaking, is how the evolution of the universe would look following the rules of the potential energy of a harmonic oscillator:

What do you think of this evolution?

Reference:

[1] The Universe as an Oscillator https://arxiv.org/abs/1807.03864

The beginning is quite simple: you have two friends, A (Alex) and B (Beatrix), resting next to each other. There is no motion, only time passing by.

Alex looks at Beatrix.

Since light takes time to travel from Beatrix to Alex, Alex observes Beatrix as she was a slight instant ago (slightly in the past).

I can make the following diagram, where the vertical axis is Time and the horizontal is Space.

At T=0, Alex observes Beatrix at a distance and at T=-1
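In units where c = 1 and the A-B distance is 1 (as the diagram suggests), the bookkeeping is just a light-delay subtraction; a minimal sketch with names of my own:

```python
def emission_time(t_obs: float, d: float) -> float:
    """Time at which an event a distance d away was emitted,
    given that its light arrives at t_obs (units with c = 1)."""
    return t_obs - d

print(emission_time(0.0, 1.0))  # → -1.0: at T=0 Alex sees Beatrix as she was at T=-1
```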

graph 1

But at the same time (T=0) Beatrix looks at Alex also at T=-1 (the situation is symmetric). Thus we also have the following graph

graph 2

How can that be? The conventional way of thinking is the following: when Alex slides in time, he leaves behind him a path of events (Alex existing in past times)

It goes like this:

graph 3

The bold line is the 4D existence of Alex along his life-line. Every point of this line finds Alex at a specific point of his life.

In this way of thinking there is no difficulty in understanding that as Alex sees Beatrix, Beatrix sees Alex. It goes like this:

graph 4

Alex looks at Beatrix and, reversely, Beatrix looks at Alex. They live in the same time frame (T=0) and they see each other at T=-1.

No problem.

But what do I mean when I say "they see each other at T=-1"? Do I mean that the B & A lower on the graph truly "exist" there?

No I don't think so. Here begins the speculation:

The speculation is that when Alex sees Beatrix behind in time, it means that the signal (the photons) sent by Beatrix has reached the eye of Alex. In order to travel the distance from B to A, the photons need time. And the bottom B on the previous graph is void; there is nothing there. There is only the image of B in the eye of A. And respectively, at the bottom A on the graph, there is nothing. There is only the image of A in the eyes of B.

It goes like this: the following graph shows the sliding of A in Time (to be compared with graph 3)

graph 5

And below Beatrix sliding in time

graph 6

The graph below shows Alex looking at Beatrix while sliding in time (remember that neither observer moves; they rest in place, simply sliding in time)

graph 7

And graph 8 is the reverse: B observing A

graph 8

Graph 9 represents both observers while sliding in Time

graph 9

In the above graph 9, the solid circles represent the real observers A & B sliding in time; the empty circles represent the image as observed: the image on the retina of solid A and solid B. In fact, there is nothing there, following graphs 5 & 6.

Now let's get a little more complicated: say that Alex throws a ball to Beatrix.

Graph 10 represents a ball (in red) going from A to B. In orange, the image of the ball as seen by A.

graph 10

Graph 11 is the same as graph 10, but as seen by B

graph 11

In both graphs 10 & 11, the ball makes the same trajectory in the same time. But observers A & B have different points of view; the image that comes to their eyes is different. In both diagrams, the image is the vertical projection of the ball onto the diagonal of view. Both observers will agree that the ball was sent at T=0 and received at T=4. They may compare their points of view and agree on distance & time. Alex will throw the ball, and as it goes away, it goes into the past until reaching B. And B will see the ball coming from the past. It corresponds to reality. And there is no need for the small a & b to "exist". In both graphs, a & b are images of A & B in the eyes of each other. There is nothing actually at points a & b.
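Sticking with units where c = 1 and an A-B distance of 1, the ball of graphs 10-11 (sent at T=0, received at T=4, so moving at speed 0.25 — my reading of the graphs) can be tabulated together with the moment each observer's eye receives the light from each point of the flight:

```python
def see_time(t_event: float, x_event: float, x_observer: float) -> float:
    """When an observer at x_observer sees an event at (t_event, x_event), with c = 1."""
    return t_event + abs(x_event - x_observer)

for t in range(5):          # the ball's own worldline, x = 0.25 t
    x = 0.25 * t
    print(f"t={t} x={x:.2f}  seen by A at {see_time(t, x, 0.0):.2f}"
          f"  seen by B at {see_time(t, x, 1.0):.2f}")
# A sees the catch (t=4, x=1) only at t=5; B sees the throw (t=0, x=0) at t=1
```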

Now let's get more complicated, here below, introducing Clong (C). Clong is a hypothetical observer out of sync, behind in time.

graph 12

Intuitively, C should be observable: I simply have to choose another observer D sufficiently far away; see graph 13 below

graph 13

Here it gets a bit complicated: what is D actually observing? Is he observing C, or the image of A (labeled a' on the graph)?

My answer is that D has in his eyes the image of A. There is no real superposition, on one hand you have a real object C at specific coordinates, on the other hand you have an image on the retina of an observer.

Conversely, A has in his eyes the image of D (labeled d on the graph). Observer A does not see a void at d; there is nothing at d. The only one who will see d is observer C, exactly the same way B sees A.

Any comment appreciated.
