Duda Jarek
Everything posted by Duda Jarek

Let's take some NP problem: we have a verifier which can quickly say whether a given input is correct, but there is a huge number of possible inputs and we want to tell if a correct one exists (and find it). Such a problem can be, for example, finding a divisor (RSA breaking) or finding a key such that, if we use it to decrypt the beginning of the file, there will be significant correlations (brute force attack). Imagine we have a chip with 'input' and 'output' in which is implemented (e.g. in an FPGA): IF 'input' verifies the problem THEN send 'input' to 'output', ELSE send the next possible 'input' (cyclically) to 'output'. This chip uses only basic logic gates, the computation is made in some small number of layers, and IT DOESN'T USE A CLOCK (we can do this for some NP problems like 3SAT). Now connect its 'output' and 'input' to make a loop. Such a loop will be stable only if it has found a solution (which can then be transferred out). If there were a clock, one input would be checked in each cycle. Without one, it's no longer a classical computer: by removing the clock/synchronization we destroy the discrete order and release all its statistical, quantum properties to do what physics does best: solve its partial differential equations. The system becomes continuous, so it can follow the energy gradient to some local minimum, but the only local minima are the stable states (solutions). Every other state is extremely unstable - the electron fluid won't rest until it finds a solution. The statistics and the differences between propagation times will create an extremely fast, chaotic search. I'm not sure, but it could find a solution qualitatively faster than a classical computer. I know - there can be a problem with the nonlinearity of transistors? If so, there are plenty of technologies; maybe some of them could handle it? This loop computer idea is practically a simplification of the time-loop computer idea: http://www.scienceforums.net/forum/showthread.php?p=453782
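The clocked, purely classical version of this loop is easy to sketch. A minimal Python simulation (the toy 3-SAT instance, the bit encoding and the `verifies`/`step` helpers are my own illustration, not any actual chip design):

```python
from itertools import product  # noqa: F401  (handy for exhaustive variants)

# Toy 3-SAT instance over 4 variables; a clause is a tuple of signed
# literals, e.g. (1, -2, 3) means (x1 OR NOT x2 OR x3).
clauses = [(1, 2, 3), (-1, 2, 4), (2, -3, -4)]

def verifies(bits):
    """The 'verifier layer': True iff the assignment satisfies all clauses."""
    return all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses)

def step(bits):
    """One pass of the loop: keep a correct input, else advance cyclically."""
    if verifies(bits):
        return bits                        # fixed point - the loop is stable
    n = int(''.join('1' if b else '0' for b in bits), 2)
    n = (n + 1) % (1 << len(bits))         # next input, wrapping around
    return tuple(c == '1' for c in format(n, '0%db' % len(bits)))

state = (False, False, False, False)
for _ in range(1 << 4):                    # at most 2^4 passes to try them all
    state = step(state)
print(state, verifies(state))
```

With a clock, this is exactly "one input per cycle"; the post's point is what happens when the feedback is continuous instead.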

No. As I've already written: if physics could stabilize this causality loop, it would be done. If not, it would be stabilized by breaking its weakest link - by making the prediction give a wrong answer. Why this link? Because this link would require an extremely precise measurement of some process which is already not preferred energetically. Creating causality paradoxes should be even less preferred, so it should be easier for physics, for example, to shorten the time of this reverse temporal propagation, especially since the rest of this causality loop is just classical computation. I think that such a spatial, purely classical loop should already have a (smaller, but still strong) tendency to stabilize. Without a clock it would be pure hydrodynamics of electrons: http://www.scienceforums.net/forum/showthread.php?t=37155

Some physicists believe in the possibility of instant time travel. Let's assume hypothetically something much simpler and more plausible looking - that the physics of the four-dimensional spacetime we live in allows for microscopic loops which include the time dimension. If they lasted at least microseconds and we could amplify/measure them (Heisenberg's uncertainty principle...), we could send some information back in time. Observe that a computer based on such a loop could instantly find the fixed point of a given function. Let's take for example some NP problem - we can quickly check if a given input is correct, but there is a huge (yet finite) number of possible inputs. So this computer can work as follows: take the input from the base of the loop; if it's correct, send the same input back in time to the base of the loop; if not, send the next possible input (cyclically). If there is a correct input, it would be the fixed point of this time loop; if not, it should return some trash. So we would only need to verify the output once more at the end (outside the loop). Can such a scenario be possible? General relativity says that local time arrows are given by solutions of some equations with boundary conditions (the big bang). CPT symmetry conservation suggests that there shouldn't be a large difference between past and future. These arguments suggest the so-called eternalism/block universe philosophical concept - that spacetime is already somehow created and we are 'only' going through its time dimension. I've recently made some calculations which give a new argument that this assumption actually yields quantum mechanics: pure mathematics (maximizing uncertainty) gives a statistical property - the Boltzmann distribution - so it should be a completely universal statistics. If we use it to find the distribution on a constant-time plane, we get the stationary probability distribution rho(x) ~ exp(-V(x)).
If we use it to create statistics among paths ending at this moment, we get rho(x) ~ psi(x) (the quantum ground state). If we use it to create statistics among paths that don't end at this moment but go on into the future, we get rho(x) ~ psi^2(x) - like in quantum mechanics. So the only way to get QM-like statistical behavior is to treat particles as their paths in four-dimensional spacetime. So spacetime looks like a four-dimensional jello - 'tension' from both past and future influences the present. http://www.scienceforums.net/forum/showthread.php?t=36034 It suggests that particles should, for example, somehow prepare before they are hit by a photon. The question is whether this can be measured (uncertainty principle)? If yes - are these times long enough to be useful? Observe that if the answer is yes, such a computer could e.g. break RSA in a moment. To make cryptosystems resistant to such attacks, they should require long initialization (like those based on Asymmetric Numeral Systems). I thought about whether we could reduce the required number of bits transferred back in time, and it looks like one (B) should be enough (though this algorithm intuitively looks less stable?): if B then 'input' -> next possible 'input' (cyclically); if 'input' verifies the problem then transfer back in time B = false, else transfer back in time B = true. If it can, it should stabilize on B = false and some solution. Such an algorithm means that it uses input (B) from some physical process which can predict (microseconds ahead), for example, whether a photon will be absorbed, and at the end emits this photon or not. If physics could stabilize this causality loop, it should be done. If not, it would be stabilized by breaking its weakest link - making the prediction give a wrong answer.
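Stripped of the time-travel element, the one-bit variant above can be simulated as an ordinary iteration that stops at its fixed point. A sketch (the verifier and the target value `0b1011` are hypothetical placeholders of mine):

```python
# One-bit feedback loop from the post, simulated classically: the only value
# 'sent back' is the flag B; the input register advances cyclically while B
# is true, and the iteration stabilizes exactly when B becomes false.
def run_one_bit_loop(verify, n_bits, max_steps=None):
    x, B = 0, True                        # start anywhere, with B = true
    for _ in range(max_steps or (1 << n_bits) + 1):
        if B:
            x = (x + 1) % (1 << n_bits)   # B=true: advance input cyclically
        B = not verify(x)                 # B = false exactly at a solution
        if not B:
            break                         # loop has stabilized on B = false
    return x, B

# Hypothetical verifier: the 'solution' is any x equal to 0b1011.
x, B = run_one_bit_loop(lambda x: x == 0b1011, 4)
print(x, B)
```

If no solution exists, B stays true and the register keeps cycling, matching the "returns some trash" case of the post.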
I believe a discussion has just started here: http://groups.google.com/group/sci.physics/browse_thread/thread/c5f055c9fc1f0efb To summarize: the verifier is a completely classical computer, but when it is coupled with some effect which can transfer data a few nanoseconds back in time, physics should make this couple create a stable causality loop. But that could only happen if, along the way, it solves the given NP problem (or e.g. finds a key such that the decrypted message appears to have significant correlations). If for a given instance of the problem a dedicated chip were created - one which makes the calculations layer by layer (without a clock) - it should make the verification in nanoseconds; such jumps are easier to imagine. This suggests a nice thought experiment: make such a loop, but much simpler - just spatial (it's a tube in four dimensions). Take a chip with a verifier of some problem and the algorithm from the first post. Now instead of sending 'input' back in time, just connect it to the 'input'. Such a loop should quickly check input by input and finally create a stable loop if it can... But really? This scenario requires a clock, doesn't it? What if there were no clock...? Shouldn't it find the solution practically instantly?

Is spacetime really curved? Embedded somewhere?
Duda Jarek replied to Duda Jarek's topic in Relativity
There is a problem with measuring angles - they depend on the reference frame. GR locally rotates light cones - the solutions for waves of interaction - which makes it look like we are living in Minkowski space. These rotations of solutions could be caused by some field. So even when they are confirmed by observation, internal curvature won't be needed.
Is spacetime really curved? Embedded somewhere?
Duda Jarek replied to Duda Jarek's topic in Relativity
But we can also think about GR as a flat spacetime with some interacting fields. That allows us to understand how it results from microscopic physics (like photon-graviton scatterings), and we avoid a huge number of philosophical questions that come with the internal curvature interpretation. The maximum time-travel possibility this picture allows is to turn our reason-result line into the opposite time direction and, after some (negative) time, turn back. It would create a loop which cannot spoil the actual situation (like killing one's grandfather) - so it suggests that the future is already somehow set: eternalism (an assumption which yields QM, see link). Oh, I forgot to mention that such a causality loop would create some very strange topological singularity... so probably all time travels are forbidden...?
Is spacetime really curved? Embedded somewhere?
Duda Jarek replied to Duda Jarek's topic in Relativity
So we don't need any mystical internal curvature... SR can be derived from the assumption that light travels with a given constant speed. Gravitational waves have the same speed. From the point of view of spacetime, this sets the angle (45 deg) of the solutions for the waves which carry (probably?) all interactions. The only difference in GR is that these solutions - the interaction/light cones - have changed their directions.
Is spacetime really curved? Embedded somewhere?
Duda Jarek replied to Duda Jarek's topic in Relativity
Can't GR be viewed as such graviton-graviton scatterings?
Is spacetime really curved? Embedded somewhere?
Duda Jarek replied to Duda Jarek's topic in Relativity
I was recently told ( http://groups.google.pl/group/sci.physics.foundations/browse_thread/thread/e6e26b84d19a17ff# ) that there is a quite new Relativistic Theory of Gravity (RTG) by the Russian physicist Logunov, which explains GR without internal curvature. It uses the speeds of clocks. But any clock (mechanical, atomic, biological...) is based on some chain of reason-result relations. These relations are made by some interactions - transferred by some waves... So the speed of a clock can be translated into a wave propagation speed. I have a question: does a strong (electro)magnetic field bend light? In field theories like QED we add some nonlinear terms (like phi^4) to introduce interactions between different frequencies... On the other hand, electromagnetic interactions have some similarities to the gravitational interaction... Have you heard about such experiments or calculations? A dedicated experiment should find it, and so by using different EM fields we could tell a lot about the fundamental details of QED... (and maybe GR...)
Is spacetime really curved? Embedded somewhere?
Duda Jarek replied to Duda Jarek's topic in Relativity
A two-dimensional manifold with positive constant internal curvature should create a sphere... (somewhere...) If spacetime is really immersed somewhere, then when it should intersect with itself, it could probably change topology instead (like going through a critical point in Morse theory). I think that's the concept behind time travel/wormholes(?).
Is spacetime really curved? Embedded somewhere?
Duda Jarek replied to Duda Jarek's topic in Relativity
OK, you are right... let's call it an immersion... So what do you think about the possibility of instant time/space travel?
Is spacetime really curved? Embedded somewhere?
Duda Jarek replied to Duda Jarek's topic in Relativity
So maybe it only looks like it is the result of a curvature? The Einstein-Hilbert equations connect internal curvature with energy/momentum, but how is this connection made physically? Until we understand that, these equations aren't an argument for internal curvature. Especially since it can be an effect analogous to the minimal optical path principle: because of interference with the responses of local atoms, light travels along a geodesic of the metric tensor given by 'refractive index' * 'identity matrix'. In GR, such a refractive index has to be four-dimensional and usually anisotropic (different wave speeds in different directions). It could be created, for example, by interactions being transferred by some waves of a field, so that they create some local structure of the field - small differences between being in different phases could cause large interactions to make the interference needed to change the propagation speed/direction of other waves. If GR is really the result of internal curvature and spacetime is not embedded anywhere, what would happen if it looked like it should intersect with itself?
When light goes through different materials, it chooses a path that locally minimizes distance - its trajectory is a geodesic of some metric (usually diagonal - isotropic). This is the result of the microscopic structure of the material reducing the wave propagation speed. Microscopic models of physics usually assume that we have some field everywhere and that its fluctuations transfer interactions/energy/momentum. So maybe this microscopic structure can reduce wave propagation speeds? The reciprocals of these velocities create an (anisotropic) metric tensor (g), and so, for example, particles travel along geodesics as in general relativity. The standard interpretation of general relativity says that particles go along geodesics because of the internal curvature of spacetime: theory and experiment suggest some equations, which look like the result of internal curvature of spacetime. But if we live on some curved manifold, it intuitively should be embedded somewhere(?) (and, for example, black holes are some spikes). So why is our energy/matter imprisoned on something infinitely flat? Why don't we interact with the rest of this something? What happens if our manifold intersects with itself? (Optimists say that it would allow for time travel/hyperspace jumps?...) And a less philosophical, but most fundamental (to connect GR and QM) question: how can energy/momentum density create curvature? Maybe that's not the only possible interpretation. Maybe we live in flat R^4 and GR is only the result of the reduction of wave propagation speed by the microscopic structure of some field, which somehow is ruled by equations similar to Einstein-Hilbert. This interpretation doesn't allow for instant time/space travel, but it gets rid of some inconvenient questions... and creates a chance to answer the last one. So how should such a connection of QM (QFT?) and GR look? What are particles?
From the spacetime point of view, they are solutions of, let's say, some field equations - localized in three dimensions and relatively long in the last one. These solutions want to be more or less straight in four dimensions (constant velocities), but they turn according to interactions transferred by the field. Many of them were created in the big bang (boundary conditions), so their long dimensions are similarly directed - creating a local time arrow (GR). The Boltzmann distribution among such trajectories can, purely classically, create QM-like statistical behavior ( http://www.scienceforums.net/forum/showthread.php?t=36034 ). Are there any arguments for internal curvature of spacetime other than that the equations look like they could be the result of it? What do you think about these interpretations? If curvature is the only option, is spacetime embedded somewhere...?

Maximal entropy random walk and euclidean path integrals
Duda Jarek replied to Duda Jarek's topic in Physics
First of all, it occurred to me that this way of thinking - that we are moving along the time dimension of some already-created spacetime - is known and called eternalism/block universe. Its main arguments are based on general relativity, but also on the problem with CPT conservation and wave function collapse... The fact that the Boltzmannian distribution among paths gives statistical behavior similar to that known from QM suggests even more - that QM is just the result of such a structure of spacetime. ...and that wave function collapse is, for example, the reversed splitting of the particle (needed to go through two slits). This simple statistical physics among trajectories gives behavior similar to QM, but still qualitatively different - particles leave an excited state exponentially instead of in quick jumps producing a photon for energy conservation. I think this difference arises because we assumed that at a given time the particle is at a given point, while in fact it's rather a density spread around this point. If instead of considering a single trajectory for a particle we take some density of trajectories with some force which wants to hold them together, the particle's density, instead of slowly leaking, should wait for a moment and then quickly jump to the lower state as a whole. This model should be equivalent to a simpler one - using a trajectory in the space of densities (instead of a density of trajectories). But I don't see how to explain the production of the photon - maybe it will occur as an artifact, maybe energy conservation has to be somehow added artificially? The question is what holds them together to form exactly one whole particle - not more, not less? A similar question is why charges/quantum numbers come in integer multiples? I'll briefly present my intuitions about deeper physics. The first is that the answer to these questions is that particles are some topological singularities of the field.
That explains spontaneous creation of a pair/annihilation, and that such a pair should have smaller energy when closer - creating an attractive force. The qualitative difference between the weak and strong interactions could be due to the topological difference between SU(2) and SU(3). So a particle would be some relatively stable state of the field (in which, for example, the spin has a spatial direction). It would have some energy, which should correspond to the mass of the particle. The energy/singularity densities somehow create spacetime curvature...? Now, if particles are not just points, the field they consist of fluctuates - it still has some degrees of freedom (some vibrations). I think that quantum entanglement is just the result of these degrees of freedom - when particles interact, they synchronize the fluctuations of their fields. But these degrees of freedom are very sensitive - they decohere easily... ps. If someone is interested in the inequality for the dominant eigenvalue ([math]\lambda[/math]) of a real symmetric matrix ([math]M[/math]) with nonnegative terms from the paper: [math]\ln(\lambda)\geq\frac{\sum_i k_i \ln(k_i)}{\sum_i k_i}\qquad[/math] where [math] k_i = \sum_j M_{ij}[/math], I've started a separate thread: http://www.scienceforums.net/forum/showthread.php?t=36717 Simplifying the picture: field theory says that every point of spacetime has some value, for example from U(1)*SU(2)*SU(3). This field doesn't have something like a zero value, so the vacuum must have some nontrivial state, and intuitively it should be more or less constant in space. But it can fluctuate around this vacuum state - these fluctuations should carry all interactions. It also allows for some nontrivial, spatially localized, relatively stable states - particles. They should be topological singularities (like a left/right swirl). Another argument is that if they weren't, they could continuously drop to the vacuum state, which is smoother - has smaller energy - so they wouldn't be stable.
Sometimes the fluctuations of the field exceed some critical value and spontaneously create a particle/antiparticle pair. Observe that this value around which the vacuum fluctuates should have a huge influence on the choice of stable states for particles. Maybe this value is even the reason for the weak/strong interaction separation (at high energies this separation should weaken). It could also be the reason for matter/antimatter asymmetry... The problem with this picture is that it looks like the singularities could have infinite energy (I'm not sure if that's necessary?) If so, the problem could be that the Lagrangian is too simple? The other question is whether field theory is really the lowest level. Maybe it's the result of some lower structure...? I was thinking about how energy/singularities could create spacetime curvature... Let's think: what is time? It's usually described in the language of reason-result chains. These can happen at different speeds in different frames of reference. These reason-result chains are microscopically the results of some (four-dimensional) wave propagations. But remember that, for example, the speed of light depends on the material... the wave propagation speed depends on the microscopic structure of... the field it is going through. This field should be able to influence both time and spatial dimensions - slow the propagation down from the speed of light. In this picture spacetime is not some curved 4D manifold embedded in some multidimensional something, but just flat - the local microscopic structure of the field specifically slows down some time/space waves of the field. ...and, for example, we cannot get out from behind a black hole horizon because the microscopic structure of the field (created by the gravity) won't allow any wave to propagate. This picture also doesn't allow for hyperspace jumps/time travel... Finally, something most controversial - the whole picture... Let's imagine such a particle, for example one of those created in the big bang.
This stable state of the field in 4D is well localized in three of the dimensions and very long in the last one... most of those created in the big bang should choose these directions in a similar way... choosing some general (average...) direction for time (the long one) and space (the localized ones)... These trajectories entangle somehow in spacetime... sometimes, for example because of the field created by some gravity, they change their (time) direction a bit - observed as general relativity... ...their statistics creates (purely Boltzmannian - without magic tricks like the Wick rotation) quantum-mechanics-like behavior... What do you think about this picture? Is internal curvature really necessary, or is it only an illusion (like light 'thinking' that the geometry changes when it changes material)? Are the rules for the time dimension really special? ...is time imaginary? Or is that only a result of some solutions to these rules, particularly due to boundary conditions (the big bang)...?
While thinking about a random walk on a graph, the standard approach is that every possible edge is equally probable - a kind of maximization of local entropy. There is a new approach (MERW) which maximizes global entropy (of paths) - for each two vertices, each path of a given length between them is equally probable. For a regular graph it gives the same walk, but usually they are different - in MERW we get some localizations not known in the standard random walk. It was derived in http://www.arxiv.org/abs/0710.3861 in the context of optimal encoding. In http://www.arxiv.org/abs/0810.4113 its localization properties are analyzed. It can also suggest the nature of quantum physics ( http://www.advancedphysics.org/forum/showthread.php?p=47998 ). In the second paper a nice inequality for the dominant eigenvalue ([math]\lambda[/math]) of a symmetric real matrix [math]M[/math] with nonnegative terms is also introduced; I'll write it in full generality: [math]\forall_{n>0}\qquad\ln(\lambda)\geq \frac{1}{n}\frac{\sum_i k_{ni} \ln(k_{ni})}{\sum_i k_{ni}}[/math] where [math]k_{ni}:=\sum_j (M^n)_{ij}[/math]. For a 0/1 matrix and n=1 it's just the comparison between the entropies of the two random walks. To generalize it to other symmetric matrices with nonnegative terms, observe that in the case with a potential ([math] M_{ij}=e^{-V_{ij}}[/math]) we have optimized not the average entropy but the so-called (average) free energy - the inequality is the result of [math]\max_p\ \left(-\sum_i p_i \ln(p_i)-\sum_i E_i p_i\right) = \ln\left(\sum_i e^{-E_i}\right)\quad \left(=\ln(\lambda)=-F\right)[/math] where the maximum is attained for [math] p_i \sim e^{-E_i}[/math]. Finally, to get the equation above, we take [math] M^n [/math] instead of [math]M[/math]. This inequality is much stronger than the inequalities of this type I know, and quickly gives quite a good lower approximation of the dominant eigenvalue. Have you met this or a similar inequality? How can it be proven directly (without using the sequence interpretation)?
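A quick numeric sanity check of the n = 1 case of this inequality, on an arbitrarily chosen small irregular graph (the example graph is mine, just for illustration):

```python
import numpy as np

# Check ln(lambda) >= (sum_i k_i ln k_i) / (sum_i k_i), with k_i = sum_j M_ij,
# for the adjacency matrix of K4 with one edge removed (degrees 2,3,3,2).
M = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

lam = np.max(np.linalg.eigvalsh(M))   # dominant eigenvalue of symmetric M
k = M.sum(axis=1)                     # row sums k_i (here: vertex degrees)
bound = np.sum(k * np.log(k)) / np.sum(k)

print(np.log(lam), bound)             # ln(lambda) should exceed the bound
assert np.log(lam) >= bound
```

For this graph lambda = (1 + sqrt(17))/2, so ln(lambda) ≈ 0.9406 versus a bound of ≈ 0.9364 - the lower approximation is indeed quite tight.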

Maximal entropy random walk and euclidean path integrals
Duda Jarek replied to Duda Jarek's topic in Physics
There is a nice picture from another forum: spacetime looks like a four-dimensional jello - tensions from both past and future influence the present. Observe that this picture intuitively corresponds to general relativity as well. I believe a discussion has just started there, feel welcome: http://www.advancedphysics.org/forum/showthread.php?p=48462
Maximal entropy random walk and euclidean path integrals
Duda Jarek replied to Duda Jarek's topic in Physics
I'm sorry - I didn't realize it's supported. Here are the main equations again. Assuming a Boltzmann distribution among paths - that the probability of a path is proportional to exp(-integral of the potential over the path) - gives the propagator [math]K(x,y,t)=\frac{\langle x|e^{-t\hat{H}}|y\rangle}{e^{-tE_0}}\frac{\psi(y)}{\psi(x)}[/math] where [math]\hat{H}=-\frac{1}{2}\Delta+V[/math], [math]E_0[/math] is the ground (smallest) energy and [math]\psi[/math] is the corresponding eigenfunction (which should be real and positive). The propagator fulfills [math]\int K(x,y,t)dy=1,\quad \int K(x,y,t)K(y,z,s)dy=K(x,z,t+s) [/math] and has the stationary probability distribution [math]\rho(x)=\psi^2(x)[/math]: [math]\int \rho(x)K(x,y,t)dx = \rho(y) [/math]. To summarize, we can interpret physics: locally (in time) - at a given moment a particle chooses its behavior according to the situation at this moment (the standard approach), or globally (in time) - the interaction is between trajectories of particles in four-dimensional spacetime. In the local interpretation spacetime is being slowly created as time passes; in the global one we go along the time dimension of a more or less already created spacetime. In the local interpretation particles in fact use the whole history (stored in fields) to choose their behavior. If, according to it, we assumed that the probability distribution among paths ending at a given moment is given by exp(-integral of the potential over the path), we would get that the probability distribution of finding the particle is [math]\rho(x)\cong\psi(x)[/math]. To get the square, paths cannot finish at this moment, but have to go on into the future - their entanglement in both past and future has to influence the behavior at a given moment. Another argument that both past and future matter for choosing behavior is that we rather believe in CPT conservation, which switches past and future.
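These propagator properties have an exact discrete analogue that is easy to verify numerically: for a symmetric nonnegative matrix M with dominant eigenpair (lambda, psi), take P_ij = (M_ij/lambda) psi_j/psi_i. A sketch (the matrix M is an arbitrary example of mine):

```python
import numpy as np

# Discrete analogue of the propagator: rows of P sum to 1 (int K dy = 1),
# P built from M^2 equals P squared (Chapman-Kolmogorov), and rho = psi^2
# is the stationary distribution.
M = np.array([[1.0, 0.5, 0.0],
              [0.5, 0.2, 0.7],
              [0.0, 0.7, 0.1]])

w, V = np.linalg.eigh(M)
lam, psi = w[-1], np.abs(V[:, -1])        # dominant eigenpair, psi > 0

P = (M / lam) * psi[None, :] / psi[:, None]
rho = psi**2                              # psi has unit norm, so rho sums to 1

assert np.allclose(P.sum(axis=1), 1)      # normalization
assert np.allclose(rho @ P, rho)          # stationarity of psi^2
P2 = (M @ M / lam**2) * psi[None, :] / psi[:, None]
assert np.allclose(P @ P, P2)             # semigroup / Chapman-Kolmogorov
print("all propagator identities hold")
```

The stationarity check is a one-line computation: sum_i psi_i^2 (M_ij/lambda)(psi_j/psi_i) = (psi_j/lambda) sum_i M_ij psi_i = psi_j^2.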
Observe also that in this global interpretation the two-slit experiment is kind of intuitive - the particle is generally a smeared trajectory, but it can split for some finite time and has a tendency to join again (collapse), for example because in the split form it has higher energy.
Maximal entropy random walk and euclidean path integrals
Duda Jarek replied to Duda Jarek's topic in Physics
Quantum physics says that atoms should really approach their ground state (p(x) ~ psi^2(x)), as in my model. A single atom does it by emitting energy in portions of light. But from the point of view of statistical physics, if there were many of them, their average probability distribution should behave locally more or less like my propagator. Particles don't behave only locally - they don't use just other particles' positions to choose what to do at a given moment, but also their histories - stored in fields (electromagnetic, fermionic...). So this would suggest that the statistics should be made among paths ending at a given moment, but that would give the stationary probability distribution p(x) ~ psi(x). To get the square, paths should go into the future as well. Intuitively, I'm starting to think that particles should be imagined as one-dimensional paths in four-dimensional spacetime. They don't end at a given moment to be slowly created further, but are already somehow entangled in the future - it just comes out of the statistics... So the passing of time we feel is only our going through the time dimension of some four-dimensional construct with strong boundary conditions (the big bang)...?
Maximal entropy random walk and euclidean path integrals
Duda Jarek replied to Duda Jarek's topic in Physics
I've just corrected in (0710.3861) the derivation of the equation for the propagator - the probability density of finding a particle at position y after time t, having started at position x: K(x,y,t) = <x| e^{-tH} |y> / e^{-tE_0} * psi(y)/psi(x), where psi is the ground state of H with energy E_0. At first look it's a bit similar to the Feynman-Kac equation: K_FK(x,y,t) = <x| e^{-tH} |y>. The difference is that in FK the particle decays - the potential gives the decay rate. After infinite time it will completely vanish. In the first model the particle doesn't decay (\int K(x,y,t)dy = 1), but approaches the stationary distribution p(x) = psi^2(x). The potential defines that the probability of going along a given path is proportional to e^{-integral of the potential over this path}. The question whether physics is local or global seems to be deeper than I thought. Statistical physics would say that the distribution of paths should really look like that... but to achieve this distribution, the particle would have to see all paths - behave globally. Statistical physics would also say that the probability of a particle being in a given place should behave like p(x) ~ e^{-V(x)}. We would get that distribution from a GRW-like model - with behavior chosen locally... Fulfilling statistical mechanics globally (p(x) = psi^2(x)) would create some localizations not met in models which fulfill statistical mechanics locally (p(x) ~ e^{-V(x)}). Have you met such localizations? Is physics local or global?
Maximal entropy random walk and euclidean path integrals
Duda Jarek replied to Duda Jarek's topic in Physics
To argue that MERW corresponds to physics better, let's observe that it's scale-free. GRW chooses some time scale - corresponding to one jump. Observe that all equations for GRW work not only for 0/1 matrices, but for all symmetric ones with nonnegative terms: k_i = \sum_j M_ij (including diagonals). We could use them, for example, on M^2 to construct a GRW with a time scale twice as large. But there would be no direct correspondence between these two GRWs. The MERW for M^2 is just the square of the MERW for M - like in physics, no time scale is emphasized: P^t_ij = (M^t_ij / lambda^t) psi(j)/psi(i)
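This scale-free property is easy to verify numerically. A sketch comparing both walks on an irregular example graph (the graph and helper names are my own choices):

```python
import numpy as np

# MERW built from M^2 equals the square of MERW built from M (scale-free),
# while the same fails for GRW (the degree-normalized walk) on an
# irregular graph.
M = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

def grw(A):                        # generic random walk: P_ij = A_ij / k_i
    return A / A.sum(axis=1, keepdims=True)

def merw(A):                       # maximal entropy random walk
    w, V = np.linalg.eigh(A)
    lam, psi = w[-1], np.abs(V[:, -1])
    return (A / lam) * psi[None, :] / psi[:, None]

print(np.allclose(merw(M @ M), merw(M) @ merw(M)))   # True: scale-free
print(np.allclose(grw(M @ M), grw(M) @ grw(M)))      # False: GRW picks a scale
```

(M^2 has the same dominant eigenvector as M with eigenvalue lambda^2, which is exactly why the psi-ratios telescope in the MERW case.)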
While thinking about a random walk on a graph, the standard approach is that every possible edge is equally probable - a kind of maximization of local entropy. There is a new approach (MERW) which maximizes global entropy (of paths) - for each two vertices, each path of a given length between them is equally probable. For a regular graph they give the same walk, but generally they are different - in MERW we get some localizations not met in the standard random walk: http://arxiv.org/abs/0810.4113 This approach can be generalized to a random walk with some potential - something like discretized Euclidean path integrals. Now, taking the infinitesimal limit, we get p(x) = psi^2(x), where psi is the normalized eigenfunction corresponding to the ground state (E_0) of the corresponding Hamiltonian H = -1/2 laplacian + V. This equation is known - it can be obtained instantly from the Feynman-Kac equation. But we also get an analytic formula for the propagator: K(x,y,t) = (<x| e^{-2tH} |y> / e^{-2tE_0}) * psi(y)/psi(x). Usually we vary paths around the classical one, getting some approximation - I haven't met non-approximated equations of this type (?). In the second section is the derivation: http://arxiv.org/abs/0710.3861 Boldly, we could say that thanks to analytic continuation we could use imaginary time and get a solution to standard path integrals? Have you heard about this last equation? Is physics local - particles decide locally, or global - they see the space of all trajectories and choose with some probability...? OK - it was meant to be a rhetorical question. A physicist should (?) answer that the key is interference - microscopically it's local, then it interferes with itself, with the environment... and, for example, it looks like a photon would go around a negative-refractive-index material... I wanted to emphasize that this question has to be deeply understood... especially while trying to discretize physics, for example: which random walk corresponds to physics better?
It looks like, to behave as in MERW, the particle would have to 'see' all possible trajectories... but maybe it could be the result of a macroscopic time step? Remember that an edge of such a graph corresponds to infinitely many paths... To translate this question into lattice field theories, we should also think about how the discrete laplacian really should look...?
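The localization mentioned above can be seen already on a simple path graph (my own choice of example): the GRW stationary density is proportional to node degree, so flat in the bulk, while the MERW stationary density is psi^2 for the dominant eigenvector psi, concentrated in the middle.

```python
import numpy as np

# Path graph of 7 nodes: adjacency matrix with 1s on the off-diagonals
n = 7
M = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

# GRW stationary distribution: proportional to node degree k_i
k = M.sum(axis=1)
grw = k / k.sum()

# MERW stationary distribution: psi_i^2 for the normalized dominant eigenvector
w, V = np.linalg.eigh(M)
psi = np.abs(V[:, -1])
merw = psi**2

# GRW is nearly uniform; MERW peaks at the center node (localization)
print(np.round(grw, 3))
print(np.round(merw, 3))
```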

Data correction methods resistant to pessimistic cases
Duda Jarek replied to Duda Jarek's topic in Computer Science
I've just realized that Hamming codes and tripling bits are special (degenerated) cases of ANS based data correction. In the previous post I gave arguments that it would be beneficial if any two allowed states had Hamming distance at least 2. If we make this distance at least 3, we can unambiguously and instantly correct a single error, as in Hamming codes.

To get bit tripling from ANS we use states from 1000 to 1111: symbol '0' is in state 1000, symbol '1' is in state 1111 (Hamming distance 3), and the remaining six states hold the forbidden symbol. We have only 1 appearance of each allowed symbol, so after decoding it, before the bit transfer, the number of the state always drops to '1' and the three youngest bits are transferred from the input.

To get Hamming 4+3, states are from 10000000 to 11111111. We have 16 allowed symbols, from '0000' to '1111', each with exactly one appearance - the state 1*******, where the stars are the 7 bits it would be coded into in Hamming; any two different ones have Hamming distance at least 3. After decoding, the state drops to '1' again and this '1' becomes the oldest bit after the bit transfer.

The fact that each allowed symbol has only one appearance makes the state drop to '1' after each decoding step - it's a kind of degenerated case: all blocks are independent, we don't transfer any redundancy. It can handle a large error density, like 1/7 (for Hamming 4+3)... but only while each block contains at most 1 error. In practice errors don't come with such regularity, and even with much smaller error density Hamming loses a lot of data (like 16 bits per kilobyte for 0.01 error probability).

Let's think about the theoretical limit of bits of redundancy we have to add per bit of information, for an assumed statistical error distribution, to be able to fully correct the file. To find this threshold, let's think about a simpler-looking question: how much information is stored in such an uncertain bit? 
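For reference, the Hamming 4+3 code discussed above can be sketched in a few lines (the standard Hamming(7,4) construction, not taken from the post; function names are mine):

```python
# Hamming(7,4): 4 data bits + 3 parity bits, any single bit error is correctable
def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4              # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4              # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4              # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword, positions 1..7

def hamming74_correct(c):
    s = 0
    for i, bit in enumerate(c, start=1):
        if bit:
            s ^= i                 # syndrome = XOR of positions of set bits
    if s:                          # non-zero syndrome points at the flipped bit
        c = c.copy()
        c[s - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]   # extract the 4 data bits

code = hamming74_encode([1, 0, 1, 1])
code[4] ^= 1                       # flip one bit of the codeword
assert hamming74_correct(code) == [1, 0, 1, 1]
```

Note that, as the post says, each 7-bit block is corrected independently: two errors in the same block are silently miscorrected.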
Let's take the simplest error distribution model: for each bit the probability that it's switched is equal to e (near zero). So if we see '1', we know that with probability 1-e it's really '1', and with probability e it's '0'. If we knew which of these cases we have - which is worth h(e) = -e lg(e) - (1-e) lg(1-e) bits - we would have a whole bit. So such an uncertain bit is worth 1-h(e) bits, and to transfer n real bits we have to use at least n/(1-h(e)) of these uncertain bits. The theoretical limit to be able to read a message is (asymptotically) h(e)/(1-h(e)) additional bits of redundancy per bit of information.

So a perfect data correction coder for e=1/100 error probability would need only about 0.088 additional bits/bit to be able to restore the message. Hamming 4+3 uses additional 0.75 bits/bit and still loses 16 bits/kilobyte with the same error distribution.

Hamming assumes that every 7-bit block can come in 8 ways: correct, or with one of the 7 bits changed. It uses the same amount of information to encode each of them, so it adds lg(8) = 3 bits of redundancy to each block - which would be optimal... but only if all 8 of these ways were equally probable under the error distribution. In practice, most probably we have the error-free case, then the cases with one error... and with much smaller probabilities the cases with more errors, depending on how the error distribution of our medium looks. To go in the direction of the perfect error correction coder, we have to break with the uniform distribution of cases as in Hamming and try to correspond to the real error distribution probabilities. 
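The bound above is easy to evaluate; a minimal sketch (function names are mine):

```python
from math import log2

def h(e):
    # Binary entropy: uncertainty per noisy bit, in bits
    return -e * log2(e) - (1 - e) * log2(1 - e)

def redundancy_limit(e):
    # Minimal added redundancy per information bit: h(e) / (1 - h(e))
    return h(e) / (1 - h(e))

# For e = 1/100: about 0.088 bits of redundancy per information bit,
# versus the 0.75 bits/bit that Hamming 4+3 spends
assert abs(redundancy_limit(0.01) - 0.088) < 0.001
```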
If the intermediate state of ANS based data correction could take many values, we would transfer some redundancy: the 'blocks' would be somehow connected, and if more errors occurred in one of them, we could use this connection to see that something is wrong and use some unused redundancy from the succeeding blocks to correct it - relying on the assumption that, according to the error distribution, the succeeding blocks are correct with large probability. We have huge freedom while choosing ANS parameters to get closer to the assumed probability model of the error distribution... to the perfect data correction coder. 
Data correction methods resistant to pessimistic cases
Duda Jarek replied to Duda Jarek's topic in Computer Science
I've just realized that we can use the huge freedom of choice of the functions for ANS to improve latency: we can ensure that if the forbidden symbol occurs and there was only a single error, it was among the bits used to decode this symbol. We may have to go back to the previous ones, but only if there were at least 2 errors among these bits - an order of magnitude less probable than previously. The other advantage is that if we try to verify a wrong correction by decoding further, a single error in a block will automatically tell us that the correction is wrong. There could be 2 errors, but they are much less probable; we can check for them much later.

The trick is that the forbidden symbol usually dominates the coding tables, so we can arrange that if for given transferred bits we get an allowed symbol, then for each sequence differing by one bit (Hamming distance 1) we get the forbidden symbol. So for the initialization we choose some amounts of the allowed symbols and we have to place them somehow. For example: take an unplaced symbol, place it in a random unused position (using a list of unused positions), and place the forbidden symbol on each state differing by one bit among 'some' of the last ones. This 'some' is a bit tricky: it has to work assuming that previously only allowed symbols were decoded, but it could be any of them. If we are not making compression, all of them are equally probable, and this 'some' is -lg(p_i) plus or minus 1: plus for high states, minus for low.

There should remain some unused states after this procedure. We can fill them with forbidden symbols or continue the above procedure, inserting more allowed symbols. This random initialization still leaves huge freedom of choice - we can use it to additionally encrypt the data, using a random generator initialized with the key. If we want data correction only, we can use the fact that in this procedure many forbidden symbols are marked a few times - the more of them, the smaller the output file... 
with a bit smaller but comparable safety. So we could consciously choose some good schemes, maybe even ones that use Hamming distance 2 (or greater) - to go back to the previous symbol there would have to occur 3 errors. For example the 4+3 scheme seems perfect: we transfer on average 7 bits, and for every allowed symbol there occur 7 forbidden ones. For some high states like 111******** (the stars are the transferred bits) we have to place 8 forbidden symbols, but for low ones like 10000****** we can place only six. Some forbidden states will be marked a few times, so we should go through the whole procedure, possibly using a slightly smaller (or larger) number of allowed symbols. 
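The placement idea can be sketched roughly as follows. This is a toy simplification of my own (names like `init_table` are hypothetical): it only enforces that any two allowed states end up at Hamming distance at least 2 by forbidding all 1-bit neighbours of each placed symbol, ignoring the actual ANS state transitions and the '-lg(p_i) plus or minus 1' subtlety.

```python
import random

def init_table(n_bits, n_allowed, seed=0):
    """Toy sketch: randomly place n_allowed symbols in a table of 2^n_bits
    states so that any two allowed states differ in at least 2 bits.
    Entries: symbol index >= 0 for allowed, -1 for forbidden."""
    rng = random.Random(seed)          # seeded generator: doubles as encryption key
    size = 1 << n_bits
    table = [None] * size              # None = still free
    free = list(range(size))
    placed = 0
    while free and placed < n_allowed:
        s = free.pop(rng.randrange(len(free)))
        if table[s] is not None:       # was forbidden meanwhile, skip
            continue
        table[s] = placed              # place an allowed symbol
        for b in range(n_bits):        # forbid every 1-bit neighbour
            t = s ^ (1 << b)
            if table[t] is None:
                table[t] = -1
        placed += 1
    return [(-1 if v is None else v) for v in table]   # fill leftovers as forbidden
```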
Data correction methods resistant to pessimistic cases
Duda Jarek replied to Duda Jarek's topic in Computer Science
We can use the ANS entropy coding property to make the above process quicker and distribute redundancy really uniformly: instead of inserting a '1' symbol regularly to create an easily recognizable pattern, we can add a new symbol - the forbidden one. If it occurs, we know that something was wrong; the nearer it is, the more probable.

Let's say we use symbols with some probability distribution (p_i), so on average we need H = -sum_i p_i lg p_i bits/symbol. For example, if we want just to encode bytes without compression, we can treat them as 256 symbols with p_i = 1/256 (H = 8 bits/symbol). Our new symbol will have some chosen probability q. The nearer to 1 it is, the larger the redundancy density we add and the easier it is to correct errors. We have to rescale the rest of the probabilities: p_i -> (1-q) p_i. In this way the size of the file increases r = (H - lg(1-q))/H times.

Now if while decoding we get the forbidden symbol, we know that:
- with probability q, the first uncorrected error occurred in some of the bits used to decode the last symbol,
- with probability (1-q)q it occurred in the bits used while decoding the previous symbol,
- with probability (1-q)^2 q ...
The probability of the succeeding cases drops exponentially, especially if (1-q) is near 0. The number of required tries also grows exponentially, but observe that, for example, the number of all possible distributions of 5 errors among 50 bits is only about 2 million - it should be checked in a moment.

Let's compare this to two well-known data correction methods: Hamming 4+3 (to store 4 bits we use 3 additional bits) and tripling each bit (1+2). Take the simplest error distribution model: for each bit the probability that it is switched is constant, let's say e = 1/100. The probability that a 7-bit block has at least 2 errors is 1 - (1-e)^7 - 7e(1-e)^6 =~ 0.2%; for a 3-bit block it's about 0.03%. So for each kilobyte of data we irreversibly lose: 4*4 = 16 bits in Hamming 4+3, 2.4 bits for tripling bits. 
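These loss figures can be reproduced in a few lines (same e = 1/100 model; `p_block_lost` is my own name):

```python
# Probability that an n-bit block contains at least 2 errors, i.e. more than
# a single-error-correcting code can fix, with per-bit error probability e
def p_block_lost(n, e=0.01):
    return 1 - (1 - e)**n - n * e * (1 - e)**(n - 1)

# Hamming 4+3: a kilobyte of data = 8192 data bits = 2048 blocks of 4 data bits
bits_lost_hamming = p_block_lost(7) * 2048 * 4    # about 16 bits/kilobyte
# Tripling (1+2): 8192 data bits, each protected by its own 3-bit block
bits_lost_tripling = p_block_lost(3) * 8192       # about 2.4 bits/kilobyte
```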
We see that even for methods that look well protected, we lose a lot of data because of pessimistic cases. For ANS based data correction, the 4+3 case (r = 7/4): we add the forbidden symbol with probability q = 1 - 1/2^3 = 7/8, and each of the 2^4 = 16 symbols has probability 1/16 * 1/8 = 1/128. In practice ANS works best if the lg(p_i) aren't natural numbers, so q should (not necessarily) be not exactly 7/8 but something around it. Now if the forbidden symbol occurs, with probability about 7/8 we only have to try switching one of the (about) 7 bits used to decode this symbol. With 8 times smaller probability we have to switch one of the 7 bits from the previous one... and with much smaller probability, depending on the error density model, we should try switching some two bits... even extremely pessimistic cases look to take reasonable time to correct. For the 1+2 case (r = 3), the forbidden symbol has probability about 3/4, and '0', '1' have 1/8 each. With probability 3/4 we only have to correct one of 3 bits... with probability 255/256, one of 12 bits...

There is one problem: in practice coding/decoding tables should fit into cache, so we can use at most about a million states. While trying thousands of combinations during correction, we could accidentally get a correct-looking state with a wrong correction - a few bits would be corrected in a wrong way and we wouldn't even notice it. To prevent this we can for example use two similar stages of ANS: the first creates bytes, and the second converts the output of the first into the final sequence. The second would get uniformly distributed bytes, but ANS itself creates some small perturbations and it will work fine. Thanks to this, the number of states grows to the square of the initial one, reducing this probability by a few orders of magnitude at the cost of doubled time requirements. We could use some checksum to confirm it ultimately. 
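The rate formula r = (H - lg(1-q))/H from the previous post reproduces both schemes' parameters exactly; a quick check (`rate` is my own name):

```python
from math import log2

def rate(H, q):
    # Size increase factor when a forbidden symbol of probability q is added
    # to a source with entropy H bits/symbol: r = (H - lg(1-q)) / H
    return (H - log2(1 - q)) / H

# 4+3 scheme: 16 symbols (H = 4 bits), q = 1 - 1/2^3 = 7/8  ->  r = 7/4
assert abs(rate(4, 7/8) - 7/4) < 1e-9
# 1+2 scheme: 2 symbols (H = 1 bit), q = 3/4  ->  r = 3
assert abs(rate(1, 3/4) - 3) < 1e-9
```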
A first approximation of a free electron in a conductor can be a plane wave. So shouldn't there be more analogies from optics? Remember that a single electron can go through two slits at the same time... Photons interact with local matter (electrons/photons), which results (in first approximation) in a complex coefficient (n) - the refractive index. Its imaginary part describes absorption and corresponds to resistance in a conductor. Its real part corresponds to phase velocity/wavelength - is there an analogy in free electron behavior?

Different conductors have different local structure, electron distributions etc., so maybe they also differ in refractive index... If yes, there should be more effects from optics, like partial internal reflection, interferences... that we could use in practice. I know - electrons, unlike photons, interact with each other, so electron waves should quickly lose their coherence. But maybe we could use such quantum effects over short distances in crystals? Or maybe in one dimension - imagine for example a long (-CH=CH-CH=CH-...) molecule. Its free electrons should behave like a one-dimensional plane wave. Now exchange the hydrogen for, say, fluorine (-CF=CF-) - it should still be a good conductor, but the behavior of the electrons should be somehow different... shouldn't it have a different refractive index? If yes, for example (-CF=CH-) should have an intermediate one...

What for? Imagine for example something like an antireflective coating from optics: http://en.wikipedia.org/wiki/Antireflective_coating Let's say: a thick layer of higher refractive index material and a thin one of lower. The destructive interference in the thin layer happens only from the antireflective side (the thin layer) - shouldn't it reflect a smaller amount of photons/electrons than the second side? If we choose the reflective layer for the dominant thermal energy of photons/electrons, shouldn't it spontaneously create a gradient of densities? For example to change heat energy into electricity...

Isn't twoway mirror Maxwell's demon for photons?
Duda Jarek replied to Duda Jarek's topic in Physics
Maxwell's demon is something that spontaneously ('from nothing') creates a gradient of temperature/pressure/concentration - reducing entropy. It doesn't have to be perfect: if one side of the mirror were just a bit more likely to reflect photons, it would enforce a pressure gradient. The slightest pressure gradient it would spontaneously create could be used to create work (from energy stored in heat). For example we could connect both parts to constantly equilibrate their pressure. Through this connection the direction from higher to lower pressure would dominate, which we could use to create work (from heat) - for example by placing there something like a water wheel, but made of mirrors.

I completely agree that we usually don't observe entropy reductions, but maybe it's because such reductions usually have extremely low efficiency, so they are just imperceptible, shadowed by the general entropy increase...? The 2nd law is a statistical, mathematical property of a model with assumed physics. But it was proven for extremely simplified models! And even for such simplified models an approximation was used: while introducing functions like pressure and temperature, we automatically forget about microscopic correlations - it's a mean field approximation. Maybe these ignored small-scale interactions could be used to reduce entropy... For example, thermodynamics assumes that energy quickly equilibrates with the environment... but we have e.g. ATP, which stores its energy in a much more stable form than the surrounding molecules, to be converted into work...

I apologize for the two-way mirror example - I generally feel convinced now that they work only because of the difference in the amount of light; the effect while looking at dark glasses could be explained for example by their curvature. When I was thinking about it, I had a picture of the destructive interference from an antireflective coating. But let's look at such a coating... 
http://en.wikipedia.org/wiki/Antireflective_coating Let's say: a thick layer of higher refractive index material and a thin one of lower. The destructive interference in the thin layer happens only from the antireflective side (the thin layer) - shouldn't it reflect a bit smaller amount of photons than from the second side? ... creating a pressure gradient in a photon containment - reducing entropy.