
Posts posted by Duda Jarek

  1. In the standard approach to a random walk on a graph, every possible edge is equally probable - a kind of local entropy maximization. There is a new approach (MERW) which maximizes global entropy (of paths): for every two vertices, each path of a given length between them is equally probable. For a regular graph both give the same walk, but in general they differ - in MERW we get localization effects not known in the standard random walk.

    It was derived in http://www.arxiv.org/abs/0710.3861 in the context of optimal encoding. Its localization properties are analyzed in http://www.arxiv.org/abs/0810.4113. It can also suggest something about the nature of quantum physics ( http://www.advancedphysics.org/forum/showthread.php?p=47998 ).

     

    The second paper also introduces a nice inequality for the dominant eigenvalue ([math]\lambda[/math]) of a real symmetric matrix [math]M[/math] with nonnegative entries; I'll write it in full generality:

     

    [math]\forall_{n>0}\qquad\lg(\lambda)\geq \frac{1}{n}\frac{\sum_i k_{ni} \lg(k_{ni})}{\sum_i k_{ni}}[/math]

     

    where [math]k_{ni}:=\sum_j (M^n)_{ij}[/math]

     

    For a 0/1 matrix and n=1 it is just the comparison of the entropies of the two random walks. To generalize it to other symmetric matrices with nonnegative entries, observe that in the case with a potential ([math] M_{ij}=e^{-V_{ij}}[/math]) we optimize not the average entropy but the so-called (average) free energy - the inequality follows from

     

    [math]\max_p\ \left(-\sum_i p_i \ln(p_i)-\sum_i E_i p_i\ \right) = \ln\left(\sum_i e^{-E_i}\right)\quad \left(=\ln(\lambda)=-F\right)[/math]

    and the maximum is attained for [math] p_i \sim e^{-E_i}[/math].

     

    Finally, to get the inequality above, we have to use [math] M^n [/math] instead of [math]M[/math].

     

    This inequality is much stronger than the inequalities of this type I know and quickly gives quite a good lower bound for the dominant eigenvalue (a numerical check is sketched below).

    Have you met this or a similar inequality?

    How can it be proved directly (not using the sequence interpretation)?
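
    Here is a minimal numerical sketch of the bound (my own check, using random symmetric test matrices - not taken from the papers):

[code]
# Check  lg(lambda) >= (1/n) * sum_i k_ni lg(k_ni) / sum_i k_ni,  k_ni = sum_j (M^n)_ij,
# against the exact dominant eigenvalue of random symmetric nonnegative matrices.
import numpy as np

rng = np.random.default_rng(0)
for trial in range(3):
    A = rng.random((6, 6))
    M = (A + A.T) / 2                            # symmetric, nonnegative entries
    lam = np.max(np.linalg.eigvalsh(M))          # dominant eigenvalue
    for n in (1, 2, 4):
        k = np.linalg.matrix_power(M, n).sum(axis=1)          # k_ni
        bound = (k @ np.log2(k)) / (n * k.sum())
        print(f"trial {trial}, n={n}:  lg(lambda) = {np.log2(lam):.4f} >= {bound:.4f}")
[/code]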

  2. I'm sorry - I didn't realize it's supported. Here are the main equations again:

    Assuming a Boltzmann distribution among paths, i.e. that the probability of a path is proportional to

    exp(- integral of potential over the path)

    gives the propagator

    [math]K(x,y,t)=\frac{<x|e^{-t\hat{H}}|y>}{e^{-tE_0}}\frac{\psi(y)}{\psi(x)}[/math]

    where [math]\hat{H}=-\frac{1}{2}\Delta+V[/math], [math]E_0[/math] is the ground (smallest) energy and [math]\psi[/math] is the corresponding eigenfunction (which can be chosen real and positive).

    The propagator fulfills [math]\int K(x,y,t)dy=1,\quad \int K(x,y,t)K(y,z,s)dy=K(x,z,t+s) [/math]

    and has the stationary probability distribution [math]\rho(x)=\psi^2(x)[/math]:

    [math]\int \rho(x)K(x,y,t)dx = \rho(y) [/math]

     

    ---

     

    To summarize, we can interpret physics:

    - locally (in time) - at a given moment the particle chooses its behavior according to the situation at that moment (the standard approach), or

    - globally (in time) - the interaction is between trajectories of particles in four-dimensional spacetime.

     

    In the local interpretation spacetime is slowly being created as time passes; in the global one we move along the time dimension of an already more or less created spacetime.

     

    In the local interpretation particles in fact use the whole history (stored in fields) to choose their behavior.

    If, accordingly, we assumed that the probability distribution among paths ending at a given moment is given by exp(- integral of potential over the path), we would get that the probability distribution of finding the particle is [math]\rho(x)\cong\psi(x)[/math].

    To get the square, paths cannot end at this moment but have to continue into the future - their entanglement in both the past and the future has to influence the behavior at a given moment.

    Another argument that both past and future matter for the choice of behavior is that we rather believe in CPT symmetry, which switches past and future.

     

    Observe also that in this global interpretation the two-slit experiment becomes somewhat intuitive - the particle is generally a smeared trajectory, but it can split for some finite time and has a tendency to join again (collapse).

    For example, because in the split form it has higher energy.
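
    Here is a minimal discretized sketch of the propagator above (my own illustration on a finite grid, with an arbitrary harmonic potential), checking that its rows sum to 1 and that psi^2 is its stationary density:

[code]
# H = -(1/2) * (discrete Laplacian) + V on a finite grid,
# K(x,y,t) = <x|exp(-tH)|y> * exp(t*E0) * psi(y)/psi(x).
import numpy as np

N, dx, t = 81, 0.1, 0.7
x = (np.arange(N) - (N - 1) / 2) * dx
V = 0.5 * x**2                                   # arbitrary (harmonic) potential

lap = (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1) - 2 * np.eye(N)) / dx**2
H = -0.5 * lap + np.diag(V)

E, vecs = np.linalg.eigh(H)
E0, psi = E[0], np.abs(vecs[:, 0])               # ground energy and (positive) ground state

expH = vecs @ np.diag(np.exp(-t * E)) @ vecs.T   # matrix exp(-tH)
K = expH * np.exp(t * E0) * psi[None, :] / psi[:, None]

rho = psi**2
print("max |row sum - 1| :", np.abs(K.sum(axis=1) - 1).max())   # stochasticity
print("max |rho K - rho| :", np.abs(rho @ K - rho).max())       # psi^2 is stationary
[/code]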

  3. Quantum physics says that atoms should indeed approach their ground state (p(x)~psi^2(x)), as in my model. A single atom does it by emitting energy in portions of light. But from the point of view of statistical physics, if there were many of them, their average probability distribution should locally behave more or less like my propagator.

     

    Particles don't behave only locally - they don't use just the other particles' positions to choose what to do at a given moment, but also their histories, stored in fields (electromagnetic, fermionic ...).

    This would suggest that the statistics should be taken over paths ending at a given moment, but that would give the stationary probability distribution p(x)~psi(x).

    To get the square, paths should also extend into the future.

     

    Intuitively, I'm starting to think that particles should be imagined as one-dimensional paths in four-dimensional spacetime. They do not end at a given moment to be slowly created further, but are already somehow entangled in the future - it just comes from the statistics...

    So the passing of time we feel is only moving along the time dimension of some four-dimensional construct with strong boundary conditions (the Big Bang)...?

  4. I've just corrected in (0710.3861) the derivation of the equation for the propagator - the probability density of finding in position y, after time t, a particle which started in position x:

     

    K(x,y,t) = <x|e^{-tH}|y> / e^{-tE_0} * psi(y)/psi(x)

     

    where psi is the ground state of H with energy E_0.

    At first glance it's a bit similar to the Feynman-Kac formula:

     

    K_FK(x,y,t) = <x|e^{-tH}|y>

     

    The difference is that in FK the particle decays - the potential gives the decay rate. After infinite time it will completely vanish.

    In the first model the particle doesn't decay (\int K(x,y,t)dy=1), but approaches the stationary distribution:

    p(x)=psi^2(x)

    The potential determines that the probability of going through a given path is proportional to

    e^{- integral of potential over this path}.

     

    The question whether physics is local or global seems deeper than I thought.

    Statistical physics would say that the distribution of paths really should look like that ... but to achieve this distribution, the particle would have to see all paths - behave globally.

    Statistical physics would also say that the probability of a particle being in a given place should behave like p(x) ~ e^{-V(x)}.

    We would get this distribution for a GRW-like model - with behavior chosen locally...

     

    Fulfilling statistical mechanics globally (p(x)=psi^2(x)) would create some localizations, not found in models that fulfill statistical mechanics locally (p(x) ~ e^{-V(x)}) - a toy comparison is sketched at the end of this post.

     

    Have you met such localizations?

    Is physics local or global?
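
    A minimal sketch of the comparison (my own toy example, with an assumed random potential on a 1D lattice): the 'global' density psi^2 typically localizes much more strongly than the 'local' Boltzmann density e^{-V}.

[code]
# Compare p(x) ~ exp(-V(x)) with p(x) = psi^2(x), psi = ground state of
# H = -(1/2)*Laplacian + V, for a random potential on a 1D lattice.
import numpy as np

rng = np.random.default_rng(0)
N = 500
V = rng.random(N)                                  # random potential in [0,1)

lap = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1) - 2 * np.eye(N)
H = -0.5 * lap + np.diag(V)

E, vecs = np.linalg.eigh(H)
psi = np.abs(vecs[:, 0])                           # ground state

p_local = np.exp(-V); p_local /= p_local.sum()
p_global = psi**2;    p_global /= p_global.sum()

# Inverse participation ratio: ~1/N for a spread-out density, larger when localized.
for name, p in [("exp(-V)", p_local), ("psi^2  ", p_global)]:
    print(f"{name}: IPR = {np.sum(p**2):.4f}   (1/N = {1/N:.4f})")
[/code]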

  5. To argue that MERW corresponds better to physics, observe that it is scale-free, while GRW singles out a time scale - the one corresponding to a single jump.

     

    Observe that all equations for GRW work not only for 0/1 matrices but for all symmetric matrices with nonnegative entries:

    k_i = \sum_j M_{ij} (diagonal terms included)

    We could apply this, for example, to M^2 to construct a GRW for a time scale twice as large. But there would be no direct correspondence between these two GRWs.

    MERW for M^2 is just the square of MERW for M - as in physics, no time scale is singled out:

    P^t_{ij} = ((M^t)_{ij} / lambda^t) psi(j)/psi(i)
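
    A minimal numerical sketch of this scale-freeness (my own check on a random symmetric nonnegative matrix):

[code]
# MERW transition matrix: P_ij = (M_ij / lambda) * psi_j / psi_i,
# psi = dominant eigenvector of M. Check that MERW(M^2) = MERW(M)^2.
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((8, 8))
M = (A + A.T) / 2                                  # symmetric, nonnegative entries

def merw(M):
    w, v = np.linalg.eigh(M)
    lam, psi = w[-1], np.abs(v[:, -1])             # dominant eigenvalue/eigenvector
    return (M / lam) * psi[None, :] / psi[:, None]

P1 = merw(M)
P2 = merw(M @ M)                                   # MERW at the doubled time scale

print("rows of P1 sum to 1  :", np.allclose(P1.sum(axis=1), 1))
print("MERW(M^2) = MERW(M)^2:", np.allclose(P2, P1 @ P1))
[/code]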

  6. While thinking about a random walk on a graph, the standard approach is that every possible edge is equally probable - a kind of local entropy maximization.

    There is a new approach (MERW) which maximizes global entropy (of paths): for every two vertices, each path of a given length between them is equally probable.

    For a regular graph they give the same walk, but in general they differ - in MERW we get some localizations, not met in the standard random walk (a toy comparison is sketched at the end of this post):

    http://arxiv.org/abs/0810.4113

     

    This approach can be generalized to a random walk with a potential - something like discretized Euclidean path integrals.

    Now, taking the infinitesimal limit, we get

    p(x) = psi^2(x)

    where psi is the normalized eigenfunction corresponding to the ground state (E_0) of the corresponding Hamiltonian H = -1/2 laplacian + V.

    This equation is known - it can be obtained immediately from the Feynman-Kac formula.

     

    But we also get an analytic formula for the propagator:

    K(x,y,t) = (<x|e^{-2tH}|y>/e^{-2tE_0}) * psi(y)/psi(x)

    Usually one varies paths around the classical one to get some approximation - I haven't met exact (non-approximated) equations of this type (?)

    The derivation is in the second section of:

    http://arxiv.org/abs/0710.3861

    Boldly, we could say that thanks to analytic continuation we could use imaginary time and get a solution for standard path integrals?

     

    Have you heard about this last equation?

    Is physics local - particles decide locally - or global - they see the space of all trajectories and choose among them with some probability...?

     

    OK - it was meant to be a rhetorical question. A physicist should (?) answer that the key is interference - microscopically it is local, then it interferes with itself, the environment ... and, for example, it looks as if a photon went around a negative-refractive-index material...

     

    I wanted to emphasize that this question has to be deeply understood ... especially when trying to discretize physics, for example: which random walk corresponds better to physics?

    It looks like, to behave as in MERW, the particle would have to 'see' all possible trajectories ... but maybe it could be the result of a macroscopic time step?

    Remember that an edge of such a graph corresponds to infinitely many paths ...

     

    To translate this question into lattice field theories, we should also think about what the discrete Laplacian should really look like...?
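
    A toy sketch of the localization difference (an assumed 'lollipop' graph, not from the paper): GRW's stationary distribution is proportional to the degree, MERW's to psi_i^2 with psi the dominant eigenvector of the adjacency matrix, and the latter concentrates almost entirely in the densest part.

[code]
# 'Lollipop' graph: a 10-vertex clique with a 20-vertex path attached.
import numpy as np

n_clique, n_path = 10, 20
N = n_clique + n_path
M = np.zeros((N, N))
M[:n_clique, :n_clique] = 1 - np.eye(n_clique)     # complete graph on the first 10 vertices
for i in range(n_clique - 1, N - 1):               # path attached to the last clique vertex
    M[i, i + 1] = M[i + 1, i] = 1

deg = M.sum(axis=1)
grw = deg / deg.sum()                              # GRW stationary distribution

w, v = np.linalg.eigh(M)
psi = np.abs(v[:, -1])                             # dominant eigenvector
merw = psi**2 / (psi**2).sum()                     # MERW stationary distribution

print(f"P(in the clique)      : GRW = {grw[:n_clique].sum():.3f}   MERW = {merw[:n_clique].sum():.6f}")
print(f"P(far end of the path): GRW = {grw[-1]:.2e}   MERW = {merw[-1]:.2e}")
[/code]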

  7. I've just realized that Hamming codes and bit tripling are special (degenerate) cases of ANS-based data correction :)

    In the previous post I argued that it would be beneficial if any two allowed states had Hamming distance at least 2.

    If we made this distance at least 3, we could instantly and unambiguously correct a single error, as in Hamming codes.

     

    To get bit tripling from ANS we use:

    states from 1000 to 1111

    Symbol '0' is in state 1000, symbol '1' is in state 1111 (Hamming distance 3), and the remaining six states hold the forbidden symbol.

    Each allowed symbol has only one appearance, so after decoding it, before the bit transfer, the state number always drops to '1' and the three youngest bits are transferred from the input.

     

    To get Hamming 4+3,

    states are from 10000000 to 11111111

    We have 16 allowed symbols, from '0000' to '1111', each with exactly one appearance - the state 1*******, where the stars are the 7 bits of its Hamming encoding - so two different ones have Hamming distance at least 3.

    After decoding, the state drops to '1' again and this '1' becomes the oldest bit after the bit transfer.

     

    The fact that each allowed symbol has only one appearance means that after decoding we drop to '1' every time - it's a kind of degenerate case: all blocks are independent, we don't transfer any redundancy between them.

    It can handle a large error density, like 1/7 (for Hamming 4+3) ... but only as long as each block has at most 1 error.

    In practice errors don't come with such regularity, and even with a much smaller error density Hamming loses a lot of data (like 16 bits per kilobyte for 0.01 error probability).

     

    Let's think about the theoretical limit on the number of redundancy bits we have to add per bit of information, for an assumed statistical error distribution, to be able to fully correct the file.

    To find this threshold, let's consider a simpler-looking question: how much information is stored in such an uncertain bit?

    Take the simplest error distribution model - for each bit the probability that it is flipped equals e (near zero), so if we see '1' we know that with probability 1-e it's really '1', and with probability e it's '0'.

    So if we also knew which of these cases holds - information worth

    h(e) = -e lg(e) - (1-e) lg(1-e)

    bits - we would have a whole bit.

    So such an uncertain bit is worth 1-h(e) bits.

    So to transfer n real bits, we have to use at least n/(1-h(e)) of these uncertain bits - the theoretical limit to be able to read the message is (asymptotically)

    h(e)/(1-h(e)) additional bits of redundancy per bit of information.

     

    So a perfect data correction coder for e = 1/100 error probability would need only an additional 0.088 bits/bit to be able to restore the message.

    Hamming 4+3, despite using an additional 0.75 bits/bit, loses 16 bits/kilobyte with the same error distribution (see the sketch at the end of this post).

     

    Hamming assumes that every 7-bit block can come in 8 ways - correct, or with one of its 7 bits flipped.

    It uses the same amount of information to encode each of them, so it has to add at least lg(8) = 3 bits of redundancy in each block - we see it's done optimally...

    ... but only if, for this error distribution, all 8 of these ways were equally probable...

    In practice the most probable case is no error, then one error ... and, with much smaller probabilities, more errors ... depending on what the error distribution in our medium looks like.

     

    To move in the direction of the perfect error-correction coder, we have to break with the uniform distribution of cases, as in Hamming, and try to match the real error-distribution probabilities.

     

    If the intermediate state in ANS-based data correction could take many values, we would transfer some redundancy - the 'blocks' would be connected, and if more errors occurred in one of them, we could use this connection to see that something is wrong and use unused redundancy from succeeding blocks to correct it - relying on the assumption that, according to the error distribution, the succeeding blocks are correct with large probability.

    We have huge freedom in choosing the ANS parameters to get closer to the assumed probability model of the error distribution ... closer to the perfect data correction coder.
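
    A minimal sketch of the figures above (assuming the i.i.d. bit-flip model with e = 1/100):

[code]
# Theoretical limit h(e)/(1-h(e)) of redundancy bits per information bit,
# versus Hamming 4+3 (0.75 bits/bit, yet losing data when a block has >= 2 errors).
from math import log2

def h(e):                                   # binary entropy in bits
    return -e * log2(e) - (1 - e) * log2(1 - e)

e = 0.01
print(f"theoretical limit : {h(e) / (1 - h(e)):.3f} redundancy bits / information bit")
print(f"Hamming 4+3       : {3 / 4:.3f} redundancy bits / information bit")

p_fail = 1 - (1 - e)**7 - 7 * e * (1 - e)**6          # >= 2 errors in a 7-bit block
loss_per_kB = (8192 / 4) * p_fail * 4                 # 2048 blocks/kB of data, 4 data bits each
print(f"P(block unrecoverable) = {p_fail:.4%},  expected loss ~ {loss_per_kB:.1f} bits per kB of data")
[/code]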

  8. I've just realized that we can use the huge freedom in choosing the ANS coding functions to improve latency - we can arrange that if the forbidden symbol occurs then, assuming there was only a single error, we are sure it was among the bits used to decode this symbol.

    We may still have to go back to previous symbols, but only if there were at least 2 errors among these bits - an order of magnitude less probable than before.

    The other advantage is that if we try to verify a wrong correction by decoding further, a single error in a block will automatically tell us that the correction was wrong. There could be 2 errors, but that is much less probable, and we can check it much later.

     

    The trick is that the forbidden symbol usually dominates the coding tables, so we can arrange that if some transferred bits give an allowed symbol, then every sequence differing in one bit (Hamming distance 1) gives the forbidden symbol (a toy sketch of this placement is at the end of this post).

     

    So for the initialization we choose the numbers of appearances of the allowed symbols and have to place them somehow.

    For example: take an unplaced symbol, place it in a random unused position (using a list of unused positions), and place the forbidden symbol on each state differing in one of 'some' last bits.

    This 'some' is a bit tricky - it has to work assuming that previously only allowed symbols were decoded, but it could have been any of them.

    If we are not doing compression - all of them equally probable - this 'some' is -lg(p_i) plus or minus 1: plus for high states, minus for low ones.

     

    Some states should remain unused after this procedure. We can fill them with forbidden symbols or continue the above procedure, inserting more allowed symbols.

    This random initialization still leaves huge freedom of choice - we can use it to additionally encrypt the data, using a random generator initialized with the key.

    If we want data correction only, we can use the fact that in this procedure many forbidden symbols get marked a few times - the more of them, the smaller the output file ... with slightly smaller but comparable safety.

    So we could consciously choose some good schemes, maybe even ones that use Hamming distance 2 (or greater) - then 3 errors would have to occur to force us back to the previous symbol.

     

    For example, the 4+3 scheme seems perfect: we transfer 7 bits on average, and for every allowed symbol there occur 7 forbidden ones.

    For some high states like 111******** (the stars are the transferred bits) we have to place 8 forbidden symbols, but for low ones like 10000****** we can place only six.

    Some of the forbidden states will be marked a few times, so we should carry out the whole procedure and possibly use slightly fewer (or more) allowed symbols.
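
    A toy sketch of the placement idea only (not a working ANS coder; the 8-bit table size and names are assumptions for illustration): place allowed symbols in random free slots and mark every state at Hamming distance 1 from them with the forbidden symbol.

[code]
import random

N_BITS = 8                       # toy decoding table indexed by 8-bit states
FORBIDDEN, FREE = -1, None
table = [FREE] * (1 << N_BITS)

def neighbors(state):
    """States differing from `state` in exactly one bit."""
    return [state ^ (1 << b) for b in range(N_BITS)]

def place_symbol(symbol):
    """Put `symbol` in a random free slot and forbid its Hamming-1 neighbors."""
    free_slots = [s for s, v in enumerate(table) if v is FREE]
    if not free_slots:
        return False
    s = random.choice(free_slots)
    table[s] = symbol
    for nb in neighbors(s):
        if table[nb] is FREE:
            table[nb] = FORBIDDEN
    return True

random.seed(0)
placed = 0
while place_symbol(placed):      # keep inserting allowed symbols while there is room
    placed += 1

print("allowed symbols placed :", placed)
print("forbidden states       :", sum(v == FORBIDDEN for v in table))
[/code]

    By construction, any single bit flip of a state holding an allowed symbol lands on the forbidden symbol, which is the property used above to detect an error immediately.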

  9. We can use the entropy coding property of ANS to make the above process quicker and to distribute redundancy really uniformly:

    to create an easily recognizable pattern, instead of inserting a '1' symbol regularly, we can add a new symbol - the forbidden one.

    If it occurs, we know that something went wrong - the nearer, the more probable.

     

    Let's say we use symbols with some probability distribution (p_i), so on average we need H = -sum_i p_i lg p_i bits/symbol.

    For example, if we just want to encode bytes without compression, we can treat them as 256 symbols with p_i = 1/256 (H = 8 bits/symbol).

     

    Our new symbol will have some chosen probability q. The nearer to 1 it is, the larger the redundancy density we add and the easier it is to correct errors.

    We have to rescale the rest of the probabilities: p_i -> (1-q) p_i.

    In this way the size of the file increases r = (H - lg(1-q))/H times.

     

    Now if we get the forbidden symbol while decoding, we know that:

    - with probability q, the first not-yet-corrected error occurred in the bits used to decode the last symbol,

    - with probability (1-q)q it occurred in the bits used while decoding the previous symbol,

    - with probability (1-q)^2 q ...

     

    The probability of successive cases drops exponentially, especially if (1-q) is near 0.

    But the number of required tries also grows exponentially.

    Observe, however, that for example the number of possible placements of 5 errors in 50 bits is only about 2 million - it can be checked in a moment.

     

    Let's compare it to two well-known data correction methods: Hamming 4+3 (to store 4 bits we use 3 additional bits) and tripling each bit (1+2).

    Take the simplest error distribution model - for each bit the probability that it is flipped is constant, let's say e = 1/100.

    The probability that a 7-bit block has at least 2 errors is

    1 - (1-e)^7 - 7e(1-e)^6 =~ 0.2%

    For a 3-bit block it's about 0.03%.

    So for each kilobyte of data we irreversibly lose about 4*4 = 16 bits with Hamming 4+3 (roughly 4 failed blocks, 4 data bits each) and 2.4 bits with bit tripling.

    We see that even for seemingly well-protected methods we lose a lot of data because of pessimistic cases.

     

    For ANS-based data correction:

    4+3 case (r = 7/4) - we add the forbidden symbol with probability q = 1 - 1/2^3 = 7/8, and each of the 2^4 = 16 symbols has probability 1/16 * 1/8 = 1/128.

    In practice ANS works best if the lg(p_i) aren't integers, so q should (though not necessarily) be not exactly 7/8 but something close to it.

    Now if the forbidden symbol occurs, with probability about 7/8 we only have to try flipping one of the (about) 7 bits used to decode this symbol.

    With 8 times smaller probability we have to try the 7 bits of the previous one... With much smaller probability, depending on the error density model, we should try flipping some pairs of bits ... and even extremely pessimistic cases look correctable in reasonable time.

    For the 1+2 case (r = 3), the forbidden symbol has probability about 3/4, and '0', '1' have 1/8 each.

    With probability 3/4 we only have to correct one of 3 bits ... with probability 255/256 one of 12 bits ... (see the sketch at the end of this post).

     

    -----

     

    There is a problem - in practice the coding/decoding tables should fit into cache, so we can use at most about a million states.

    While trying thousands of combinations during correction, we could accidentally reach a correct-looking state with a wrong correction - a few bits would be corrected the wrong way and we wouldn't even notice.

    To prevent this we can, for example, use two similar ANS stages - the first creates bytes and the second converts its output into the final sequence.

    The second stage would get uniformly distributed bytes, but ANS itself creates some small perturbations, so it will work fine.

    Thanks to this the number of states grows to the square of the initial one, reducing this probability by a few orders of magnitude at the cost of doubled time requirements.

    We could use some checksum to confirm the correction ultimately.
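
    A minimal sketch of the numbers used above (q, the size increase r and how far back the first error is likely to be) for the 4+3 and 1+2 examples:

[code]
from math import log2, comb

def rate_increase(H, q):
    """r = (H - lg(1-q)) / H : factor by which the encoded file grows."""
    return (H - log2(1 - q)) / H

for name, H, q, bits_per_symbol in [("4+3", 4, 7/8, 7), ("1+2", 1, 3/4, 3)]:
    r = rate_increase(H, q)
    p_last = q                              # error within the last decoded symbol
    p_last4 = 1 - (1 - q)**4                # error within the last 4 decoded symbols
    print(f"{name}: r = {r:.2f},  P(last symbol) = {p_last:.3f},  "
          f"P(last 4 symbols, {4 * bits_per_symbol} bits) = {p_last4:.5f}")

# Scale of the exhaustive search mentioned above: placements of 5 errors in 50 bits
print("C(50,5) =", comb(50, 5))
[/code]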

  10. A first approximation of a free electron in a conductor is a plane wave.

    So shouldn't there be more analogies with optics?

    Remember that a single electron can go through two slits at the same time...

     

    Photons interact with local matter (electrons/photons), which results (in a first approximation) in a complex coefficient (n) - the refractive index.

    Its imaginary part describes absorption - corresponding to resistance in a conductor.

    Its real part corresponds to phase velocity/wavelength - is there an analogy in free-electron behavior?

     

    Different conductors have different local structure, electron distributions, etc. - so maybe they also differ in refractive index...

    If so, there should be more effects from optics, like partial internal reflection, interference ... that we could use in practice.

    I know - electrons, unlike photons, interact with each other - so electron waves should quickly lose their coherence.

    But maybe we could use such quantum effects over short distances in crystals?

     

    Or maybe in one dimension - imagine, for example, a long (-CH=CH-CH=CH- ...) molecule.

    Its free electrons should behave like a one-dimensional plane wave.

    Now exchange the hydrogen for, say, fluorine (-CF=CF-) - it should still be a good conductor, but the behavior of the electrons should be somewhat different ... shouldn't it have a different refractive index?

    If so, (-CF=CH-), for example, should have an intermediate one...

     

    What for?

    Imagine, for example, something like an anti-reflective coating from optics:

    http://en.wikipedia.org/wiki/Anti-reflective_coating

    Let's say: a thick layer of higher-refractive-index material and a thin one of lower.

    The destructive interference in the thin layer happens only from the anti-reflective side (the thin layer) - shouldn't it reflect fewer photons/electrons than the other side?

    If we tune the reflective layer to the dominant thermal energy of photons/electrons, shouldn't it spontaneously create a gradient of densities?

    For example, to change heat energy into electricity...

  11. A Maxwell's demon is something that spontaneously ('from nothing') creates a gradient of temperature/pressure/concentration - reducing entropy.

    It doesn't have to be perfect: if one side of the mirror were just a bit more likely to reflect photons, it would enforce a pressure gradient.

     

    The slightest pressure gradient it spontaneously created could be used to produce work (from energy stored in heat).

    For example, we could connect both parts so that their pressures constantly equilibrate.

    Through this connection the direction from higher to lower pressure would dominate, which we could use to produce work (from heat) - for example by placing there something like a water wheel made of mirrors.

     

    -----

     

    I completely agree that we usually don't observe entropy reductions, but maybe that's because such reductions usually have extremely low efficiency, so they are just imperceptible, overshadowed by the general entropy increase...?

     

    The 2nd law is a statistical, mathematical property of a model with assumed physics.

    But it was proven for extremely simplified models!

    And even for such simplified models an approximation was used - by introducing functions like pressure and temperature we automatically forget about microscopic correlations - it's a mean-field approximation.

    Maybe these ignored small-scale interactions could be used to reduce entropy...

    For example, thermodynamics assumes that energy quickly equilibrates with the environment ... but we have e.g. ATP, which stores its energy in a much more stable form than the surrounding molecules and can be converted into work...

     

    ------------------------------------------

     

    I apologize for the two-way mirror example; I generally feel convinced now that they work only because of the difference in the amount of light - the effect seen when looking at dark glasses could be explained, for example, by their curvature.

    When I was thinking about it, I had in mind the picture of destructive interference from an anti-reflective coating.

     

    But let's look at such a coating...

    http://en.wikipedia.org/wiki/Anti-reflective_coating

    Let's say: a thick layer of higher-refractive-index material and a thin one of lower.

    The destructive interference in the thin layer happens only from the anti-reflective side (the thin layer) - shouldn't it reflect a slightly smaller number of photons than the other side? ... creating a pressure gradient in a photon container - reducing entropy.

  12. Everybody has seen a two-way mirror - transparent from one side, reflective from the other ... isn't that a Maxwell's demon for photons?

    OK - it's not perfect - it absorbs some photons, increasing its own heat, and emits thermal photons - so it can stay in thermal equilibrium with the environment.

     

    Let's take a container for photons (covered with mirrors); now place a two-way mirror, in thermal equilibrium with the photon gas inside, dividing the container into two parts.

    The density of photons on the reflective side should be larger than on the other - so it would reduce entropy?

  13. I was thinking about the 2nd law of thermodynamics and crystallization.

    During this process we get higher ordering (lower entropy), but the cost is the energy difference between a free and a bound molecule - this energy is usually just dispersed around, increasing the overall temperature.

    But what if we didn't allow this energy to escape randomly ... for example, storing it as chemical energy of some molecule, like ATP ...

     

    That led me to mechanisms that could allow organisms to feed directly on heat (not using thermal infrared):

     

    Let's say we have two molecules (A, B) which have a larger total energy when separated (E1) than when bound (E2 < E1).

    Additionally, there is an energy barrier between these states.

    Now, when they are bound in solution, their thermal energy statistically sometimes exceeds the barrier and they split (reducing the temperature!).

    But to bind back, they not only have to reach the barrier, they also have to find each other in the solution - which is not very likely, so statistically the concentration of AB is relatively small compared to the concentration of the separated molecules.

     

    Now we need a catalyst which reduces the barrier but then uses the energy difference, for example, to bind ADP and phosphate.

    For example, it catches all the required molecules and uses energy stored in its own structure to bring A and B closer, making them reach the top of the barrier, then uses the energy they release to bind ADP + P and restore its own energy.

     

    I know - this enzyme would work in both directions, but the concentration of AB should be small, so the desired direction should dominate.

    Is there any problem here?

  14. I see how to make the required nanodiodes for nanoantennas for thermal photons - they should exploit the fact that after absorbing a photon, an electron is excited and only slowly equilibrates this additional energy with its environment.

    So if we place something which needs a high-energy electron nearer one side of the antenna, it is more likely that the electron jumps over this threshold.

     

    So the whole electricity generator should look like:

    -conductor-threshold-antenna-conductor-threshold-

    and electrons will more likely go left.

    If the antennas are printed, the above threshold could be just a narrowing.

  15. When I first came across the heat-to-sound article, it said that it needs pure heat ... but when I read the physorg article I linked, I finally saw that it uses a temperature gradient...

     

    But what about nanoantennas?

    They use heat energy - thermal infrared - to force electrons to move.

    The problem is whether we can change it into their regular movement - we would need diodes that would act as something like a Maxwell's demon for electrons...

    I think it's possible, because temperature describes the average energy of molecules. But their electrons behave completely differently - they are much faster, have different energies, move along a scaffolding made of molecules ...

    There are two different thermodynamics there! Of course there are correspondences/interactions between them, but there is also some independence we may be able to use...?

     

    A simple counterexample to the 2nd law using thermal photons:

     

    Imagine an empty tube whose internal surface is covered with a perfect mirror. Now near one of its ends place two separators - a reflective one at the end of the tube and a transparent one towards its middle.

    Place hot gas between the separators. It is thermally isolated, but it produces thermal photons. The only way a photon can escape is through the other end of the tube, so it would work like a jet engine - because the photons carry momentum in one direction, the tube has to gain momentum in the other. And we have a stream of photons we can use to do work somewhere else.

    The above example uses the fact that although the kinetic energy of molecules behaves randomly, each one has a specific movement/oscillation whose energy can be changed into an ordered one - the electromagnetic oscillation of a photon.

    You will say that the problem is the perfect mirrors, but they are just a perfect insulator for the thermodynamics of photons.

  16. I was recently interested in some news that it's possible to extract energy from pure heat. I've read about two ways: using a sound resonator or absorbing thermal infrared radiation:

    http://www.physorg.com/news100141616.html

    http://www.physorg.com/news137648388.html

    Another problem is, for example, that during spontaneous crystallization entropy goes in the 'forbidden' direction:

    http://www.garai-research.com/research%20statement/Entropy/Entropy.htm

    ...

     

    It would be nice to pinpoint the simplifications in a theory that looks as general as thermodynamics.

    One source of them can be the simplified physics behind the thermodynamical model, e.g.:

    - it describes molecules, while we can say that their electrons live in a completely different world - on a scaffolding made of molecules; their energies don't correspond straightforwardly,

    - thermodynamics usually ignores thermal radiation and its energy.

     

    But maybe there are deeper problems - thermodynamics usually ignores internal structure - for example, of two states with the same energy, one can be more easily accessible...

    What do you think about it?

  17. The standard approach to fighting viruses is to use antibodies which search for some specific place on the surface, but the problem is that the capsid varies rapidly.

    What usually doesn't change is that the virus still targets the same molecules on the cell's surface - maybe we should try to use that.

     

    For example, create an empty liposome - water + phospholipid - carrying specific molecules, for example CD4 and some chemokine receptors for HIV.

    Now if the virus takes the bait, it will enter inside and lose its capsid - even if the liposome is later destroyed, it should no longer be a threat, or at least a much smaller one than when it was swimming in its capsid.

    Optionally we could also add inside, for example, a reverse transcriptase inhibitor or some RNA-cutting enzyme.

     

    Imagine such a stealth liposome with CD4 - it should swim through the veins for a few hours catching viruses, then be consumed together with its content by the immune system - a perfect scenario.

    And remember that every HIV virion carries some version of gp120 - it should take the bait...

     

    Update:

    I was just told on a different forum that research on something similar - using erythrocytes instead of liposomes - is already in progress:

    http://www.thescienceforum.com/viewtopic.php?p=140400

  18. About vibration absorption ... myosin was only an example - its functions are too specialized, too complicated to be reversed in practice.

    But imagine a protein which is connected to the cytoskeleton (for example at crossings of filaments) and catches ADP and phosphate. Now if the cell vibrates, the movement of the cytoskeleton is transferred to the protein, which can force the molecules to bind into ATP.

    I'm not saying it's simple, but it looks possible.

    And if so, mother nature is an extremely inventive creature :)

    Look how sophisticated the machinery constructed to use energy from light is...

     

    About using heat - I agree that it looks even less probable...

    At first sight it seems to go against classical thermodynamics - converting pure heat into a different kind of energy. But that theory is a strong simplification. For example, hot iron emits photons. Heat energy is random microscopic movement - a noise. The trick is to use a resonance to gather the surrounding frequencies and convert them into coherent movement - light, sound ... Lately it was shown that this can be done - changing heat into sound, and then we can use, for example, the piezoelectric effect to convert it into electricity:

    http://unews.utah.edu/p/?r=111907-2

    The question is whether it can be done at the microscopic level, using proteins and temperatures below 120 C - for example, a molecule which can use resonance to bind ADP and phosphate.

    If so, evolution should have found it...

     

    We have had plenty of microbes deep in the Earth for billions of years - there were/are some sources of chemical energy, but generally they are starving. Scientists have trouble explaining their extremely low metabolism:

    http://www.sciencemag.org/cgi/content/full/sci;276/5313/703

    Psychrophiles also have an extremely low metabolism - but that's because of the cold: all reactions are slowed down. It's not because of a lack of energy - they usually have access to it.

    We are talking about thermophiles, which should have consumed most of the available chemical energy sources over the last billions of years, with new ones appearing extremely rarely.

    Remember that energy is needed not only for metabolism and reproduction ... it's necessary to sustain the structure of the organism, to fight increasing entropy - especially at high temperatures!

    Their life would be much easier if they were able to feed on more than chemical energy, especially when there is plenty of energy around in heat and tectonic vibrations...

  19. Biology offers many kinds of energy conversion - for example solar into ATP and later glucose. We can now take whole organisms and, e.g., burn them to gain energy (biofuels).

    But remember where natural gas (and other fossil fuels) comes from...

    Biology knows these metabolic pathways!

    Maybe we could take, for example, a unicellular photosynthesizing organism and put into it the genes of the required proteins?

    Just to make it work, then take a few dozen (hundred) generations of artificial selection to create cheap, efficient(?) living solar panels, from which we could just pump e.g. methane...

     

    About different kinds of energy ... remember that on the microscopic scale chemical reactions are reversible - the dominant direction depends on the parameters (like H+ ATPase).

    We know we have mechanisms to produce heat using ATP. Now imagine one with parameters changed so that it needs a higher ATP density than is available around - above some temperature it should work in the opposite direction: change ADP->ATP using heat!

    We have plenty of microbes a kilometer below... what do they eat? The chemical energy of minerals? That should be close to exhausted by now...

    Maybe they can feed on geothermal energy?

    To check this, we should see whether water with, e.g., Pyrolobus fumarii cools down faster than it should. If so, a bit of artificial selection and maybe we could produce natural gas from surplus thermal energy in a factory.

     

    Another type of energy is vibration. Myosin can change ATP into movement. Again, with changed parameters it should be able to work in the opposite direction - if it were attached to the cytoskeleton, it should produce energy from vibrations.

    What for? For example, to actively absorb them - for example, to reduce turbulence in water... we should look for this in fish and aquatic mammals.

    Thanks to this we could produce active sound/vibration dampers which generate energy...

  20. OK - latency is not a strong point of the scheme I've presented - simple errors can be corrected quickly, but large ones may need a lot of time...

    There is also the problem of losing a large block of data... with ANS it's a bit problematic, but we can actually restart decoding after it.

    Unfortunately we are, of course, losing its content.

    To protect against the scenario of losing whole packets, we can, for example, place the first (let's say) 100 bits as the first bit of the first 100 packets, the next 100 bits as the second bit, and so on... now we have to buffer these 100 packets before we can start decoding (a small sketch of this interleaving is below).

     

    By blocking, I meant placing information in completely independent blocks (e.g. 7-bit blocks in Hamming) - thanks to this we can easily ensure short, constant latency, but we cannot 'transfer the surpluses of redundancy' to cope with fluctuations of error density, because each block has independent redundancy.

    I agree that because of its variable latency it's rather impractical for telecommunication or memories, but it may be useful, for example, for archives, which just have to survive a long time...

     

    And maybe there are faster methods which allow such redundancy transfers?

    Thanks to this, we could use a smaller amount of redundancy - not according to the pessimistic error density, but only a bit above the average density, which is usually a few orders of magnitude smaller...
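
    A minimal sketch of the bit interleaving mentioned above (the packet count of 100 and the packet size are illustrative assumptions): bit i of the stream goes into packet i mod 100, so losing one whole packet turns into isolated single-bit erasures spread evenly over the stream.

[code]
import random

N_PACKETS = 100            # packet count from the description above
PACKET_BITS = 50           # assumed bits per packet, so the stream has 5000 bits

def interleave(bits):
    """Bit i of the stream goes into packet i % N_PACKETS."""
    packets = [[] for _ in range(N_PACKETS)]
    for i, b in enumerate(bits):
        packets[i % N_PACKETS].append(b)
    return packets

def deinterleave(packets):
    """Rebuild the stream; a lost packet (None) becomes spread-out erasures."""
    stream = []
    for pos in range(PACKET_BITS):
        for p in packets:
            stream.append(None if p is None else p[pos])
    return stream

random.seed(1)
bits = [random.randint(0, 1) for _ in range(N_PACKETS * PACKET_BITS)]
packets = interleave(bits)
packets[42] = None                         # simulate losing one whole packet
received = deinterleave(packets)

erasures = [i for i, b in enumerate(received) if b is None]
gaps = [b - a for a, b in zip(erasures, erasures[1:])]
print("erasures:", len(erasures), " minimal gap between them:", min(gaps))
[/code]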

  21. It would require errors to be scattered across multiple blocks...

    But it still can happen... and it's slow, and maybe we could use less redundancy to achieve similar safety...

     

    We are adding a constant density of redundancy, but errors don't have to come with constant density - it can fluctuate, sometimes above the average, sometimes below.

    If it is above, it can exceed the safe amount that our redundancy can cope with.

    If it is below, we've placed there more redundancy than was required - we waste some capacity.

    I'm saying that we could transfer these surpluses to help with the difficult cases!

     

    To do this, we shouldn't separate the information by placing it in blocks.

    It has to be one stream that can tell us that something has just gone wrong - we don't see the pattern (redundancy) we've placed there - and we have to try to fix the neighborhood of this point until the pattern emerges again as it should.
