Everything posted by Duda Jarek

The paradox of Hawking radiation: is matter infinitely compressible?
Before transforming into a black hole, it was a neutron star. I am asking about the starting moment of this transformation: when the event horizon has just appeared in the center of the neutron star. Then it evolved until it finally reached the surface; from that moment we can call it a black hole. As the radius of the event horizon is proportional to the mass inside, and mass is proportional to density times the third power of radius, the density of matter at the moment the event horizon starts forming in the center had to reach infinity first. But if baryons are destructible, they should not survive this infinite compression - they should be destroyed earlier, creating pressure inside and temporarily preventing the collapse ...
Sure, Hawking radiation does not directly violate baryon number conservation, but it does imply that destruction of baryons is possible. If so, it should start happening before reaching the infinite density in the center of the neutron star that is required to start forming the event horizon ...
Swansont, a huge number of baryons form a star, which collapses ... and then "evaporates" into massless radiation. Lots of baryons in the beginning ... poof ... none at the end - how is that not baryon destruction? Maybe they have just moved to an alternative dimension or something? MigL, so is baryon number ultimately conserved? Could more baryons than antibaryons have been created in baryogenesis? Can baryons "evaporate" through Hawking radiation?
I am not asking about some specific theory, but about reality. Black hole evaporation requires that baryons are destructible, while the formation of an event horizon requires reaching infinite density in the center of a neutron star - i.e. that matter can be infinitely compressed without destruction of its baryons. A contradiction.
Ok, it is not exactly a paradox, but rather a self-contradiction: if baryons are not indestructible, they should be destroyed before reaching the infinite density required to start forming the event horizon.
Swansont, I don't know where exactly these baryons are destroyed. The fact is that initially there were baryons, and finally there are none - as in proton decay or baryogenesis, the baryon number is not conserved. As for the "massless radiation", what is usually expected is some EM radiation (anyway, I don't know of any massless baryons).
The hypothetical Hawking radiation means that a set of baryons can finally be transformed - "evaporate" - into massless radiation: that baryons can be destroyed. It requires that this matter was initially compressed into a black hole. If baryons can be destroyed in such extreme conditions, the natural question is: what is the minimal density/heat/pressure required for such baryon number violation? (or, in hypothetical baryogenesis, for creating more baryons than antibaryons). While a neutron star collapses into a black hole, the event horizon grows continuously from a point in the center, like in this picture from: http://mathpages.com/rr/s702/702.htm As the radius of the event horizon is proportional to the mass inside, the initial density of matter had to be infinite. So if baryons can be destroyed, it should happen before the formation of the event horizon starts - releasing huge amounts of energy (the complete mc^2), pushing the core of the collapsing star outward and preventing the collapse. And finally these enormous amounts of energy would leave the star, which could result in the currently not understood gamma-ray bursts. So isn't it true that if Hawking radiation is possible, then baryons can be destroyed, and so black holes shouldn't form? We usually consider black holes just through the abstract stress-energy tensor, not asking what microscopically happens there, behind these enormous densities ... so in a neutron star nuclei join into one huge nucleus, in a hypothetical quark star nucleons join into one huge nucleon ... so what happens there when it collapses further? Quarks join into one huge quark? And what then, going further toward infinite density in the central singularity of the black hole, where light cones are directed toward the center? The mainly considered baryon number violation is proton decay, which is required by many particle models.
It cannot be found experimentally in huge room-temperature pools of water, but hypothetical baryogenesis and Hawking radiation suggest that maybe we should rather search for it in more extreme conditions? While for charge/spin conservation one can see that the surrounding EM field (at any distance) guards these numbers through e.g. the Gauss theorem, what mechanism guards baryon number conservation? If it is just a potential barrier, baryons should be destroyed at high enough temperature ... Is matter infinitely compressible? What happens with matter during compression into a black hole? Is baryon number ultimately conserved? If yes, why does the Universe have more baryons than antibaryons? If not, where should we search for, or expect, such violation? If proton decay is possible, maybe we could induce it by some resonance, like shooting the proper gammas into the proper nuclei? (getting the ultimate energy source: complete mass → energy conversion) Is/should proton decay be considered in neutron star models? Would it allow them to collapse to a black hole? Could it explain the not yet understood gamma-ray bursts?
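To make the "density had to reach infinity" step quantitative, here is a quick numerical sketch (my own illustration, not from the original posts): assuming the Schwarzschild relation r = 2GM/c^2 and a uniform ball, the mean density needed to fit a mass inside its own horizon of radius r scales as 1/r^2, so it diverges as r → 0.

```python
import math

# From r = 2GM/c^2 we get M = r c^2 / (2G), so the mean density of a
# uniform ball filling its own horizon is
#   rho = M / (4/3 pi r^3) = 3 c^2 / (8 pi G r^2)  ~  1/r^2.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def critical_density(r):
    """Mean density (kg/m^3) needed for a horizon of radius r (meters)."""
    return 3 * c**2 / (8 * math.pi * G * r**2)

# A solar-mass horizon (r ~ 2950 m) already needs beyond nuclear density:
print(f"{critical_density(2950):.3e} kg/m^3")   # ~1.8e19 kg/m^3
# a horizon starting from r = 1 m would need far more:
print(f"{critical_density(1.0):.3e} kg/m^3")    # ~1.6e26 kg/m^3
```

The 1/r^2 scaling is the whole point: a horizon nucleating at arbitrarily small r requires arbitrarily large local density.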

If someone is interested in this subject, I have recently created a presentation with new pictures and explanations: https://dl.dropboxusercontent.com/u/12405967/qrsem.pdf Picture-like QR codes are only one of many new possibilities of these extensions of the Kuznetsov-Tsybakov problem, where the receiver doesn't need to know the constraints (picture/music/noise characteristics/...). For example, there are plenty of new steganographic applications:
- if we would like to encode information by exchanging the least significant bits (LSB) of a picture, but (to be more discreet) would like to change only the least possible number of bits, and the receiver doesn't know the original picture,
- if we cannot just manipulate LSBs because there is only e.g. 1 bit/pixel, we can recreate local grayness by a statistical distribution like in the Lena-like codes above. We could also use more advanced dithering methods: getting better picture quality, but at the cost of reduced data capacity, and the algorithm becomes more complex (the probability for a pixel should depend on its already chosen neighbors). For example, inkjet printers use just 3 colors of nearly identical dots; some basic information like the serial number is already hidden there, but in a casually looking print we could hide a really huge amount of information in precise dot positioning (it would require a microscopic-precision scanner to decode),
- if there is a device producing some noise, we could send information by faking this noise - Kuznetsov-Tsybakov would be required e.g. if the noise characteristics vary in time,
- there are also new possibilities of hiding information in sound, for example to reduce EM noise and target only those in e.g. a given public place: some information for your smartphone could be hidden in the music you hear ...
What other applications could you think of?
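As a toy illustration of the plain LSB embedding mentioned in the first point (a minimal sketch only - the real Kuznetsov-Tsybakov constructions additionally free the receiver from needing to know the cover picture):

```python
# Hide a byte string in the least significant bits of 8-bit grayscale
# pixels. Plain LSB: the receiver must know the embedding order
# (here: sequential, LSB-first within each message byte).

def embed(pixels, message):
    """Overwrite the LSB of the first pixels with the message bits."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(pixels), "message too long for this cover"
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b   # clear LSB, then set it to b
    return out

def extract(pixels, length):
    """Read back `length` bytes from the pixel LSBs."""
    bits = [p & 1 for p in pixels[: 8 * length]]
    return bytes(
        sum(bits[8 * j + i] << i for i in range(8)) for j in range(length)
    )

cover = [120, 121, 119, 200, 201, 199, 50, 51] * 10   # toy "image"
stego = embed(cover, b"hi")
print(extract(stego, 2))   # b'hi'
```

Each pixel changes by at most 1 gray level, which is the "discreetness" the post refers to; minimizing the *number* of changed bits when the receiver doesn't know the original is exactly where the Kuznetsov-Tsybakov machinery comes in.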

I've recently realized that this method is extremely close to the basis of lossy compression, the rate-distortion problem - they are kind of dual: while searching for the code, we just need to switch the essential bits with the discarded ones. So the rate for the rate-distortion application is "1 - rate" of the original one (last version of http://arxiv.org/abs/1211.1572 ). For example, to store the distorted/halftone plane pictures above, we need correspondingly 1/8, 1/4 and 1/2 bits/pixel. The intuition is that while the bitmap requires e.g. 1 bit/pixel, only such a small part of each bit stores the "visual aspect", while the rest can be used to store a message (but doesn't have to be). Correction Trees are perfect for such purposes - they can cheaply work within about 1% of the theoretical channel limit. Additionally, there appear probably new(?) applications:
- extremely cheap storing of halftone pictures, like about 0.18 bits/pixel for Lena above (about 6 kB),
- a dual version of the Kuznetsov and Tsybakov problem (it gets out of the standard nomenclature of rate distortion): we want to send n bits, but only k<n of them are fixed by us - the receiver doesn't know which - and the rest of the bits can be random; it turns out it is enough to send a bit more than k bits. Where could we use it - e.g. extremely cheap storing of halftone pictures?
ps. Some more developed discussion: http://forums.devshed.com/devshedlounge26/embeddinggrayscalehalftonepicturesinqrcodes933694.html
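The "1 - rate" duality can be sanity-checked against the textbook binary rate-distortion function R(D) = 1 - H(D) (a sketch, my own illustration: allowing about 11% of the pixels to flip leaves roughly 1/2 bit/pixel for the visual aspect, freeing the other half for a message):

```python
import math

def H(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def rate_distortion(D):
    """R(D) = 1 - H(D): bits/symbol needed to describe a uniform binary
    source within Hamming distortion D (zero for D >= 1/2)."""
    return max(0.0, 1.0 - H(min(D, 0.5)))

# Allowing 11% of pixels to differ, ~0.5 bits/pixel describe the picture;
# the remaining ~0.5 bits/pixel of the bitmap can carry hidden data.
print(round(rate_distortion(0.11), 3))   # ~0.5
```

Note the pleasing symmetry with channel coding: 11% is also the BSC noise a rate-1/2 code can in principle fully repair, which is the duality the post describes.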

What is this 512x512 picture? A bad-quality Lena indeed, but the trick is that it's black & white - it can be directly seen as a 512x512 bits = 32 kB bit sequence. It turns out that making the bit sequence "look like Lena" reduces the capacity only to about 0.822, which is about 26 kB in this case - the visual aspect costs only about 6 kB. It has rather too high a resolution for practical applications, but here are examples of lower-resolution codes looking like a chosen black and white picture: for example, the central noisy code contains 800*3/4 = 600 bytes - making it look like the picture costs only 200 bytes. Here is a fresh paper about obtaining it (a generalization of the Kuznetsov and Tsybakov problem, for constraints known to the sender only): http://arxiv.org/abs/1211.1572 What do you think of replacing today's QR codes with nicer looking and more visually descriptive ones? What other applications could you think of for this new steganography, for which two colors are finally enough?

How is charge distributed in a nucleon or nucleus?
Thanks, yes indeed - I had this value in my mind and forgot that it included electron production ... The required positive charge inside the neutron to make it stable turns out to be known: http://www.terra.es/personal/gsardin/news13.htm
More interesting nuclear physics questions (waiting for more):
- how does the short-range strong interaction explain halo nuclei ( http://en.wikipedia.org/wiki/Halo_nucleus )? Like lithium-11 (8.75 ms half-life!), which according to a CERN article ( http://cerncourier.com/cws/article/cern/29077 ) should, compared to Pb-208, look like that:
- the mass difference between tritium (3.0160492 u) and the helium-3 nucleus (3.0160293 u) is only 18.6 keV - how can this decay be a beta decay if the difference between the neutron and proton mass is 782 keV?
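The 18.6 keV figure can be checked directly from the quoted atomic masses (a quick sketch; the conversion factor 1 u ≈ 931494 keV/c^2 is the standard value):

```python
U_IN_KEV = 931494.1    # keV per atomic mass unit

m_tritium = 3.0160492  # u, as quoted above
m_helium3 = 3.0160293  # u

# Energy released in T -> He-3 beta decay, from the mass difference:
q_value = (m_tritium - m_helium3) * U_IN_KEV
print(f"{q_value:.1f} keV")   # ~18.5 keV, matching the quoted 18.6 keV
```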
alpha2cen, again, the stability of the deuteron is obvious in the above picture (the neutron requires charge for stability, so the proton shares its own with the neutron), but since single nucleons can barely be calculated using lattice QCD, understanding deuteron stability that way might be quite distant.
alpha2cen, while it is obvious from the picture above, it seems far from trivial using QCD: I've just found four-year-old news about calculating the nucleon mass on Blue Gene using lattice QCD: 936 MeV for both, with uncertainties of 22 or 25 MeV: http://physicsworld.com/cws/article/news/2008/nov/21/protonandneutronmassescalculatedfromfirstprinciples ps. please remove unnecessary quotes.
Why does a neutron require charge to become stable? Why does a nucleus require charge, despite the Coulomb repulsion? How is this charge distributed inside a nucleon (quarks)? A nucleus? Large nuclei are believed to behave like a liquid - are nucleons freely swimming there, or maybe they are somehow clustered, like into neutron-proton(-neutron) parts? I'm thinking of a topological soliton model of particles (concrete field structures for particles, with quantum numbers as topological charges), which doesn't restrict only to mesons and baryons like the Skyrme model, but is an attempt at a more complete theory: which family of topological solitons would correspond to our whole particle menagerie. An extremely simple model seems to qualitatively fulfill these requirements: just a real symmetric tensor field (like the stress-energy tensor), but with a Higgs-like potential preferring some set of (different) eigenvalues - it can be imagined as an ellipsoid field: eigenvectors are axes, eigenvalues are radii. Now e.g. the simplest charges are hedgehog configurations of one of the three axes, and topology says some additional spin-like singularity is required (hairy ball theorem) - we get three families of leptons etc. This is an all-or-nothing approach: a single discrepancy and it goes to trash. Instead, it seems to bring succeeding simple answers, like for the above questions. The basic structures there are vacuum analogues of the Abrikosov vortex: curve-like structures around which (the "quantum phase") two axes make e.g. a pi rotation for spin ½. The axis along the curve can be chosen in three ways - call them electron, muon or tauon spin curves correspondingly. Now baryons would be the simplest knotted structures - like in the figure, the two spin curves need to be of different types. The loop around enforces some rotation of the main axis of the internal curve - if it were 180 degrees, this axis would create a hedgehog-like configuration, which corresponds to +1 charge.
Locally, however, such fractional rotation/charge may appear, but finally the sum has to be an integer. This picture explains why charge is required for baryon stability, and that for this purpose a proton can share its charge with a neutron. Lengthening the charge requires energy, which makes two nucleons attract in the deuteron - by this strong, short-ranged force. Here is a fresh paper about this model: http://dl.dropbox.com/u/12405967/elfld.pdf To summarize, this simple model suggests that:
- the neutron has a quadrupole moment (!),
- nucleons cluster into "np" or "npn" parts sharing charge,
- pairs of such clusters can couple, especially of the same type (stability of even-even nuclei),
- these clusters are parallel to the spin axis of the nucleus(?).
How would you answer the above questions? Do these suggestions sound reasonable? Can the neutron have a quadrupole moment? (Shouldn't it have a dipole or quadrupole moment in the quark model?)

Generally, yes. It's important that these bit sequences are uncorrelated and with P(0)=P(1)=1/2, to make them contain the maximum amount of information (it would be interesting to consider different cases...) - for example, you ask whether a value is smaller or larger than the median of a given group. A 30-bit length can sometimes be not enough - for simplicity, assume you start with an infinite sequence, then cut to the required lengths; the average length is expected to be about lg(200)+1.33. So just encoding the sequences would take about 200*(lg(200)+1.33) bits ... but you can subtract lg(200!) bits of information about their order.

Phillip, if for each box there is a practically random sequence representing its individual features, then to be distinguishing it indeed has to be about lg(n) bits long in the perfect binary tree case. The randomness makes it a bit larger - this average length becomes D_n ~ lg(n) + 1.3327. So the total amount of information of this population would be nD_n ~ n lg(n) ... but we don't care about the order of these sequences - all their orderings are in the same equivalence class, so the number of possibilities decreases n! times, and the total amount of information is decreased by lg(n!) ~ n lg(n): H_n = nD_n - lg(n!) ~ 2.77544n. The amount of information can be further reduced for degree-1 nodes: if all specimens from the population reaching a given node make the same next step (like if all tall blonds are slim in a given population), then for these nodes we don't need to store this direction, reducing the informational content by 1 bit per degree-1 node. While there are n leaves and n-1 degree-2 nodes, it turns out that the expected number of degree-1 nodes is about lg(e/2) ~ 0.44 per leaf. So after such reduction, the population requires 2.3327 bits/specimen (calculated in the last version of the paper).
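The ~0.44 degree-1 nodes per leaf can be checked by simulation (a sketch, my own illustration): in the random-trie model, a node holding m sequences splits them Binomial(m, 1/2) between its children, and it has degree 1 exactly when one side receives all m of them.

```python
import random

def degree1_nodes(m, rng):
    """Count degree-1 internal nodes in a random binary trie over m
    independent uniform bit sequences (split-based construction)."""
    if m < 2:
        return 0                      # a single sequence needs no split
    left = sum(rng.getrandbits(1) for _ in range(m))
    right = m - left
    here = 1 if (left == 0 or right == 0) else 0   # degenerate split
    return here + degree1_nodes(left, rng) + degree1_nodes(right, rng)

rng = random.Random(0)
n, trials = 2000, 30
avg = sum(degree1_nodes(n, rng) for _ in range(trials)) / trials / n
print(round(avg, 2))   # close to lg(e/2) = lg(e) - 1 ~ 0.44
```

Each such node saves 1 bit, which is exactly the 2.77544 − 0.4427 ≈ 2.3327 bits/specimen reduction described above.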

3 features of 0/1 type can distinguish a maximum of 8 entities. The expected number of features required to distinguish an individual (D_n bits) has to grow approximately as lg(n). More precisely: D_n ~ lg(n) - lg(e)/2n + 1.332746. But while calculating the entropy of the whole population, we need to subtract the information about their order: H_n = nD_n - lg(n!) ~ n(lg(n) - lg(e)/2n + 1.332746) - n lg(n) + n lg(e) - lg(2 pi n)/2 ~ 2.77544 n, where we've used the Stirling formula. ps. I've just spotted that the number of bits per element for the reduced minimal prefix tree seems to have the same decimal digits as the 1.332746 above from the D_n formula. It suggests that it is 3/2 + gamma*lg(e) ~ 2.3327461772768672 bits/element. Is it just a coincidence, or maybe there is some nice and simple interpretation??
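The suspected closed forms are easy to verify numerically (a quick sketch; gamma is the Euler-Mascheroni constant):

```python
import math

gamma = 0.5772156649015329        # Euler-Mascheroni constant
lg_e = 1 / math.log(2)            # lg(e)

d_const = 0.5 + gamma * lg_e      # the 1.332746... in the D_n formula
reduced = 1.5 + gamma * lg_e      # conjectured 3/2 + gamma*lg(e)
full = 0.5 + (1 + gamma) * lg_e   # the 2.77544 bits/element coefficient

print(round(d_const, 6), round(reduced, 6), round(full, 6))
# 1.332746 2.332746 2.775441 - so `reduced` differs from `d_const` by
# exactly 1, and full - reduced = lg(e) - 1 ~ 0.442695, which is the
# expected degree-1 node saving per leaf.
```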

Once again, it's the lower boundary - we can approach it in computer science, but in biology I totally agree that usually a much larger amount is used. On the other hand ... DNA contains much more information, but the question is the perspective we look from - I'm talking about e.g. the sociobiological level of interactions ... entropy corresponds to the complexity of the society. What matters are not genotypes but phenotypes - precisely: how other specimens react to them - does the behavior depend on which individual it is? For bacteria, not really ... what are the simplest colonies which distinguish individuals in their population?

This is only a lower boundary - in reality much more is used. But maybe a transition can be found - for example, an ant's behavior doesn't really depend on which specimen from its family it is interacting with - the entropy/complexity of their society grows, let's say, with log(n) ... Maybe there are some species where distinguishability starts, and so the complexity starts growing linearly, with e.g. the 2.77544 coefficient. If there were e.g. an essential ordering of individuals, entropy would grow faster: with log(n!) ~ n log(n). If there were essential interactions for different pairs, it would grow with n^2; if interactions within larger groups, even with 2^n ... And remember that distinguishability is weaker than ordering: if you just wrote down these sequences of minimal distinguishing features, it would grow like H_n + lg(n!) = nD_n. The 3 bits you are talking about is D_n - the average number of distinguishing bits, i.e. the shortest unique prefix. You need to subtract the ordering information. Bits carry the most information (are the most distinguishing) if they have probability 0.5 - so e.g. the first bit could say whether someone is taller than the median height ...

Even individuality is being devalued these days - to about 2.3327464 bits per element/specimen (thanks to James Dow Allen): http://groups.google.com/forum/#!topic/comp.compression/j7ieTXR14E But this is the final minimal price tag - it cannot be reduced further. Specifically, for the minimal prefix tree, a random sequence (representing the individual features of a specimen) has about 0.721 probability of being identified as belonging to the population ... so if we are interested only in distinguishing inside the population, we can afford to increase this probability up to 1. To reduce the amount of information in the minimal prefix tree, observe that if a degree-1 node appears inside the tree, all sequences from the population going through that node will certainly go in the corresponding direction - so we can save the 1 bit saying which direction it is. In the standard prefix tree, these degree-1 nodes were the places where it could turn out that an outsider does not belong to the population - removing this information raises the false positive probability from 0.721 to 1. So if for the sequences (0101.., 1010.., 1001..) the minimal prefix tree remembers (without ordering!) (0....., 101..., 100...), the reduced one remembers only (0....., 1.1..., 1.0...). This decreases the asymptotic cost from 2.77544 bits/specimen to about 2.332746.

Imagine there is a population/database/dictionary and we would like to distinguish its elements. So for each element, let us somehow encode its individual features (e.g. using a hash function) as a bit sequence - the densest way is to use a sequence of uncorrelated P(0)=P(1)=1/2 bits. We can now create the minimal prefix tree required to distinguish these sequences, like in the figure below. For such an ensemble of random trees with a given number of leaves ([math]n[/math]), we can calculate the Shannon entropy ([math]H_n[/math]) - the amount of information it contains. It turns out to grow asymptotically at an average of 2.77544 bits per element [math](\frac{1}{2}+(1+\gamma)\lg(e))[/math]. The calculations can be found here: http://arxiv.org/abs/1206.4555 Is it the minimal cost of distinguishability/individuality? How to understand it? ps. a related question: can anyone find [math]D(n)=\sum_{i=0}^{\infty} 1-(1-2^{-i})^n[/math]? Clarification: the first thought about this distinctiveness is probably the n! combinatorial term appearing while increasing the size of the system, but n! is about the order, and its logarithm grows faster than linearly. Distinctiveness is something much more subtle. It is well seen in equation (10) from the paper (which should read): [math]H_n+\lg(n!)=nD_n[/math] where [math]D_n[/math] is the average depth of leaves of such an n-leaf tree - the average number of bits distinguishing a given element. So this equation says: distinctiveness/individualism + order = total amount of information. Distinctiveness grows linearly with n (2.77544 asymptotic linear coefficient); information about their ordering grows faster: [math]\lg(n!)\approx n\lg(n)-n\lg(e)[/math].
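The series for D(n) converges quickly, so it can be evaluated directly, confirming the lg(n) + 1.3327... behavior of the average distinguishing depth (a sketch, my own illustration):

```python
import math

def D(n, terms=200):
    """Average distinguishing depth: D(n) = sum_{i>=0} 1 - (1 - 2^-i)^n.
    Terms with i >> lg(n) are negligible, so a fixed cutoff suffices."""
    return sum(1 - (1 - 2.0**-i) ** n for i in range(terms))

for n in (100, 1000, 10000):
    print(n, round(D(n) - math.log2(n), 4))   # hovers around 1.3327
```

The excess over lg(n) approaches the 1/2 + gamma*lg(e) ≈ 1.332746 constant discussed above (with a tiny known oscillation around it).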

In last week's Nature Physics there is a nice experiment about controlling in the future whether photons in the past were entangled or not: Victor chooses later (by QRNG) whether the Alice-Bob photons are correlated (left) or not (right) - so obtaining |R>|R> means that Victor more probably has chosen entanglement, |R>|L> separation. There is nonzero mutual information, so once again I don't see a reason it couldn't be used to send information? Here is a good informative article with a link to the paper: http://arstechnica.c...beforehand.ars
ps. If someone is anxious about the "conflict" of fundamental time/CPT symmetry with our 2nd-law-based intuition, it should be educative to look at a very simple model: the Kac ring. On a ring there are black and white balls, which all shift by one position each step. There are also some marked positions, and when a ball passes through one, the ball switches color. Using the natural statistical assumption ("Stoßzahlansatz") - that if there is a proportion p of such markings, then p of both the black and the white balls will change color each step - we can easily prove that it should lead to an equal number of black and white balls, maximizing the entropy ...
... on the other hand, after two complete rotations all balls have to return to their initial color - so from an 'all balls white' fully ordered state, the system would return back to it ... the entropy would first increase to the maximum and then symmetrically decrease back to the minimum. Here is a good paper with simulations about it: http://www.maths.usy...ts/kacring.pdf
The lesson is that when, on top of time/CPT-symmetric fundamental physics, we "prove" e.g. the Boltzmann H-theorem that entropy always grows ... we could take the time-symmetry transformation of this system and use the same "proof" to get that entropy always grows in the backward direction - a contradiction. The problem with such "proofs" is that they always contain some very subtle uniformity assumption, generally called the Stoßzahlansatz. If the underlying physics is time/CPT symmetric, we just cannot be sure that entropy will always grow - as for the Kac ring, and maybe our universe too ...
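The Kac-ring recurrence is easy to verify in a few lines (a sketch; the convention that a ball flips when it moves onto a marked site is one of the equivalent choices):

```python
# Kac ring: balls shift one position per step; a ball moving onto a
# marked site flips color. After 2N steps every ball has crossed every
# marked site exactly twice, so all colors return to the initial state.
import random

def kac_ring_step(colors, marked):
    n = len(colors)
    # the ball arriving at position i came from i-1; XOR flips its color
    # when site i is marked
    return [colors[i - 1] ^ marked[i] for i in range(n)]

n = 100
rng = random.Random(42)
marked = [rng.random() < 0.2 for _ in range(n)]
colors = [0] * n                      # 'all balls white' ordered state

state = colors
for _ in range(2 * n):                # two full rotations
    state = kac_ring_step(state, marked)

print(state == colors)                # True: exact recurrence, despite
                                      # the apparent entropy growth in between
```

In between, the black/white counts approach 50/50 just as the Stoßzahlansatz predicts - which is exactly why the "proof" that entropy always grows cannot be fully trusted.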

Data correction methods resistant to pessimistic cases
I apologize for digging this thread up, but finally there is a practical implementation, and it beats modern state-of-the-art methods in many applications. It can be seen as a greatly improved convolutional-code-like concept - for example, no longer using convolution, but a carefully designed, extremely fast operation allowing it to work on much larger states instead. Other main improvements are using bidirectional decoding and a heap (logarithmic complexity) instead of the stubbornly used stack (linear complexity). For simplicity it will be called Correction Trees (CT). The most important improvement is that it can handle larger channel damage for the same rate. Adding redundancy to double (rate 1/2) or triple (rate 1/3) the message size theoretically should allow one to completely repair up to, correspondingly, 11% or 17.4% damaged bits for the Binary Symmetric Channel (each bit independently has this probability of being flipped). Unfortunately, this Shannon limit is rather unreachable - in practice we can only reduce the Bit Error Rate (BER) if the noise is significantly lower than this limit. Turbo Codes (TC) and Low Density Parity Check (LDPC) are nowadays seen as the best methods - here is a comparison of some of their implementations with the CT approach, output BER versus input noise level: We can see that CT still repairs where the others have given up - making it perfect for extreme applications like deep space or underwater communication. Unfortunately, repairing such extreme noise requires extreme resources - a software implementation on a modern PC decodes a few hundred bytes per second at extreme noise levels. Additionally, using more resources, the correction capability can be further improved (lowering the line in the figure above). On the other hand, CT encoding is extremely fast, and correction for low noise is practically free - up to about 5-6% noise for rate 1/2. In comparison, TC correction always requires a lot of calculation, while LDPC additionally requires a lot of work for encoding alone.
So in contrast to them, CT is just perfect for e.g. hard disks - everyday work involves low noise, so using CT would make it extremely cheap. On the other hand, if the data is really badly damaged, correction is still possible, but it becomes costly. Such correction could also be performed externally, allowing further improvement of the correction capabilities. Paper: http://arxiv.org/abs/1204.5317 Implementation: https://indectproject.eu/correctiontrees/ Simulator: http://demonstrations.wolfram.com/CorrectionTrees/
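The 11% and 17.4% thresholds quoted above follow from the BSC capacity C(p) = 1 - H(p); solving C(p) = rate numerically gives them directly (a sketch):

```python
import math

def H(p):
    """Binary entropy in bits (0 < p < 1)."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_limit(rate, lo=1e-9, hi=0.5 - 1e-9):
    """Largest flip probability p with 1 - H(p) >= rate, by bisection
    (capacity 1 - H(p) is decreasing on (0, 1/2))."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if 1 - H(mid) >= rate:
            lo = mid
        else:
            hi = mid
    return lo

print(round(bsc_limit(1 / 2), 3))   # ~0.11  - rate 1/2 tolerates ~11% flips
print(round(bsc_limit(1 / 3), 3))   # ~0.174 - rate 1/3 tolerates ~17.4%
```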
You could also just absorb these photons/electrons with their angular momentum - I think it's a doable experiment: shoot polarized electrons at an object floating on the surface of a conductive liquid - it would absorb polarized electrons, then release unpolarized ones - so the difference in average angular momentum should make it rotate. I think you couldn't rotate it this way, but maybe I'm wrong? My sarcasm was only about the "universal answer" when physicists don't understand something: "it's quantum." While quantum mechanics is a kind of extension of classical mechanics with the wave nature of particles - and in the first approximation (h=0) of QM you still get classical mechanics - quantum concepts are not unreachable for our minds, if only we stop relying on such an imaginary limit of our understanding. We shouldn't just "shut up and calculate", but still try to understand e.g. the dynamics behind wavefunction collapse, the field configurations behind particles ... and many other important fields ignored because of the "it's quantum" universal answer. Have a good weekend.