Duda Jarek

Everything posted by Duda Jarek

  1. I don't know. The discussion about Bell inequalities for solitons has evolved a bit here: http://www.sciforums.com/threads/do-nonlocal-entities-fulfill-assumptions-of-bell-theorem.153000/
  2. While the dynamics of (classical) field theories is defined by (local) PDEs like the wave equation (finite propagation speed), some fields allow for stable localized configurations: solitons. Take the simplest example, the sine-Gordon model, which can be realized by pendula on a rod, connected by springs. While gravity prefers that the pendula hang "down", increasing the angle by 2pi also means "down" - if these two different stable configurations (minima of the potential) meet, a soliton (called a kink) corresponding to a 2pi rotation is required between them, like here (the right one is moving - Lorentz contracted):
Kinks are narrow, but there are also solitons filling the entire universe, like a 2D vector field with (|v|^2-1)^2 potential - a hedgehog configuration, with all vectors pointing outward, is a soliton. Such solitons are highly nonlocal entities.
A similar example of nonlocal entities in a "local" field theory are Couder's walking droplets: a corpuscle coupled with a (nonlocal) wave - showing quantum-like effects: interference, tunneling, orbit quantization (thread: http://www.scienceforums.net/topic/65504-how-quantum-is-wave-particle-duality-of-couders-walking-droplets/ ). The field depends on the entire history and affects the behavior of the soliton or droplet. For example, the Noether theorem says that the entire field guards (among others) angular momentum conservation - in an EPR experiment, momentum conservation is in a sense encoded in the entire field - in a very nonlocal way.
So can we see real particles this way? The only counter-argument I have heard is the Bell theorem (?). But while solitons arise in local field theories (information propagates with finite speed), these models of particles - solitons/droplets - are extremely nonlocal entities. In contrast, the Bell theorem assumes local entities - so does it apply to solitons?
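For concreteness, the sine-Gordon kink has the closed form phi(x) = 4 arctan(e^x) - a standard textbook solution, not tied to any particular pendulum realization. A minimal numeric sketch checking that it solves the static equation phi'' = sin(phi) and interpolates between the two "down" vacua 0 and 2pi:

```python
import math

def kink(x, v=0.0, t=0.0):
    """Sine-Gordon kink: phi(x) = 4*atan(exp(x)) when at rest;
    a moving kink is Lorentz-contracted via the gamma factor."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return 4.0 * math.atan(math.exp(gamma * (x - v * t)))

# The kink interpolates between the two "down" vacua 0 and 2*pi:
print(kink(-20.0))   # ~0
print(kink(20.0))    # ~2*pi

# Check the static field equation phi'' = sin(phi) by finite differences:
h = 1e-4
for x in (-1.0, 0.0, 1.5):
    phi_xx = (kink(x + h) - 2 * kink(x) + kink(x - h)) / h**2
    assert abs(phi_xx - math.sin(kink(x))) < 1e-5
print("static sine-Gordon equation satisfied")
```

The check works because phi'(x) = 2 sech(x), so phi''(x) = -2 sech(x) tanh(x), which equals sin(4 arctan(e^x)).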
  3. I have been thinking about designing molecular descriptors for virtual screening: such that two molecules have similar shape if and only if their descriptors are similar. They could be used separately, or to complement e.g. some pharmacophore descriptors. They should be optimized for ligands - which are usually elongated and flat. Hence I thought to use the following approach:
- normalize rotation (using principal component analysis),
- describe bending - usually one coefficient is sufficient,
- describe the evolution of the cross-section, for example as an evolving ellipse.
Finally, the shape below is described by 8 real coefficients: length (1), bending (1) and 6 for the evolution of the ellipse in the cross-section. It expresses the bending and the fact that this molecule is approximately circular on the left, and flat on the right:
preprint: http://arxiv.org/pdf/1509.09211
slides: https://dl.dropboxusercontent.com/u/.../shape_sem.pdf
Mathematica implementation: https://dl.dropboxusercontent.com/u/12405967/shape.nb
Have you encountered something like this? Is it a reasonable approach? I am comparing it with USR (ultrafast shape recognition) and (rotationally invariant) spherical harmonics - have you seen other approaches of this type?
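The rotation-normalization step can be sketched with a PCA of the atomic coordinates. This is only a minimal illustration on toy data of my own, not the linked Mathematica implementation:

```python
import numpy as np

def normalize_orientation(coords):
    """Center a point cloud (e.g. atom positions, shape (N, 3)) and rotate
    it so the principal axes align with x (longest) through z (flattest) -
    the PCA normalization step, assuming elongated, flat ligands."""
    X = np.asarray(coords, dtype=float)
    X = X - X.mean(axis=0)                # remove translation
    cov = X.T @ X / len(X)                # 3x3 covariance matrix
    eigval, eigvec = np.linalg.eigh(cov)  # eigenvalues in ascending order
    R = eigvec[:, ::-1]                   # longest axis first
    return X @ R

# toy elongated, flat cloud, rotated by an arbitrary angle
rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3)) * np.array([5.0, 1.0, 0.2])
a = 0.7
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
aligned = normalize_orientation(pts @ Rz.T)
var = aligned.var(axis=0)
print(var)  # variances sorted descending: elongation on x, flatness on z
```

After normalization, the bending and cross-section coefficients can be extracted in this canonical frame, independent of the molecule's original orientation.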
  4. Radiogenic heat is significant in Earth's internal heat budget ( http://en.wikipedia.org/wiki/Earth%27s_internal_heat_budget ) and its effects can be observed e.g. as a high He3/He4 ratio from volcanoes and geysers: http://www.nature.com/nature/journal/v506/n7488/full/nature12992.html http://www.wired.com/2014/04/what-helium-can-tell-us-about-volcanoes/
  5. From the NASA article: "Until now, all ULXs were thought to be black holes. The new data from NuSTAR show at least one ULX, about 12 million light-years away in the galaxy Messier 82 (M82), is actually a pulsar. (...) Black holes do not pulse, but pulsars do." If by "a gravitational sink" you mean massive and small - sure. However, as I understand the NASA article, it is considered to be a star: a macroscopic object made of matter, instead of a black hole with all matter gathered in the central singularity. The assumption is that there is a rotating object made of matter, producing much more energy than we can explain (assuming baryon number conservation) - what is the source of this energy?
  6. Indeed, the main question here is whether baryon number is ultimately conserved. Violation of this number is required by:
- hypothetical baryogenesis, producing more matter than anti-matter,
- many particle models, like supersymmetric ones,
- massless Hawking radiation - to conserve baryon number, black holes would have to evaporate with baryons.
On the other hand, there is a fundamental reason to conserve e.g. electric charge: the Gauss law says that the electric field of the whole Universe guards charge conservation. In other words, adding a single charge would mean changing the electric field of the whole Universe, proportionally to 1/r^2. We don't have anything like that for baryon number (?) - a fundamental reason for conserving this number.
Indeed the search for such violation (by proton decay) has failed, but this search was performed in room-temperature water tanks. One of the questions is whether the required conditions can be reached there: whether the energy required to cross the barrier holding the baryon together can be spontaneously generated in room-temperature water. In other words: whether the Boltzmann distribution of the size of random fluctuations still behaves well for such huge energies.
If baryon number is not ultimately conserved, violating it would rather require extreme conditions, like during the Big Bang (baryogenesis) ... or in the center of a neutron star, which will exceed all finite limits before reaching the infinite density required to start forming the black hole horizon and the central singularity. Such a "baryon burning phase" would result in enormous energy (nearly complete matter -> energy conversion) - and we observe sources of this kind, like gamma ray bursts, for which "The means by which gamma-ray bursts convert energy into radiation remains poorly understood, and as of 2010 there was still no generally accepted model for how this process occurs (...) Particularly challenging is the need to explain the very high efficiencies that are inferred from some explosions: some gamma-ray bursts may convert as much as half (or more) of the explosion energy into gamma-rays." ( http://en.wikipedia.org/wiki/Gamma-ray_burst )
So we have something like a supernova explosion, but instead of exploding due to neutrinos (from e+p -> n), this time using gammas - can you think of mechanisms other than baryon decay for releasing such huge energy?
NASA news from two days ago: http://www.nasa.gov/press/2014/october/nasa-s-nustar-telescope-discovers-shockingly-bright-dead-star/ about a 1-2 solar mass star with more than 10 million times the power of the Sun ... which is no longer considered a black hole! Where does this enormous energy come from? While fusion or p+e->n converts less than 1% of matter into energy, baryon decay converts more than 99% - are there some intermediate possibilities?
  7. Have you forgotten to add "in contrast to forming infinite density singularity in large matter concentrations" ?
  8. Not me. Hawking radiation means: gather lots of baryons into a black hole, wait until it evaporates (massless Hawking radiation) - and there are that many baryons fewer in the universe. Also, if we believe in baryogenesis, which created more matter than anti-matter ... it also violated baryon number conservation.
  9. After Stephen Hawking's "There are no black holes" ( http://www.nature.com/news/stephen-hawking-there-are-no-black-holes-1.14583 ), now from http://phys.org/news/2014-09-black-holes.html : "But now Mersini-Houghton describes an entirely new scenario. She and Hawking both agree that as a star collapses under its own gravity, it produces Hawking radiation. However, in her new work, Mersini-Houghton shows that by giving off this radiation, the star also sheds mass. So much so that as it shrinks it no longer has the density to become a black hole." Which is nearly exactly what I was saying: instead of growing a singularity in the center of a neutron star, it should rather immediately go through some matter -> energy conversion (like evaporation through Hawking radiation, or in other words some proton decay) - releasing a huge amount of energy (finally released as gamma ray bursts) and preventing the collapse.
  10. The determinant is just a sum over all permutations of products - I don't see a problem here? The Cramer formula allows writing the inverse matrix as a rational expression of determinants - which seems sufficient ... Anyway, an exponential number of terms still seems required to find the determinant ... But maybe there is a better way to just find the n-th power of a (Grassmann) matrix ... ?
  11. Indeed, you would need [math]g_i^{-1}[/math] for such a direct inverse, e.g. [math](AG)^{-1} = diag(g_i^{-1}) A^{-1}[/math]. However, I think having a quick (polynomial) way to find the determinant of such matrices should be sufficient (no inverse needed). I was fighting with Gaussian elimination, but terms with all combinations can appear - their number grows exponentially ...
  12. While determining the existence of an Euler cycle (going once through each edge) of a given graph is trivial, for a Hamilton cycle (going once through each vertex) it is an NP-complete problem (e.g. worth a million dollars). Denoting the adjacency matrix of the graph by [math]A[/math] and its number of vertices by [math]n[/math], the diagonal elements of [math]A^n[/math] also count eventual Hamilton cycles - the problem is that they also count other cycles of length n, going more than once through some vertex. The question is whether we could somehow "subtract" those going multiple times through some vertex ...
Grassmann variables (anticommuting), used in physics to work with fermions, seem perfect for this purpose. Assume we have Grassmann variables [math]g_1, ..., g_n[/math]: [math]g_i g_j = - g_j g_i[/math], which also implies [math]g_i g_i = 0[/math]. So a product of [math]n[/math] such variables is nonzero iff it contains all indexes (vertices). Denote by [math]G = diag(g_i)[/math] the diagonal nxn matrix made of these variables. It is now easy to see that:
Graph [math]A[/math] contains a Hamilton cycle iff [math]Tr((AG)^n) \neq 0[/math].
Grassmann variables can be realized by matrices - so we could see this formula in terms of block matrices ... unfortunately known realizations require e.g. [math]2^n[/math]-size matrices, which is not helpful - we only get an implication: if P != NP, then there is no polynomial-size matrix realization of Grassmann variables. So probably these realizations just require exponentially large matrices, which seems reasonable. We could easily find [math](AG)^{-1}[/math], so maybe there is a way to quickly find [math](1-AG)^{-1}=\sum_{i=0}^n (AG)^i[/math], which should be sufficient? Any thoughts?
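The Tr((AG)^n) idea can be sketched numerically by representing monomials in nilpotent variables as vertex-subset bitmasks. To avoid possible sign cancellations between different cycles, this sketch (my own variable names) uses commuting nilpotent variables, g_i^2 = 0 without the anticommuting sign - the subset-coefficient representation then reduces exactly to the Bellman-Held-Karp dynamic program, still exponential (O(2^n n^2)), consistent with the expectation that polynomial-size realizations shouldn't exist:

```python
def has_hamilton_cycle(adj):
    """Count directed Hamilton cycles through vertex 0 via the
    nilpotent-variable version of the Tr((AG)^n) idea: monomials in g_i
    with g_i^2 = 0 are stored as vertex bitmasks, so only walks visiting
    distinct vertices survive (Bellman-Held-Karp DP, O(2^n n^2))."""
    n = len(adj)
    full = (1 << n) - 1
    # state: {(mask, last): number of paths from 0 covering mask, ending at last}
    paths = {(1, 0): 1}
    for _ in range(n - 1):
        nxt = {}
        for (mask, last), cnt in paths.items():
            for v in range(1, n):
                if adj[last][v] and not (mask >> v) & 1:  # g_v^2 = 0
                    key = (mask | (1 << v), v)
                    nxt[key] = nxt.get(key, 0) + cnt
        paths = nxt
    # close the cycle back to vertex 0
    return sum(cnt for (mask, last), cnt in paths.items()
               if mask == full and adj[last][0])

C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]  # 4-cycle
print(has_hamilton_cycle(C4))    # 2 (both directed traversals)
star = [[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]]  # K_{1,3}
print(has_hamilton_cycle(star))  # 0
```

The dictionary of (visited-set, endpoint) states is precisely the subset structure a matrix realization of the nilpotent algebra would have to encode - hence the exponential size.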
  13. Roamer, I think in most parliamentary open-list elections you choose among candidates from your district (?) - so they represent not only their parties, but also their regions, and so the people who voted for them. This goes all the way to single-member districts, where people only choose someone directly representing their region. E.g. in Germany (a mixed system) voters get a card with two lists: of candidates and of parties.
  14. Indeed John, as I have mentioned, in some situations it is impossible to find a voting system satisfying seemingly basic requirements, as in the mentioned Arrow's or Holmstrom's theorems - mainly because of Condorcet cycles: preferences of the type A<B, B<C, C<A. This is partially solved in Borda-type systems - voters give point values to options, and finally the option with the highest number of points is chosen. Quantitatively defining an "optimality" function of an apportionment is somewhat similar - we choose the apportionment with the best "optimality". As "it has been said that democracy is the worst form of government except all the others that have been tried" - for example, probably the biggest problem with dictatorship is finding the proper person and especially his successor - we still have to find the best voting methods for various situations, not only in politics. So how should we choose them?
  15. Nonexistence of an optimal voting system can be proven in many situations; I wanted to propose a general discussion about choosing the best voting systems for various purposes and countries. Especially regarding the most interesting case - parliamentary elections: there is a territory divided into districts in which people vote for local candidates (usually representing one of the parties), and we want to find a seat apportionment fulfilling two priorities:
1) the total number of seats of the different parties is proportional to their total number of votes,
2) locally, those having the majority of votes are chosen.
Unfortunately these two priorities exclude each other - the systems in use are usually based on the first one (proportional representation, e.g. Holland, Portugal, Switzerland, Spain, Poland, Brazil) or the second (e.g. single-member districts - USA, Canada). As we would like to fulfill both priorities, there are also mixed systems (e.g. Germany), like: half of the seats are chosen by local majorities, half by proportional representation - which has some technical difficulties. There is also the more modern biproportional apportionment being developed to fulfill both priorities at once, but it is based on approximations.
I think that in the age of computers we don't have to be satisfied with an approximation, as we can find the optimal apportionment - if only we quantitatively define what we mean by the best apportionment: define an "optimality" function, such that we search for the apportionment with its highest value. Then a computer can start with some approximation and search nearby apportionments to find the best one. As it is a difficult computational problem, after the voting statistics are announced, one could wait e.g. a day during which everybody could search for a better apportionment (with a higher "optimality" value), and finally the best one found would be set.
So the question is how to define this "optimality" function - it should be some average (e.g. weighted arithmetic) of terms corresponding to penalties for both priorities:
1) minus the distance between the proportion of seats and the proportion of votes, e.g. the simplest: the Gallagher index. We could also take a more complex distance to emphasize that accuracy is more essential for small parties (e.g. Kullback-Leibler).
2) e.g. the sum over districts of minus "the number of voters choosing a candidate with a larger number of votes than the winner of this district" - for single-member districts (it can be easily generalized). So it is a kind of count of the people having a reason to complain, as their candidate got more votes than the winner - it is zero if the one having the majority has won.
Many questions remain, like which weights, distance, function in 2), and averages we should choose. E.g. an arithmetic average is more tolerant of compensation than a geometric average (e.g. is 3,0 better than 1,1?). Then, what kind of question should be asked - to motivate voters to come and to properly represent their choices. Maybe a choice of a single candidate, maybe a few, or maybe some preferential system? What would be the best voting systems and why - especially for your countries? What do we mean by the best apportionment - how should we define the "optimality" function?
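The Gallagher (least squares) index mentioned in 1) is straightforward to compute. A minimal sketch with toy numbers of my own, not real election data:

```python
import math

def gallagher_index(vote_shares, seat_shares):
    """Gallagher least-squares disproportionality index:
    sqrt(1/2 * sum_i (v_i - s_i)^2), with shares in percent.
    0 means perfect proportionality; larger means worse."""
    assert len(vote_shares) == len(seat_shares)
    return math.sqrt(0.5 * sum((v - s) ** 2
                               for v, s in zip(vote_shares, seat_shares)))

# toy example: three parties, vote % vs seat %
votes = [45.0, 35.0, 20.0]
seats = [50.0, 40.0, 10.0]
print(round(gallagher_index(votes, seats), 3))  # 8.66
```

An "optimality" function could then use minus this index as the penalty term for priority 1), combined with the district-level complaint count of priority 2).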
  16. Interesting - so why do e.g. cosmologists care what was happening before us, or astrophysicists care what is happening inside a star, where we will never be able to measure directly ... or what the solution of the Schrodinger equation for hydrogen is - for which we cannot measure the whole wavefunction, only observe its far consequence: the energy spectrum. Indeed, modern physics has lost objectivity - everybody has their own subjective physics ... about which real physics doesn't care - the world just objectively works as it works ...
  17. So whichever of the two paths this photon chooses, it will change the momentum of the corresponding mirror - be "observed", as you say ... so how can we get interference?
  18. swansont, I am not talking about detection of the event by a subjective observer, but about what is objectively happening there ... physics still worked without observers (e.g. millions of years ago). Delta1212, I am asking about something more concrete than probability: e.g. the energy or charge distribution. Can the energy of a single photon, or the charge of an elementary charge, dissipate? That is what would happen if you saw them as pure wave packets (without a mechanism preventing dissipation - making them solitons).
  19. Reflecting from a mirror means changing the momentum of the photon and so of the mirror - if you are saying that the photon literally goes both ways, does that mean it has changed the momentum of both mirrors? By how much? - as if a complete photon went both ways, or (as there was initially only a single photon) maybe there were two "halves of a photon" (or of the charge, in electron interference)? And generally, if you want a particle/photon to follow a more complex trajectory, every change of direction needs a momentum transfer with something (the vacuum???)
  20. Even for the Mach-Zehnder interferometer we draw two classical trajectories, saying only that we don't know which one is chosen. Here the situation is even simpler - no interference. I think you are referring to Feynman path integrals? But their basic approximation is taking the classical trajectory and small variations around it (the van Vleck formula) - in QM, energy travels through slightly fuzzed classical trajectories.
  21. So imagine a single excited atom produces a single optical photon, which passes through a prism and is finally absorbed by another single atom - suggesting that the energy has traveled localized, through a concrete trajectory between them. Whereas if it were a wave packet, this energy should dissipate - especially after the prism. Don't we need some additional mechanism to hold this wave packet together - to make it maintain its shape (become a soliton)?
  22. Particles in quantum mechanics are often seen as wave packets - linear superpositions of plane waves summing to a localized excitation. But wave packets dissipate - for example, passing such a single photon through a prism, its different plane waves should choose different angles - such a single photon would dissipate: its energy would be spread over a growing area ... while we know that in reality its energy remains localized: it will finally be absorbed as a whole by e.g. a single atom. Analogously for other particles like the electron - any momentum dependence while scattering would make such a wave packet dissipate (e.g. the indivisible elementary charge). How is this problem of dissipating particles solved? Aren't some additional (nonlinear?) mechanisms needed to hold particles together - to make these wave packets maintain their shape, becoming so-called solitons?
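The rate of this dissipation can be illustrated with the standard free-particle result: a Gaussian packet of initial width sigma_0 spreads as sigma(t) = sigma_0 * sqrt(1 + (hbar*t / (2*m*sigma_0^2))^2). A minimal numeric sketch of this textbook formula, with an electron chosen for concreteness:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg

def packet_width(sigma0, t, m=M_E, hbar=HBAR):
    """Width of a free Gaussian wave packet at time t (standard textbook
    dispersion formula - nothing holds the packet together)."""
    return sigma0 * math.sqrt(1.0 + (hbar * t / (2.0 * m * sigma0**2))**2)

# An electron packet initially 1 angstrom wide:
sigma0 = 1e-10  # m
for t in (0.0, 1e-16, 1e-15, 1e-12):
    print(t, packet_width(sigma0, t))
# after a picosecond the packet is ~5800x wider than it started
```

This is exactly the spreading that a soliton-like mechanism would have to counteract to keep the energy (or charge) localized.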
  23. I am not sure what you mean by "a point taking up all the density of the universe"? In an infinitesimal volume in the center of a neutron star there would be a relatively small mass, just infinitely compressed - the question GRT doesn't bother with is whether matter can indeed be infinitely compressed. Indeed, the Big Bang is another suspicious assumption, especially as it would definitely exceed the condition of being inside an event horizon, which means the only direction anything could travel is toward the center ... It is one of the reasons I prefer the Big Bounce scenario, in which we don't need a singularity ... but that is for a different discussion: http://www.scienceforums.net/topic/62644-what-about-2nd-law-of-thermodynamics-in-cyclic-universe-model/
  24. By destruction of baryons I mean e.g. proton decay - baryons turning mainly into gammas (nearly complete matter -> energy conversion). Such a huge explosion in the center should temporarily prevent the collapse, and finally the high-energy gammas should leave the star in bursts. If proton decay is possible, at some extreme temperature below infinity it should become statistically essential - a neutron star should start "burning its baryons" in the center before the event horizon starts forming ...
  25. The event horizon has to evolve in a continuous way - it cannot just emerge at a nonzero radius. See for example: http://mathpages.com/rr/s7-02/7-02.htm