
Information symmetry


Fred56


Just in case anyone thinks my rants about a problem with our 'view' of information are out the window, here's something I found about quantum superposition used as a measure of information content:

The Shannon entropy of a message is the number of binary digits, i.e. "bits", needed to encode it. While the structure, quality, or value of the information in Shannon entropy may be an unknown, the quantity of information can be known. Shannon entropy and thermodynamic entropy are equivalent.

 

The universal laws of nature are explained in terms of symmetry. The completed infinities, mathematician Georg Cantor's infinite sets, could be explained as cardinal identities, akin to "qualia" --Universally distributed attributes-- from which finite subsets, and elements of subsets (quantum decoherence-wave function collapse) can be derived.

 

Completed infinities, called "alephs" are distributive in nature, similar to the way that a set of "red" objects has the distributive property of redness (qualia). Properties, or "attributes" like red are numbers in the sense that they interact algebraically according to the laws of Boolean algebra. Take one object away from the set of red objects and the distributive number "red" still describes the set. The distributive identity (attribute) "natural number" or "real number" describes an entire collection of individual objects.

 

The alephs can be mapped into a one-to-one correspondence with a proper subset of themselves. The "infinite" Cantorian alephs are really distributive (qualia).

 

Yet, if we have a finite set of 7 objects, the cardinal number 7 does not really distribute over its individual subsets. Take anything away from the set and the number 7 ceases to describe it -wave function collapse, condensation into specific localization?

 

Symmetry is analogous to a generalized form of self evident truth, and it is a distributive-attribute via the laws of nature, being distributed over the entire system called universe. A stratification of Cantorian alephs with varying degrees of complexity. Less complexity = greater symmetry = higher infinity-alephs. So the highest aleph, the "absolute-infinity" distributes over the entire set called Universe and gives it "identity".

 

The highest symmetry is a distributive mathematical identity -so a total unknown, but possibly analogous to a state of "nothingness". This fact is reflected, in part, by the conservation laws.

 

So an unbound-infinite-potentia and a constrained-finite-bound-actuality are somehow different yet the same. The difference and sameness relation is a duality. Freedom (higher symmetry) and constraint-complexity-organizational structure (lesser symmetry) form a relation that can be described by an invariance principle." -Russell E. Rierson

 

He is talking about the informational "potential" of entanglement, and he seems to be saying there might be a (mathematical) problem...

We can think of the entropy law as describing what we can term the entropy principle: a universal, fundamental, conserved tendency toward irreversibly randomizing particles. In its extreme limit, the entropy principle is predicted to result in the "heat death" of the universe—a condition wherein chaos reigns. Yet, this prediction—indeed, the entropy law itself—ignores the existence of life, which expresses increasing nonrandomness. The simplest, most parsimonious way to account for life is to postulate that the irreversibility implied in the entropy law is countered by another principle that imposes nonrandomness on elements of nature. This principle is enformy—the universal, fundamental, conserved tendency toward increasing complexity.

This guy is saying we need to redefine a certain "well-understood" principle -which incidentally projects straight to 'classical' information.

And some questions (I know everyone just loves questions...):

Since all uniquely quantum phenomena are in some way a consequence of interference, all uniquely quantum phenomena are somehow a consequence of non-commutativity. ...Therefore, we could add to Feynman’s list the fundamental problems of understanding the origin of non-commutativity and the intimately related problem of understanding the origin of Planck’s constant of action.

The problem, then, is to answer the following inter-related questions:

1. Why do we have to represent quantum processes and states by complex-valued mathematical objects?

2. Why the superposition principle? — That is, why do we represent quantum states and processes by objects that add up linearly?

3. Where does the Born rule come from? Why is this the right way to calculate probabilities?

4. Why non-commutativity?

5. Why does Planck’s constant of action have the particular magnitude that it has?

Maybe he thinks there are some problems with our current definitions of things, like, certain information...


Just in case anyone thinks my rants about a problem with our 'view' of information are out the window, here's something I found about quantum superposition used as a measure of information content:

One has to allow for the possibility that the material is wrong.

 

"We can think of the entropy law as describing what we can term the entropy principle: a universal, fundamental, conserved tendency toward irreversibly randomizing particles. In its extreme limit, the entropy principle is predicted to result in the "heat death" of the universe—a condition wherein chaos reigns. Yet, this prediction—indeed, the entropy law itself—ignores the existence of life, which expresses increasing nonrandomness. The simplist, most parsimonious way to account for life is to postulate that the irreversibility implied in the entropy law is countered by another principle that imposes nonrandomness on elements of nature. This principle is enformy—the universal, fundamental, conserved tendency toward increasing complexity."

 

This shows a basic misunderstanding of the second law. "Life" isn't a closed system and does not represent a reduction in overall entropy even though local entropy is decreased.


I agree, you have spotted the odd man out. This bunch appears to be trying to describe the whole show in terms of 'design', ...er, anyway I wanted to see what their logic was.

What brought you to the conclusion that they are saying life is a "closed system", btw?

Can you comment on the Born rule?

 

Chapter 2

 

The 'new' approach to a definition of 'information', and what is meant by 'measurement', is still somewhere off in the distance, but we already know (and have known for more than 70 years) that photons, 'light', are tiny packets of energy. Therefore any 'information' that photons carry is due to their energy -which is proportional to their frequency.
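As a rough worked example of that proportionality (Planck's relation; the figures below assume green light at roughly 540 nm, so they are illustrative only):

[math] E = h\nu \approx 6.63\times10^{-34}\,\mathrm{J\,s} \times 5.6\times10^{14}\,\mathrm{Hz} \approx 3.7\times10^{-19}\,\mathrm{J} \approx 2.3\,\mathrm{eV}[/math]

So whatever 'information' a single visible photon delivers, it rides on a quantum of only a few electron-volts.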

The classical model of thought and 'external information' sees two 'isolated systems': the one being observed contains a 'complete map' of information -except that the other system, the observer, can't access all of it easily; otherwise it's all there, an ontological existence in and of itself.

The observer 'obtains' this information and 'processes' it, which involves an electrical and biochemical system, and is therefore an energy-using process, but the 'thoughts' themselves are 'freely' available to the epistemological mind.

Obviously the classical model is incorrect and we need to go back to the metaphorical drawing board. But the classical model is still useful. So we need to be clear about what is intended when the term 'information' is used, especially now that quantum 'information' -an indeterminate state- has become 'available'.

Assuming that information has mass and behaves like heat does, quantum information (a potential which hasn't "arrived" in the world in 'real' terms, but can be manipulated and "stored" like a memory) seems to be a new kind of resource that, for the time being, refuses to accept any well-defined label. Is an entangled state a 'real' chunk of information 'mass', or not until it gets 'counted'? In which case what, if anything, does an entanglement contribute? How can we 'store' something that isn't there yet? Obviously, since this is possible, the problem must be somewhere else...

 

What's the difference between:

1. the 'information' that is 'in' my brain because of (a process that 'tracks') change in the external world (knowledge of the world or epistemology -is unstable)

2. the observed 'information' (which requires the above change in my brain) 'in' that external world (its existence or ontology -is stable)

3. an entangled quantum state which 'contains' (potential) information, (which “doesn't” exist until it is observed -and is ontically “unstable")

4. an 'observable' quantum of information -a bit or a photon (which “always” exists -has a stable ontic “form”)

??

 

Is brain information (our memory and learning) any different from a book, or a library, or equivalent to lots of bits in a big 'high-capacity' computer of some kind? Do the individual synapses represent bit-stream channels, and are there several levels (stratification) of information, so that increasingly complex 'representation' requires complex coding 'algorithms' and efficient 'stacking' and 'queuing' of 'mind information' in whatever representation it has? Will we have 'complete', or even maximal, 'observational data' on the inner workings, and develop a theory of mind based on something more 'concrete' (peer-reviewed and experimentally corroborated, giving us insight into ourselves as never before) than our 'recognition' of this character (sentience) in other living things?

 

Why do animals in general, and mammals in particular (us most of all), have such a 'developed' or evolved (egressed?) functionality that allows us to contemplate not just what the universe does, but why, and why it should bother to exist, and especially why bother with (creating) conscious beings able to look at it?

 

We appear to be in a rather fortunate set of (apparently) random coincidences. The universe (at least our bit of it), is finely tuned for the existence (hence the emergence) of life. The sun is an 'ordinary' star (not too big and hot like lots of others we can see). The planet is just inside (by about one diameter) the 'not too hot' zone. We seem to be in "just the right" position, and on a planet that is orbiting a star that has coalesced, along with a few inner rocky planets, from a mixture of heavier elements (cooked to order by a supernova or two), hydrogen and other odds and sods to get 'us', eventually.

 

We (and the rest of the pantheon of biology) are the current model evolution has provided, and our pre-programmed task seems to be to figure out why it made us (without resorting to an external 'intelligence' that hides from us his reasons for making the universe).

The anthropic principle doesn't have a simple resolution:

"Stronger than the anthropic principle is what I might call the participatory principle. According to it we could not even imagine a universe that did not somewhere and for some stretch of time contain observers because the very building materials of the universe are these acts of observer-participancy.

You wouldn't have the stuff out of which to build the universe otherwise. This participatory principle takes for its foundation the absolutely central point of the quantum:

No elementary phenomenon is a phenomenon until it is an observed (or registered) phenomenon."

 

"We will first understand how simple the universe is when we recognize how strange it is."

-John A Wheeler

 

The ability of matter to travel through space in an indeterminate way (as demonstrated by double-slit experiments with different-sized bits of matter, from electrons up to large arrays of carbon atoms -buckyballs) illustrates why measurement is a problem, or even impossible. Up close, particles and atoms can be observed if they are cooled down to within a few degrees, or hundredths of a degree, of absolute zero. This reduces the vibrational modes that tend to swamp the quantum signal with random jittering and emission of IR photons. Nonetheless the cost appears to go up proportionally:

“...[T]o investigate the dynamical properties of the cooling or heating of the atom it is convenient to calculate its von Neumann entropy. ... In the beginning, the total entropy increases strongly due to the entanglement between atom and field. The amount of entanglement then slowly decreases until it reaches its minimum at half of the revival time. Subsequently, the total entropy increases again. The local entropy of the field rapidly approaches the entropy of the atom.

 

Following this, it remains constant until the revival phase. This means that the rate of entropy transport from the atom to the field is the same as the rate for the total entropy (and thus entanglement) decrease. To account for the behaviour of fields, virtual particles would have to have properties at variance with relativity. The distinction between so-called real particles and virtual particles has become progressively blurred in Standard Theory, particularly as we can now make apparently real particles perform the double slit trick. I think that the hypothesis of virtual particles has outlived its usefulness.

 

What actually happens between moments of particle interaction remains an interesting question. Any attempt to look at the intervening period merely shortens it to the point at which we choose to take a measurement. Except at the point where a particle interacts, it seems to consist of a multitude of superposed and/or entangled states. However, as soon as it interacts, all but one of the particle's multitudinous states become eliminated from history. At that point the path which remains in imaginary time becomes the real time history of the particle." -Peter James Carroll

 

Ruminations

 

The Observable is that universe which is:

either a stable condensation of objects: a mass/energy which projects the structure of its own condensation, so that it orders and binds, or attracts; a record or memory: a store of energy itself as matter.

Or a cloud of unstable potential distance and mass/energy, uncountable of itself: when it collapses or condenses, change becomes observable.

 

Observation is that universe which is:

either a cloud of unstable potentiality which:

orbits uncounted the condensed record and memory of imagination: the store, a mass/energy which attracts and binds, or structures the cloud.

 

Or a stable condensation of observers, who imagine the cloud's structures so that they condense, they collapse into memory records, and are available.


Time and distance are two "different" properties that we observe. Both exist because of the nature of energy to disperse, and then condense (into atoms, then into stars, and so on). It's otherwise called change. We "observe" and measure this change.

 

It seems that distance is changing, and we assign something called time to this "change in distance", and map this to the "fixed surface" of a rotating body (as we move around it). But both are aspects of change, a fundamental thing about the world. We have to change too (or we stop living), and this is therefore how we "communicate" with and "measure" the world around us.

 

But it doesn't "happen" for free; there is a cost (to the universe and to us lifeforms). This cost is both the required change (expenditure of energy) we need to make (to measure the world and remain alive), and the change that the world (the universe) also must necessarily make. This is the dispersal (something otherwise known as entropy, the "measurement" of energy change over "time").

Entropy explains lots of stuff.

 

Entropy can be defined mathematically (some of you may, in fact, already know this). It turns out that it can be defined in terms of a density operator (the density matrix); this is how von Neumann described it, and it also recovers the classical and Shannon entropy definitions.

 

The von Neumann definition is:

 

[math] S \equiv -k\,\mathrm{Tr} \{ \hat \rho \ln \hat \rho \}[/math] , where [math] \hat \rho=\sum_k p_k| \phi_k \rangle\langle \phi_k| [/math]

substituting, this gives:

 

[math] S \equiv -k\sum_j \langle \phi_j|\left(\sum_k p_k|\phi_k\rangle\langle\phi_k|\right) \ln \left(\sum_k p_k|\phi_k\rangle\langle\phi_k|\right)|\phi_j\rangle[/math]

expanding, [math]\ln \hat\rho = 0 + (\hat\rho -1)+ \frac {(\hat\rho -1)^2} {2!} + ...[/math]

which yields:

 

[math] S \equiv -k\sum_j p_j \ln p_j[/math]

which is the Gibbs equation of entropy.

 

Since [math] p_j = 1 / \Omega [/math] for a microcanonical assembly,

 

Then [math] S = k \ln\Omega [/math] , the Boltzmann equation of entropy.
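For anyone who wants to check the chain above numerically, here is a minimal sketch (assuming k = 1, natural logarithms, and a density matrix written in its own eigenbasis, so it is just a diagonal matrix of the p_k; numpy is only used for the eigenvalue step):

[code]
import numpy as np

def von_neumann_entropy(rho, k=1.0):
    # S = -k Tr(rho ln rho), evaluated via the eigenvalues of rho
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # treat 0*ln(0) as 0
    return -k * np.sum(evals * np.log(evals))

# a mixed state rho = sum_k p_k |phi_k><phi_k|, written in its own eigenbasis
p = np.array([0.5, 0.25, 0.25])
rho = np.diag(p)

print(von_neumann_entropy(rho))           # ~1.0397 (in units of k)
print(-np.sum(p * np.log(p)))             # Gibbs form -k sum_j p_j ln p_j, same number

# microcanonical case: p_j = 1/Omega for every j  ->  S = k ln(Omega)
Omega = 4
print(von_neumann_entropy(np.diag(np.full(Omega, 1.0 / Omega))))   # ln 4 ~ 1.386
print(np.log(Omega))
[/code]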


"Entropy in quantum mechanics (von Neumann entropy)

Main article: von Neumann entropy

In quantum statistical mechanics, the concept of entropy was developed by John von Neumann and is generally referred to as "von Neumann entropy". Von Neumann established the correct mathematical framework for quantum mechanics with his work Mathematische Grundlagen der Quantenmechanik. He provided in this work a theory of measurement, where the usual notion of wave collapse is described as an irreversible process (the so called von Neumann or projective measurement). Using this concept, in conjunction with the density matrix he extended the classical concept of entropy into the quantum domain.

 

It is well known that a Shannon based definition of information entropy leads in the classical case to the Boltzmann entropy. It is tempting to regard the Von Neumann entropy as the corresponding quantum mechanical definition. But the latter is problematic from quantum information point of view. Consequently Stotland, Pomeransky, Bachmat and Cohen have introduced a new definition of entropy that reflects the inherent uncertainty of quantum mechanical states. This definition allows to distinguish between the minimum uncertainty entropy of pure states, and the excess statistical entropy of mixtures."[17]

 

It's from the main entry on entropy on the wiki.

 

I can't pretend to say I understand enough to know what is what, though it sounds like this later view of entropy on the QM scale is one still under scrutiny.


Right. There are certain assumptions that have to be made to make the above 'work' as a definition (mathematical formula). Perhaps "others" would like to discuss what problems this poses...?

 

Here they are: just say that there is a basis (a partition of phase space) which doesn't commute with the normal Hamiltonian:

 

The von Neumann entropy S[ρ|H] is useful in the thermodynamic context, where the interest is a-priori limited to stationary (equilibrium) states. If we want to study the growth of entropy during an ergodization process, we may consider S[ρ|A], where A is a basis (or a “partition” of phase space) that does not commute with H. ...

In the latter case the entropy of a pure state is in general non-zero. In the present study we have derived an explicit expression for the minimum uncertainty entropy S0 (N ) of pure states. This can be associated with the average over the minimum entropic uncertainty... We also have derived an expression for the excess statistical entropy SF [ρ] of mixtures. The latter can be used as a measure for lack of purity of quantum mechanical states, and it is strongly correlated with the von Neumann entropy SH [ρ]. It is bounded from above by (1 − γ), where γ is Euler’s constant.

The total information entropy S[ρ], unlike the von Neumann entropy, has properties that do make sense from quantum information point of view.

 

--"The information entropy of quantum mechanical states:"

Alexander Stotland 1 , Andrei A. Pomeransky 2 , Eitan Bachmat 3 and Doron Cohen 1

1

Department of Physics, Ben-Gurion University, Beer-Sheva 84105, Israel

2

Laboratoire de Physique The´orique, UMR 5152 du CNRS, Universite´ Paul Sabatier, 31062

Toulouse Cedex 4, France

3

Department of Computer Science, Ben-Gurion University, Beer-Sheva 84105, Israel

 

 

Anyways, getting back to the observer/observed and mass/energy connection (but entropy is pretty well understood as a connection itself, from 'us' anyway).

 

We could model observers as something like a group of individual compartments (partitions of reference, say), each with a store of learning. And the group manages to communicate and abstract this collection of individual stores into an external, stable form, vs the relatively unstable and changing form within each member of the group.

 

And this abstracted form grows and becomes more stable, and is the basis, the accumulation, and a partition itself, a collection of many individual and group efforts.

 

The internal mind of H. sapiens externalises, and creates a more stable form of itself, but it is completely meaningless to any other group of observers, who require their own (species-specific) store.

Our telephone conversations, the music and video entertainment we enjoy, our bank transactions—all involve the storage, transmission, and manipulation of digital information in the form of zeros and ones, represented by billions upon billions of bits.

Information is also fungible. These bits take many physical forms, from tiny charges on a transistor, to micron-sized patches of magnetic material, to microscopic burn marks on a CD or DVD. However, all conventional physical bits share one defining feature: A bit is in one state or the other—that is, it is always either zero or one but never both.

Quantum information is completely different.

Quantum information is stored not in bits but in “qubits,” quantum bits whose value can be one or zero but can also be both zero and one at the same time.

An ordinary transistor cannot be both on and off. But if it is small enough, so that the rules of quantum mechanics take over, such an oddity is not only possible but is also typical.

Thus, a single atom can be in what is known as a “superposition” of two different states. For example, an atom’s outermost electron can be spinning with its axis pointing up or down, or it can be in a superposition of up and down.

-books.nap.edu
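A toy illustration of the contrast the quoted passage draws, using nothing more than a two-component state vector and the Born rule (outcome probability = squared amplitude); the vectors are generic, not tied to any particular physical system:

[code]
import numpy as np

# a classical bit: always one definite basis state, here "0"
bit = np.array([1.0, 0.0])

# a qubit in an equal superposition of |0> and |1>: (|0> + |1>)/sqrt(2)
qubit = np.array([1.0, 1.0]) / np.sqrt(2)

# Born rule: measurement probabilities are the squared amplitudes
print(np.abs(bit) ** 2)     # [1. 0.]   -- definitely zero
print(np.abs(qubit) ** 2)   # [0.5 0.5] -- either outcome, decided only on measurement
[/code]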

Some _personal_ opinions again :)

 

I think this is all healthy reflection! The thread is converging back to the point after some diversions. Persistence is the key to progress :)

 

There are many elaborations and questions one could make out of this topic indeed; it's hard to know where to start. But from some past discussion I think a decent starting point is to appreciate roughly the success of classical statistical mechanics, and how partitioning and probability work there. Without this background I suspect the objections that I think some of us are trying to put forward are hard to appreciate.

 

I see several related issues in bringing this forward.

 

In classical stat mech, there is no ambiguity in selecting a partitioning and thus effectively defining your microstructure and probability space (these terms are all related and are sort of different views of the same thing). So everything is a lot easier due to the idealisations.

 

In a revision of this, the partitioning is ambiguous for several reasons, thus making any constructions conditional on this choice.

 

The other issue, which possibly touches on gravity and quantum gravity, is that the microstructures must be defined in terms of relations to the observer, not only making them observer-relative, but also dependent on the observer's relational capacity; this connects information capacity and mass, but in a way that is not yet understood.

 

Like Zurek said in the context of "quantum darwinism": "What the observer knows is inseparable from what the observer is".

-- Rev. Mod. Phys 75, 715 2003

 

So not only is the partitioning ambiguous; the nature of the observer IMO most probably puts constraints on the complexity of partitionings that are possible.

 

Also, since the new view is by construction made in a dynamical context, the concepts of state and dynamics are blurred, and so are ontology and epistemology.

 

About entropy, one can ask: what do we want it to be? What do we want it to be a measure of? This question also determines the measure itself. The von Neumann entropy is IMO certainly no divine measure given to us. It's simple and formally close to the classical counterpart, but if you analyse it with the mentioned issues in the back of your head, it's easy to motivate yourself to find something that is more satisfactory (this is regardless of the practical value ANY idealisation has).

 

/Fredrik


I wonder if anyone will notice the mistake in the expansion above for the substitution?

 

OK, no takers. I made a boo-boo with the log series (every second term is 'supposed' to be negative, right?).

 

expanding, [math] \ln \hat\rho = 0 + (\hat\rho -1)- \frac {(\hat\rho -1)^2} {2!} + ...[/math]

...which isn't what I did first time through, doh!:doh:

What I mean is it is negative, in which case it certainly should be as well (I hate it when they do that)...


Observation: We classify information as the symbols we use for 'messages', and measure this content, or certainty, that such external symbolic information has (as books, electrical signals or magnetic regions on a spinning disc, or photons from a screen). But it has absolutely zero content unless it is processed, ultimately by some observer, who must expend energy to 'interpret' the message and its content. So where is the information? If it is the content of the message and this reduces the internal entropy of the observer, what corresponding increase in entropy is needed for this reduction in uncertainty (by the observer's processing of the message)?


I think you wonder, how come an observer can increase its information and certainty spontaneously?

 

I picture this, as usual, as being driven by the total "entropy" (entropy should be used with care in this context, so I put it in quotes, but the real point is that the expected change is in this direction), so the environment simply favours the evolution of knowledgeable inhabitants, until there is an information balance where the observer loses as much as it gains. Analogous to heat exchange, except generalized. I personally don't see this as a big conceptual mystery at the moment.

 

/Fredrik

 

There are many ideas made along these lines, various decoherence-inspired papers where the environment is important. That is quite interesting and probably provides part of the answer, but not all of it as far as I can see. It does not always, in my opinion, make sense to consider the environment as an infinite sink in the same way that we sometimes do in thermodynamics. This is again an idealisation, but of the wrong kind for what we need IMO. It seems to be made with the reductionist philosophy in mind, in the sense that everything is understood as a simplification of something more complex. But I think that is missing the whole point: that there is a limit to the relatable complexity for any observer. I think we need to analyse the situation from the right perspective - from the admittedly "incomplete" perspective, rather than to try to understand the incomplete perspective as a reduction of the complete perspective. I.e. analyse the logic of induction, based on incomplete information.

 

/Fredrik

 

The part that is cheating is when the "mass" of the environment is orders of magnitude larger than that of the observer. But what if the mass of the observer is comparable to the remainder of the universe (whatever that means)? Then we need another strategy of analysis, because ultimately there is a symmetry between the observer and the environment. You might as well choose to say that the environment is the observer, and the observer (the remainder of the universe) is the environment of that observer, although twisted.

 

/Fredrik


Fredrik:

I have considered that the learning process (acquisition of information) which requires energy, must necessarily be equal or proportional to the energy expended. The energy expended in a brain is not the only requirement (an observer must stay alive, and find food and so on).

 

In this sense, all expenditure of energy (including that made to find and ingest food), therefore represents the entropy of the acquired learning (knowledge or information). In other words all observers are the result of their entire lifetime of 'observation': is eating something 'measuring' it, or is there any expenditure that is not part of a measuring or observing 'process'? A brain cannot exist in isolation, it requires a body, a body requires food, and so on.

 

Can you see a problem with the conclusion that knowledge is the sum of all expenditure, for any organism (observer)? Which means that every lifeform is its own knowledge or information store (the entire organism), not just its 'brain' or equivalent?

 

P.S. I guess that means I agree with Mr Zurek's statement: there is no way to separate the information from the observer, because the information is the observer.

Not sure I can say I'm a Quantum Darwinist yet though.

Life is able to acquire information and use it to extend itself. Inanimate objects cannot do this -they don't convert part of themselves into energy, then use it to move around, or observe, etc.


Hello Fred, I was away for a few days.

Fredrik:

I have considered that the learning process (acquisition of information) which requires energy, must necessarily be equal or proportional to the energy expended. The energy expended in a brain is not the only requirement (an observer must stay alive, and find food and so on).

Are you not using the term energy where perhaps "free energy" - that is, energy that can be spontaneously converted to work (classically speaking) - might be more appropriate? Which in essence really is more related to entropy IMO, since it measures the amount of energy you can extract for work and still have a spontaneous reaction, which is determined by the total increase of entropy (i.e. being overall probable).

 

Phrased differently, that can, loosely speaking, be seen as a measure of how unlikely a process you can drive locally and still have the global process likely. Likely and unlikely are meant to associate with probable vs non-probable and spontaneous vs non-spontaneous.

 

If this is what you mean I think we are describing it similarly.

 

In this sense, all expenditure of energy (including that made to find and ingest food), therefore represents the entropy of the acquired learning (knowledge or information). In other words all observers are the result of their entire lifetime of 'observation': is eating something 'measuring' it, or is there any expenditure that is not part of a measuring or observing 'process'? A brain cannot exist in isolation, it requires a body, a body requires food, and so on.

 

Sure, I think in the abstract sense any interaction is communication. Eating included, if we are talking about humans.

 

Can you see a problem with the conclusion that knowledge is the sum of all expenditure, for any organism (observer)? Which means that every lifeform is its own knowledge or information store (the entire organism), not just its 'brain' or equivalent?

 

I am not sure what you mean by expenditure, but perhaps we can see expenditure as a "risk"? To make progress you take risks?

 

P.S. I guess that means I agree with Mr Zurek's statement: there is no way to separate the information from the observer, because the information is the observer. Not sure I can say I'm a Quantum Darwinist yet though. Life is able to acquire information and use it to extend itself. Inanimate objects cannot do this -they don't convert part of themselves into energy, then use it to move around, or observe, etc.

 

Ultimately I think even "inanimate" objects are evolved the same way, except their complexity level is far lower. In a certain sense perhaps "reproduction" can occur indirectly by propagating your opinion into the environment, because this possibly "selects" the environment to be more selectable to your alikes? That's of course a gross simplification, but the idea is an interesting possibility. This would mean that this abstract reproduction would be a sort of self-stabilisation of cooperating observers.

 

/Fredrik


not sure what you mean by expenditure, but perhaps we can see expenditure as a "risk"?

Well, the concepts of expenditure and risk are part of established game theory and complex-systems theory, but perhaps energy 'exchange' is a safer term.

The exchange that the external world 'initiates' by projecting images or sounds, say, at us, so it becomes 'obvious', is only a very small amount (if you think about this, it is surprising that any observer is able to learn much at all). We expend, or exchange further energy with external information to form a stable representation, a mental image, of it.

Our brains let us imagine, or project back, various possible reasons, or theories, about this communication, and we are bound to test them to see if our logic is correct (it is a 'safe' event, or it isn't, in which case it might be time for further 'expenditure').

 

This is what humans (and most other mammals) do with a lot of the 'messages' they get from the external world. So learning must involve a necessary usage of the internal energy store (and its ongoing maintenance) to keep the information (that is learning) from dissipating (entropy again).

Life does this by slowing the dissipation process, or riding the surface of it, and harnessing it or accumulating "energy", as knowledge or information, which is the organism itself; we are each a result of "de gustibus unum".


Everyone who has used a computer knows about "data loss", and the loss of work (documents they have been working on, or other "important" information).

But the loss of external records is inevitable: the burning of the libraries at Alexandria, and computer crashes are examples of the applied science of entropy. External records, messages that are representative, or abstracted information from a human mind, represent absolutely zero information (content), because they are in fact, a channel. We cannot receive any information from a book written in a language we don't understand. So the information in that sense, isn't in the book (you might be able to get another book, of exactly the same make and with the same pages, just different symbols in it).

 

The information is what ends up in (more than) one mind, from all those written words, or spoken words or sounds, music, images (we have very developed visual apparatus), and so on. The channels are noisy; we actually receive only tiny amounts of energy as information from the world, which we must then expend energy making into some kind of internal record, which we know is transient (we can forget it). This is another well-known practical lesson in the existence of entropy (we even measure it): we are obliged to recall and review our knowledge constantly, and a group of minds can do this more effectively.

 

So books and words, and all methods of acquiring and understanding that we have in the external world (all the computers that analyse characters, or translate languages, or search for combinations of strings of bits), are all channels, analytical in the Shannon sense, and Shannon's ideas of communication (and the entropy changes involved) come into play.
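As a concrete anchor for the Shannon side of this: the 'quantity' of a message (the bits needed to encode it, as quoted at the start of the thread) can be estimated directly from symbol frequencies. A minimal sketch, with an arbitrary example string:

[code]
import numpy as np
from collections import Counter

def shannon_bits_per_symbol(message):
    # H = -sum_i p_i log2(p_i), with p_i the relative frequency of each symbol
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * np.log2(c / n) for c in counts.values())

msg = "information symmetry"     # arbitrary example text
H = shannon_bits_per_symbol(msg)
print(H)                         # bits per symbol
print(H * len(msg))              # a lower bound on the bits needed to encode the whole message
[/code]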

 

We have ideas of certain measurement which logically include the idea of no measurement (zero or no result) and all measurement (infinite or complete results), and are forced to abstract these notions as mere vague symbols, as ideas of something that is beyond (all ideas). We can't communicate with 'nothing': communication, by definition, requires something, which is the channel for that information. We use the same idea for the channel as we do for the messages it 'transfers', from and to us and the world, or each other.

 

part 2:

We can understand the difference between a channel and the information it carries.

 

A copper wire is a channel that can carry several different messages at once (a multiplexed channel), and these days often does so as part of a much wider network of devices (intelligent phones). But the channel by itself can only modulate any message (it always has a certain noise level, or limit to the amount of information it can transmit); it can never provide a message itself. The messages require the medium (the copper wire), but are distinct from the channel. Books do not 'contain' information; they transmit the information that the words (that are understood, or decoded) contain as valid and syntactic elements.
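That 'noise level, or limit' is exactly what Shannon's capacity formula quantifies. For a nominal 3 kHz line with a 30 dB signal-to-noise ratio (figures chosen purely for illustration):

[math] C = B\log_2\left(1+\frac{S}{N}\right) \approx 3000 \times \log_2(1001) \approx 3\times10^{4}\ \mathrm{bits/s}[/math]

No encoding scheme can push more than that through the wire, which is the sense in which the channel modulates, but never supplies, the message.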

 

A single photon, or its measurement, cannot represent anything more meaningful than a "Schrodinger's cat" state. An indeterminate state -will the message arrive?- is the only possible outcome if there is no known channel, to measure or receive communication from. Sending information back, through the same channel (duplexing), might be possible, but the reception of a message defines any (simplex) channel, and our senses are all one-way receivers. If 1. there is no channel, then 2. there is only a possibility of there being one at some point, so a single photon cannot send any information except this (yes, there is a channel). If the single photon is also encoded (say it's polarised by a certain angle), this can be a message (of 1 bit), a yes/no message (these days, it's common to encode single photons using more than one quantum state 'variable', which is how quantum encryption is done).

 

But you can't send a message with zero photons (or no bits), and a 0 bit isn't 'nothing', it has to be a real physical signal of some kind.


Where do the messages come from, or go to?

Every real process is a sharply inhomogeneous sequence of discrete events. This empirical idea of conventional science has been especially emphasised within recent advance of the 'science of complexity', even though its conventional, scholar[ly] version always fails to give [a] truly consistent, rigorously derived description of 'events' and their natural 'emergence'.

 

[S]ome physically tangible and relatively large change (event) should necessarily happen in each elementary interaction within a useful computing system (where the particular case of external 'absence of change' is possible, but cannot dominate and practically always hides within it some internal, externally 'invisible' or transient change/event).

 

By contrast, every unitary evolution inherent in the canonical mathematical basis of quantum mechanics (and in 'mathematical physics' in general) means that no event, nothing truly 'inhomogeneous' can ever happen within it ('unitary' means 'qualitatively homogeneous').

 

Therefore, the fundamental contradiction between unitary theoretical schemes of 'quantum information processing', and nonunitary character of any real computation process is evident: unitary quantum computation tries to obtain 'something from nothing', which is directly related to its suspiciously priceless, 'miraculously' increased efficiency with respect to classical, presumably nonunitary computation.

 

The same contradiction can be expressed as explicit violation of the 'energy degradation principle', or (generalised) 'second law of thermodynamics', by the unitary computation: any system, or 'machine', producing a measurable, non-zero change (like actual computation) should also produce a finite, and strongly limited from below, amount of 'waste'/'chaos', or 'dissipation'/'losses', or 'heat energy'; which result cannot, whatever its particular manifestation is, be compatible with the unitary quantum evolution.

 

This 'something-from-nothing' problem is the main defect of conventional quantum computation theory underlying its other contradictions. The conventional scheme of quantum computation tries to overcome this contradiction by one or another 'combination' of the unitary evolution and nonunitary 'measurement' (or 'decoherent interaction') stages presented as a sort of 'punctuated unitarity'.

 

[C]onventional quantum computation theory tries to reproduce the corresponding structure of standard quantum mechanics itself, where unitarity of the 'main' dynamics is trickily entangled with the explicitly nonunitary 'quantum measurement' processes, though this 'connection' and its components remain quite 'mysterious' and are imposed only formally by the canonical 'quantum postulates'.

 

[T]he unitarity of [a] total computation scheme will be violated by the unavoidable 'measurement' stages, which invalidates, though in an unpredictable and 'inexplicable' way, the conclusions based on unitary system dynamics.

 

[A]ny realistic quantum computation process should include much more involved, dynamically emerging configurations of the participating systems which belong to a higher sublevel of (complex) quantum dynamics and cannot be considered only 'statistically': all the 'dynamical' details hidden in the 'statistical' postulates that describe the standard, 'averaged' quantum dynamics do matter at this higher sublevel of quantum microsystem dynamics.

 

[O]nly the unreduced, dynamically multivalued description of true quantum chaos can provide a causally complete, detailed picture of the irreducibly probabilistic quantum computation dynamics.

--Dynamically Multivalued, Not Unitary or Stochastic, Operation of Real Quantum, Classical, and Hybrid Micro-Machines

A.P. Kirilyuk, Institute of Metal Physics, Kiev, Ukraine

 

What's my brain doing?

 

The brain is in a similar environment to most other organs. This 'background' -the mesomorphic structure that supports all our organs, is essentially thermodynamic.

The brain, 'made' out of special cells that use electric charge (separation of ions like Ca++ and Na+) to communicate, operates on a different level to the cellular background; but hormones, synapse signalling, and other processes mean there is a need to understand the way the neurons control the background (and vice versa). It's a complex problem, or one for the complexity theorists. Some theories (not many) appear to claim that what the brain does is due to its quantum nature (that it's a quantum processor). What they perhaps don't realise is that entanglement simply wouldn't have a chance of surviving all the thermal noise.

 

All that can really be said about what's happening in a brain is that "something emerges" from the electrical signalling network. The network requires a structure, or an architecture, to do what it does, and it relies on the background "system" for signalling (its "working principle"), and feedback (hysteresis).

 

Feedback processes are everywhere, and keep chaos, as it were, at bay. This is something the early Greeks figured out...

 

Quantum entanglement is a very unstable property. Several different quantum states can be entangled at once, or superposed (multi-entangled), but this doesn't last very long in the ordinary world. You have to isolate a system (at close to absolute zero), for entanglement to "emerge" from the background.

[/me]

 

But "emergence" is the result of a complex system of biological structures and the electro-magnetic discharges they produce. All of this is intricately dependent upon the geo/climatic conditions of the environment for optimum conditions and survivability.

 

This emergence is simply a form of the "sum of the parts" and a result of the "synergy" built between those complex parts. It's symbiotically entrenched in physicality. So, using the term "emerge" really doesn't say anything "magic" or even remotely specific about thought processes or the weight of conscious awareness.

 

Does an em wave have a weight?

[/him]

 

Weight is something we measure, and is explained by a (condensed) bit of matter under the influence of a gravitational field. It's ok to say that weight "emerges", because of this (simple) fact...

 

Photons are massless, but they are energy/momentum (their rest mass is zero, but since they are never at rest, this is a "short cut" we make to determine something about the nature of photons).

 

Or alternatively we are reluctant to leave the term out of any equation, because physical theories deal with the momentum of physical (inertial) mass. Photons have a mass/energy equivalent, which means that information (the photons that we "see") has mass/energy also...


Generation of single optical plasmons in metallic nanowires coupled to quantum dots

 

A. V. Akimov1,4,5, A. Mukherjee1,5, C. L. Yu2,5, D. E. Chang1, A. S. Zibrov1,4, P. R. Hemmer3, H. Park1,2 & M. D. Lukin1

 

1. Department of Physics,

2. Department of Chemistry and Chemical Biology, Harvard University, Cambridge, Massachusetts 02138, USA

3. Department of Electrical and Computer Engineering, Texas A&M University, College Station, Texas 77843, USA

4. P.N. Lebedev Physical Institute RAS, Leninskiy prospect 53, Moscow, 119991, Russia

5. These authors contributed equally to this work.

 

Abstract

 

Control over the interaction between single photons and individual optical emitters is an outstanding problem in quantum science and engineering. It is of interest for ultimate control over light quanta, as well as for potential applications such as efficient photon collection, single-photon switching and transistors, and long-range optical coupling of quantum bits. Recently, substantial advances have been made towards these goals, based on modifying photon fields around an emitter using high-finesse optical cavities. Here we demonstrate a cavity-free, broadband approach for engineering photon–emitter interactions via subwavelength confinement of optical fields near metallic nanostructures. When a single CdSe quantum dot is optically excited in close proximity to a silver nanowire, emission from the quantum dot couples directly to guided surface plasmons in the nanowire, causing the wire's ends to light up.

 

Non-classical photon correlations between the emission from the quantum dot and the ends of the nanowire demonstrate that the latter stems from the generation of single, quantized plasmons. Results from a large number of devices show that efficient coupling is accompanied by more than 2.5-fold enhancement of the quantum dot spontaneous emission, in good agreement with theoretical predictions.

--Nature 450, 402-406 (15 November 2007) | doi:10.1038/nature06230; Received 10 April 2007; Accepted 4 September 2007

 

Many-coloured photons

Ian S. Osborne

 

The ability to detect single photons makes it possible to investigate the quantum properties of light and to implement strategies for quantum cryptography and quantum communications with single photons as the information carriers. To date, photon detectors have come in two guises: They can be designed either for sensitivity at a single energy or over a broad range of energies, but neither option has offered on-chip tunability of the detected wavelength.

 

Gustavsson et al. now describe a frequency-tunable single-photon detector for the microwave regime using a double quantum dot structure. They are able to shift the discrete energy levels of one dot with respect to the other by application of appropriate gate voltages. Using time-resolved charge detection techniques, they can then directly relate the detection of a tunneling electron to the absorption of a single photon, the energy of which corresponds to the tuned energy-level separation between the two dots.

 

-- (see) Phys. Rev. Lett. 99, 206804 (2007).

