Everything posted by Ghideon

  1. @WAMF have you read about the electric universe or plasma cosmology ideas*? It may be helpful to compare your ideas against these rejected cosmological models to learn more about the issues raised by other members. *) Wikipedia may serve as a starting point, and there are links to various papers in the references section: https://en.wikipedia.org/wiki/Plasma_cosmology
  2. An album cover, a work of art, might not be the best starting point when trying to reject established physics. As far as I know Goethe was a poet and intended to "portray" rather than "explain". Stockholm may be worth a visit even without being offered a prize, but I'll have to admit that there are better times of the year than November...
  3. But their visit here was not completely without friction, so the cycles may be slowing down and the motion may eventually come to a halt?
  4. Before my curiosity regarding the wheel in the opening post is completely gone, can @JamesL provide an answer to the questions below? Please stay on topic and please do not post massive amounts of unrelated material. Per your ideas, are the following statements true for the wheel you show in the video* in the opening post?
     1: The wheel will periodically return to the same configuration, where all parts are at the same position as at some earlier time, for instance once every 360 degrees of rotation.
     2: The wheel will slow down and stop if there is no gravity, for instance if the wheel is in free fall or taken far from any source of gravity.
     3: In gravity greater than Earth's gravity the wheel will speed up.
     *) Let's pretend, for the sake of discussion, that the wheel actually works; meaning, as far as I can tell from the descriptions, that once the wheel is started it continues to rotate without any source of power, but there has to be gravity. It does not matter at this time how it is supposed to work or that it can't work according to known laws of physics; that may be covered later.
  5. Let's try another approach for this discussion: assume for a while that the wheel works*; meaning, as far as I can tell from the descriptions, that once the wheel is started it continues to rotate without any source of power, but there has to be gravity. Per your idea, are the following statements true? "Yes" or "no" for each question will be enough for now.
     1: The wheel will periodically return to the same configuration, where all parts are at the same position as at some earlier time, for instance once every 360 degrees of rotation.
     2: The wheel will slow down and stop if there is no gravity, for instance if the wheel is in free fall or taken far from any source of gravity.
     3: In gravity greater than Earth's gravity the wheel will speed up.
     *) It does not matter at this time how it is supposed to work; that may be covered later. This is just to improve the discussion and allow for further analysis.
  6. I did not intend to say that at all*. An example from my area of work: I would try to improve resilience in an IT architecture by starting from a geo-redundant and diversified cloud-based infrastructure, not by using a (non-working) copy of Charles Babbage's difference engine. Or, in the context of this thread: I would start from contemporary science instead of a 300-year-old device that was considered a fraud**. I asked if you are familiar with Noether's theorem; it is connected to the mathematics of classical mechanics and the laws of conservation, and hence to Newtonian mechanics, forces and perpetual motion.
     *) I also (have to) approach my area of profession by trying to learn new things and adapt to them continuously; in computer science and engineering there is progress all the time. Standing still, relying only on current knowledge, is not a suitable approach. (Not in computer science and not in life in general.)
     **) https://arxiv.org/pdf/1301.3097.pdf
  7. I'm sorry to say that I can't understand what you are trying to explain; your post does not clarify anything regarding my questions. Do you have an open mind for the possibility that you have misinterpreted the established laws of physics? If so, this forum is a good place to post questions and ask for help! Physics has come a long way since Newton (and Bessler); are you for instance familiar with Noether's theorem?
  8. I got curious and checked your first source (emphasis mine): https://www.uu.nl/en/utrecht-university-library-special-collections/collections/early-printed-books/scientific-works/das-triumphirende-perpetuum-mobile-orffyreanum-by-johann-bessler
     Questions: How do we know that what you intend to build is Bessler's design? Where did you get hold of the details of Bessler's work? (I'm also expecting a response to my earlier questions.)
  9. Sorry, I have some trouble following your argumentation and description. To avoid confusion and to allow for discussion:
     - Are you claiming the device you are building will actually perform perpetual motion, breaking established laws of physics? Or:
     - Is this a mechanics/engineering project where you want to repeat any fraud committed by Bessler to make his wheel look like perpetual motion, for instance by hiding springs, batteries, motors or other devices?
     That generalisation does not apply to me.
  10. Nice build. Bessler did not construct any device that displayed perpetual motion; such devices do not work*. I think any current discussion is more about whether a deliberate fraud was committed or not. Here is a paper you may find interesting: https://arxiv.org/pdf/1301.3097.pdf (Johann Ernst Elias Bessler was known as Orffyreus)
     *) According to currently established laws of physics, supported by observations and theories.
  11. Thanks! This helps me identify where I need more reading/studying; I'm of course aware of perfect or ideal processes. My world view though is biased by working with software and models that can be assumed to be 'ideal' but are deployed in a 'non-ideal' physical reality, where computation and storage/retrieval/transmission of (logical) information is affected by faulty components, neglected maintenance, lost documentation, power surges, bad decisions, miscommunications and what not. I think I should approach this topic more in terms of ideal physics & thermodynamics. +1 for the helpful comment.
  12. Thanks for your input. I'll try to clarify by using four examples based on my current understanding. Information entropy in this case means the definition from Shannon. By physical entropy I mean any suitable definition from physics*; here you may need to fill in the blanks or highlight where I may have misunderstood** things.
     1: Assume information entropy is calculated as per Shannon for some example. In computer science we (usually) assume an ideal case; the physical implementation is abstracted away. Time is not part of the Shannon definition, and physics plays no part in the outcome of the entropy calculation in this case.
     2: Assume we store the input parameters and/or the result of the calculation from (1) in digital form. In the ideal case we also (implicitly) assume unlimited lifetime of the components in computers, or an unlimited supply of spare parts, redundancy, fault tolerance and error correction, so that the mathematical result from (1) still holds; the underlying physics has been abstracted away by assuming nothing ever breaks or that any error can be recovered from. In this example there is some physics, but under the assumptions made the physics cannot have an effect on the outcome.
     3: Assume we store the result of the calculation from (1) in digital form on a real system (rather than modelling an ideal system). The lifetime of the system is not unlimited, and at some future point the results from (1) will be unavailable, or if we try to repeat the calculation based on the stored data we may get a different result. We have moved from the ideal computer science world (where I usually dwell) into an example where the ideal situation of (1) and (2) does not hold. In this third case my guess is that physics, and physical entropy, play a part. We lose (or possibly get incorrect) digital information due to faulty components or storage, and this has an impact on the Shannon entropy of the bits we manage to read out or calculate. The connection to physical entropy here is one of the things I lack knowledge about but am curious about.
     4: Assume we store the result of the calculation from (1) in digital form on an ideal system (limitless lifetime) using lossy compression****. This means that at a later stage we cannot repeat the exact calculation or expect an identical outcome, since part of the information is lost and cannot be recovered by the digital system. In this case we are still in the ideal world of computer science, where the predictions or outcome are determined by computer science theorems. Even if there is loss of information, physics is still abstracted away and physical entropy plays no part.
     Note here the similarities between (3) and (4). A computer scientist can analyse the information entropy change and the loss of information due to a (bad) choice of compression in (4). The loss of information in (3) due to degrading physical components seems to me to be connected to physical entropy. Does this make sense? If so: it would be interesting to see where control parameters*** fit into "example 3 vs 4", since both have a similar outcome from an information perspective but only (3) seems related to physics. A small sketch of (3) vs (4) follows below.
     *) Assuming a suitable definition exists.
     **) Or forgotten; it's a long time since I (briefly) studied thermodynamics.
     ***) Feel free to post extra references; this is probably outside my current knowledge.
     ****) This would be a bad choice of implementation for this example in a real case; it's just used here to illustrate and compare reasons for loss of information. https://en.wikipedia.org/wiki/Lossy_compression
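     A minimal Python sketch of the contrast between (3) and (4), under toy assumptions of my own: physical decay in (3) is modelled as random bit flips, and lossy compression in (4) as discarding the low bits of each byte. Neither is a claim about how real storage fails; it only illustrates that both change the information we can read back, and that the change is measurable with the same Shannon formula.

         from collections import Counter
         from math import log2
         import random

         def shannon_entropy(data: bytes) -> float:
             # Entropy in bits per symbol, from observed byte frequencies.
             n = len(data)
             return -sum((c / n) * log2(c / n) for c in Counter(data).values())

         random.seed(1)
         original = bytes(random.randrange(256) for _ in range(10_000))

         # (3) Decaying storage: flip a few hundred random bits in place.
         decayed = bytearray(original)
         for _ in range(500):
             i = random.randrange(len(decayed))
             decayed[i] ^= 1 << random.randrange(8)

         # (4) Lossy compression: quantise by zeroing the four low bits of each byte.
         lossy = bytes(b & 0xF0 for b in original)

         print(f"original: {shannon_entropy(original):.3f} bits/byte")
         print(f"decayed:  {shannon_entropy(bytes(decayed)):.3f} bits/byte")
         print(f"lossy:    {shannon_entropy(lossy):.3f} bits/byte")

     For uniform random input, the original measures close to 8 bits/byte and the quantised copy close to 4 bits/byte; the point of the sketch is only that the information loss in (3) originates in the (simulated) physics, while in (4) it is a design decision inside the ideal model.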
  13. I agree. I (yet) lack knowledge about the role of control parameters, so I'll make a general reflection for possible further discussion. When calculating Shannon entropy in the context of computer science, the level of interest is usually logical. For instance, your calculations in the coin example did not need to bother about the physical coins and papers to get to a correct result per the definition of entropy. Time is not something that affects the solution*. Physical things on the other hand, such as papers, computers and storage devices, do of course decay; even if it may take considerable time, the life span is finite. Does this make sense in the context of the levels of description above? If so, then we may say that the computer program cannot read out exact information about each physical parameter that will cause such a failure. Failure in this case means that, without external intervention (spare parts or similar), the program halts, returns incorrect data or similar, beyond what built-in fault tolerance is capable of handling. Does this relate to your following statement? Note: I've probed at a physical meaning of "cannot read out" in my answer above, an area I'm less familiar with, but your comments trigger my curiosity. There are other possible aspects; feel free to steer the discussion towards what interests you @joigus. I will think about this for a while before answering; there might be interesting aspects to this from a practical point of view. That seems like a valid conclusion; I would end my participation in this thread if it was not fun.
     *) Neglecting the progress of questions and answers; each step, once calculated, does not change.
  14. Your conclusion is, intentionally or unintentionally, incorrect. Your guess is incorrect. Data and information are different concepts, and the differences are addressed differently depending on the context of the discussion; sorting out the details may be better suited for a separate thread. A quick example: here are three different Swedish phrases with very different meanings*. Only the dots differ:
     får får får
     får far får
     far får får
     A fourth sentence with a completely different meaning:
     far far far
     Without the dots, the first three examples and the last example are indistinguishable, and that has an impact on the entropy. The example addresses the initial general question about decreasing entropy. "About the same" is too vague to be interesting in this context. I take that and similar entries as an illustration of entropy as defined by Shannon; the redundancy in the texts allows for this to be filtered out without affecting the discussion. And if the level of noise is too high from a specific sender, it may be blocked or disconnected from the channel altogether.
     *) (Approximate) translations of the four examples:
     1: does sheep give birth to sheep
     2: does father get sheep
     3: father gets sheep
     4: go father go
  15. An explicit definition of sender, receiver and channel is not required. A difference between 7-bit and 8-bit ASCII encoding can be seen in the mathematical definition of Shannon entropy. The example I provided is not about what a reader may or may not understand or find surprising (that is subjective); it's about mathematical probabilities due to the changed number of available symbols. In Swedish it is trivial to find a counterexample to your claim. Also note that you are using a different encoding than the one I defined, so your comparison does not fully apply.
  16. Some factors and examples of a cause:
     - Thickness of the layer of snow (due to the amount of snow fallen, or wind)
     - Density/water content; dry or wet snow (temperature)
     - Iced layer under a (too thin) layer of fresh snow (dry, cold snow falling on wet snow)
     Related but maybe not within scope:
     - Flat or sloped surface (due to wind or terrain, for instance) (this may not qualify as a "floor")
     - Iced surface on the fallen snow (wet snow followed by a drop in temperature) (this may not qualify as "freshly fallen")
     - Snow compacted by, for instance, a skier (not qualifying as "freshly fallen")
  17. A note on the initial question @studiot: how can Shannon entropy decrease? It's been a while since I encountered this, so I did not think of it until now; it's a practical case where Shannon entropy decreases, as far as I can tell. The Swedish alphabet has 29 letters*: the English letters a-z plus "å", "ä" and "ö". When using terminal software way back in the days, the character encoding was mostly 7-bit ASCII, which lacks the Swedish characters åäö. Sometimes the solution** was to simply 'remove the dots', so that "å", "ä", "ö" became "a", "a", "o". This results in fewer symbols and increased probability of "a" and "o", and hence a decreased Shannon entropy for the text entered into the program compared to the unchanged Swedish original. Note: I have not (yet) provided a formal mathematical proof, so there's room for error in my example; a quick numerical check follows below.
     *) I'm assuming case insensitivity in this discussion.
     **) Another solution was to use the characters "{", "}" and "|" from 7-bit ASCII as replacements for the Swedish characters.
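     A minimal numerical check in Python, where the sample sentence is just an invented example (not a formal proof): merging "å" and "ä" into "a", and "ö" into "o", reduces the number of distinct symbols, and the measured entropy drops.

         from collections import Counter
         from math import log2

         def shannon_entropy(text: str) -> float:
             # Entropy in bits per symbol, from observed character frequencies.
             n = len(text)
             return -sum((c / n) * log2(c / n) for c in Counter(text).values())

         svenska = "flickan åt räksmörgåsar på ön"  # invented sample sentence
         dotless = svenska.translate(str.maketrans("åäö", "aao"))

         print(f"original: {shannon_entropy(svenska):.3f} bits/symbol")
         print(f"dotless:  {shannon_entropy(dotless):.3f} bits/symbol")

     Merging distinct symbols into existing ones can never increase the entropy of the observed distribution, so the direction of the change does not depend on the particular sentence chosen.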
  18. Ok. I had hoped for something more helpful to answer the question I asked. I tried to sort out whether I disagreed with @studiot or just misunderstood some point of view; I did not know this was an exam that required such a level of rigour. "List" was just an example that studiot posted; we could use any structure. Anyway, since you did not like the generalisations I tried, here is a practical example instead, based on Studiot's bookcase:
     1: Studiot: "Can you sort the titles in my bookcase in alphabetical order and hand me the list of titles?" Me: "Yes."
     2: Studiot: "Can you group the titles in my bookcase by color?" Me: "Yes, under the assumption that I may use personal preferences to decide where to draw the lines between different colors."
     3: Studiot: "Can you sort the titles in my bookcase in the order I bought them?" Me: "No, I need additional information*."
     I was curious about the differences between 1, 2 and 3 above, and if and how they applied to the initial coin example. One of the aspects that got me curious about the coin example was that it seemed open to interpretation whether it is most similar to 1, 2 or 3. I wanted to sort out whether that was due to my lack of knowledge, misunderstandings or something else. Thanks for the info, but I already know how exams work.
     *) Assuming, for this example, that the date of purchase is not stored in Studiot's bookcase.
  19. I did not want to encode information; I tried to generalise @studiot's example so we could compare our points of view. You stated I should define encoding; my question is: why?
  20. Why is encoding important in this part of the discussion? In the bookcase example, does it matter how the titles are encoded? And in the coin example @studiot used "0101" (0=no, 1=yes); as far as I know, the Shannon entropy is unaffected if we use N=no, Y=yes instead, resulting in "NYNY". A small check follows below.
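     A short sanity check in Python (my own illustration): Shannon entropy depends only on the probabilities of the symbols, not on the labels we give them, so relabelling 0/1 as N/Y changes nothing.

         from collections import Counter
         from math import log2

         def shannon_entropy(s: str) -> float:
             # Entropy in bits per symbol, from observed symbol frequencies.
             n = len(s)
             return -sum((c / n) * log2(c / n) for c in Counter(s).values())

         # Relabelling the symbols leaves the distribution, and hence the entropy, unchanged.
         assert shannon_entropy("0101") == shannon_entropy("NYNY")
         print(shannon_entropy("0101"))  # 1.0 bit/symbol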
  21. I’m currently on a skiing trip and these natural sculptures of snow-covered trees make me think of pareidolia* and what triggers the phenomenon. How come I could easily spot a “yeti” but no “elephants”? Is pareidolia affected by the context? I do not know but my curiosity is triggered… By the way, here in the middle, is where I see the "yeti" *) Pareidolia is the tendency for perception to impose a meaningful interpretation on a nebulous stimulus, usually visual, so that one sees an object, pattern, or meaning where there is none. https://en.wikipedia.org/wiki/Pareidolia
  22. Thanks for your thoughts and comments. Maybe our example can be expressed mathematically, at least just to find where we share an opinion or disagree? Let I be information. J is some information that can be deduced from I. Some function f exists that produces J given I as input: f: I → J. Information K is required to create the function f; this could be a formula, an algorithm or some optimisation. In the bookcase example, I is the bookshelf and J is the list of books. The function f is intuitively easy; we create a list by looking at the books. Does this make sense? If so, in the bookcase example we have J ∈ I and K ∈ I. No additional information is needed to create a list; we look at the books (maybe K = empty is an equally valid way to get to the same result). In any case, the Shannon entropy does not change for J or I as far as I can tell; the entropy is calculated for a fixed set of books and/or a list. My curiosity is about non-trivial f and K, for instance in the initial example with the coin, and whether J ∈ I and K ∈ I hold in that case. And if there is any difference, how does the entropy of K relate to the entropy of I and J, if such a relation exists? I consider the function that produces the questions in the initial coin example to be "non-trivial" in this context. Is the thermodynamic analogy open vs closed systems? Thanks! Something new to add to my reading list.
  23. Backpropagation* is used in machine learning, for instance in neural networks. Are you claiming that none of these techniques have been applied to robotics? The proposed idea does not make much sense, sorry.
     *) See for instance https://en.wikipedia.org/wiki/Backpropagation
  24. (Bold by me.) Thanks for your input! Just to be sure: in this context, is an "ensemble" the concept defined in mathematical physics*? I may need to clarify what I tried to communicate by using "globally". I'll try an analogy, since I'm not fluent in the correct physics/mathematics vocabulary. Assume I looked at Studiot's example from an economic point of view. I could choose to count the costs for a computer storing the information about the grid, coin and questions. Or I could extend the scope and include the costs for the developer that interpreted the task of locating the coin, designed the questions and did the programming. This extension of the scope is what I meant by "globally". (Note: this line of thought is not important, just a curious observation; it is not necessary to investigate further in case my description is confusing.) As far as I can tell you are correct. If we assumed the opposite, that all paths of execution of a program are equally probable, it would be easy to contradict with a simple example program. I am not sure. For instance, when decrypting an encrypted message or expanding some compressed file, does that affect the situation?
     *) Reference: https://en.wikipedia.org/wiki/Ensemble_(mathematical_physics). Side note: "ensemble" also occurs in machine learning, an area more familiar to me than thermodynamics.
  25. I'm used to analysing digital processing of information, but I've not thought about the computations that take place as part of designing an algorithm and/or programming a computer. The developer is interpreting and translating information into programs, which is a physical process affecting entropy? In studiot's example the questions must come from somewhere, and some computation is required to deduce the next question. This may not be part of the example, but I find it interesting: should one isolate the information about the grid and coin, or try to look "globally" and include the information, computations etc. that may be required to construct the correct questions? A concrete example: the questions about columns can stop once the coin is located. This is a reasonable algorithm but not the only possible one; a computer (or human) could continue to ask until the last column. It's less efficient but would still locate the coin (see the sketch below). Do the decision and the computations behind that decision affect entropy? I don't have enough knowledge about physics to have an opinion. I agree, at least to some extent (it may depend on how I look at "all the information"). By examination we can extract/deduce a lot of information from the questions and answers, but it is limited. The following may be rather obvious, but I still find it interesting:
     1: Given the first question "Is it in the first column?" and an assumption that the question is based on known facts, we may say that the questioner had to know about the structure of the cells. Otherwise the initial question(s) would be formulated to deduce the organisation of the cells, for instance "Are the cells organised in a grid?" or "Can I assign integers 1, 2, ..., n to the cells and ask if they contain a coin?"
     2: There is no reason to continue to ask about the columns once the coin is located in one of the columns. This means that the number of columns can't be deduced from the questions.
     The reasoning in 1 & 2 is, as far as I can tell, trivial for a human being but not trivial to program. (I'm travelling at the moment; I may not be able to respond promptly.)
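     A toy sketch in Python of the two questioning strategies, under my own assumption that the questions are a linear scan over columns and then rows (the scheme in studiot's example may differ): the early-stopping version asks fewer questions, and its transcript no longer reveals the total number of columns.

         import random

         def locate_coin(cols: int, rows: int, coin: tuple, stop_early: bool):
             # Ask yes/no questions "Is it in column c?" / "Is it in row r?" in order.
             # Returns the deduced position and the number of questions asked.
             questions = 0
             found_col = found_row = None
             for c in range(cols):
                 questions += 1                    # "Is the coin in column c?"
                 if c == coin[0]:
                     found_col = c
                     if stop_early:
                         break
             for r in range(rows):
                 questions += 1                    # "Is the coin in row r?"
                 if r == coin[1]:
                     found_row = r
                     if stop_early:
                         break
             return (found_col, found_row), questions

         random.seed(3)
         coin = (random.randrange(4), random.randrange(4))
         print(locate_coin(4, 4, coin, stop_early=True))   # question count varies with position
         print(locate_coin(4, 4, coin, stop_early=False))  # always cols + rows questions

     With stop_early=True the number of questions depends on where the coin is, so an observer of the transcript cannot in general recover the grid width; with stop_early=False the count is always cols + rows, which matches point 2 above.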