Posts posted by Ghideon

  1. 52 minutes ago, Mitko Gorgiev said:

    It is taken from the Pink Floyd’s album “The dark side of the moon”.

    An album cover, a work of art, might not be the best starting point when trying to reject established physics.

    52 minutes ago, Mitko Gorgiev said:

    Johann Wolfgang von Goethe has already given the right picture

    As far as I know, Goethe was a poet and intended to "portray" rather than "explain".

     

    25 minutes ago, Mitko Gorgiev said:

    No, you won't see me in November in Stockholm.

    Stockholm may be worth a visit even without being offered a prize. But I'll have to admit that there are better times of the year than November...

     

  2. Before my curiosity regarding the wheel in the opening post is completely gone, can @JamesL provide an answer to the questions below? Please stay on topic and please do not post massive amounts of unrelated material.

    Per your ideas, are the following statements true for the wheel you show in the video* in the opening post?  

    1: The wheel will periodically return to the same configuration where all parts are at the same position as at some time before, for instance once every 360 degrees of rotation.

    2: The wheel will slow down and stop if there is no gravity, for instance if the wheel is in free fall or taken far from any source of gravity. 

    3: In gravity greater than earth gravity the wheel will speed up.

     

     

    *) Let's pretend for the sake of discussion that the wheel actually works; meaning, as far as I can tell from the descriptions, that once the wheel is started it continues to rotate without any source of power, but there has to be gravity. It does not matter at this time how it is supposed to work or that it can't work according to known laws of physics, that may be covered later. 

  3. Let's try another approach for this discussion:

    Assume for a while that the wheel works*; meaning, as far as I can tell from the descriptions, that once the wheel is started it continues to rotate without any source of power, but there has to be gravity. Per your idea, are the following statements true? "Yes" or "no" for each question will be enough for now.

    1: The wheel will periodically return to the same configuration where all parts are at the same position as at some time before, for instance once for every 360 degrees of rotation.

    2: The wheel will slow down and stop if there is no gravity, for instance if the wheel is in free fall or taken far from any source of gravity. 

    3: In gravity greater than earth gravity the wheel will speed up.

     

    *) It does not matter at this time how it is supposed to work, that may be covered later. This is just to improve the discussion and allow for further analysis.

  4. 2 hours ago, JamesL said:

    But as you said, you only consider what's known and won't consider what you don't know.

    I did not intend to say that at all*.

    2 hours ago, JamesL said:

    How do you expect science to advance?

    Example from my area of work: I would, for instance, try to improve resilience in an IT architecture by starting from a geo-redundant and diversified cloud-based infrastructure, not by using a (non-working) copy of Charles Babbage's difference engine.

    Or, in the context of this thread: I would start from contemporary science instead of a 300-year-old device that was considered a fraud**.
     

    2 hours ago, JamesL said:

    And you guys being unwilling to ask about math because it's not what you've spent time working on seems to me to be more of the same. It simply is not a discussion if the math a theory is based on and has been demonstrated is ignored because of opinion.

    I asked if you are familiar with Noether's theorem; it is connected to the mathematics of classical mechanics and the conservation laws, and hence to Newtonian mechanics, forces and perpetual motion.

     

     

    *) Also, I (have to) approach my profession by continuously trying to learn new things and adapt to them; in computer science and engineering there is progress all the time. Standing still, relying only on current knowledge, is not a suitable approach (not in computer science, and not in life in general).
    **) https://arxiv.org/pdf/1301.3097.p

  5. 13 hours ago, JamesL said:

    The best thing I can tell you is when I am finished and it works, the scientists and engineers at Utrecht University could decide for themselves.

    I'm sorry to say that I can't understand what you are trying to explain; your post does not clarify anything regarding my questions.

     

    14 hours ago, JamesL said:

    Basically I can be open minded while still considering the laws of physics and how they can actually be applied.

    Do you have an open mind about the possibility that you have misinterpreted the established laws of physics? If so, this forum is a good place to post questions and ask for help!
    Physics has come a long way since Newton (and Bessler); are you, for instance, familiar with Noether's theorem?

  6. 1 hour ago, JamesL said:

    When it works you'll find out that there was nothing free about it.

    I got curious and checked your first source: 

    Quote

    In 1745, he fell to his death while he was building a windmill, taking his secrets with him to the grave.

    Emphasis mine, source https://www.uu.nl/en/utrecht-university-library-special-collections/collections/early-printed-books/scientific-works/das-triumphirende-perpetuum-mobile-orffyreanum-by-johann-bessler

    Questions: How do we know that what you intend to build is Bessler's design? Where did you get hold of the details of Bessler's work?

    (I'm also expecting a response to my earlier questions)

  7. 8 hours ago, JamesL said:

    The wheel accelerates and then continues to run on conserved energy.

    and for example

    4 hours ago, JamesL said:

    This disc can be called the principle piece of my machine. Accordingly, this wheel consists of an external wheel (or drum) for raising weights which is covered with stretched linen.

    Sorry, I have some trouble following your argumentation and description. To avoid confusion and to allow for discussion:
    -Are you claiming the device you are building will actually perform perpetual motion, breaking established laws of physics?
    Or:
    -Is this a mechanics/engineering project where you want to repeat the fraud committed by Bessler to make his wheel look like perpetual motion? For instance by hiding springs, batteries, motors or other devices.

    9 hours ago, JamesL said:

    And I know you guys don't like math so........

    That generalisation does not apply to me.

  8. 2 hours ago, JamesL said:

    Where I am at on my build

    Nice build. Bessler did not construct any device that displayed perpetual motion; such devices do not work*. I think any current discussion is more about whether or not a deliberate fraud was committed. Here is a paper you may find interesting:

    Quote

    The remarkable life of Councillor Orffyreus was first told in a biographical dictionary compiled in the late 18th century by Friedrich Wilhelm Strieder, the court librarian and archivist of Hesse-Kassel.24 In the 19th century, the story was repeated in popular German collections of curiosities.25,26 Strangely, the substance of those accounts — which establishes that Orffyreus perpetrated a deliberate fraud— escaped the attention of many of the authors who wrote about him in the 20th century.

    Source:  https://arxiv.org/pdf/1301.3097.p (Johann Ernst Elias Bessler was known as Orffyreus)

    *) according to currently established laws of physics; supported by observations and theories. 

  9. 22 hours ago, studiot said:

    In particular physical (ie thermodynamic) entropy and other thermodynamic properties are calculated on the basis of 'perfect' or ideal processes.

    Thanks! This helps me identify where I need more reading/studying; I'm of course aware of perfect or ideal processes. My world view, though, is biased by working with software and models that can be assumed to be 'ideal' but are deployed in a 'non-ideal' physical reality, where computation and storage/retrieval/transmission of (logical) information are affected by faulty components, neglected maintenance, lost documentation, power surges, bad decisions, miscommunications and what not.

    I think I should approach this topic more in terms of ideal physics and thermodynamics. +1 for the helpful comment.

  10. 21 hours ago, joigus said:

    I assume you mean that @studiot's system doesn't really change its entropy? Its state doesn't really change, so there isn't any dynamics in that system? It's the computing system that changes its entropy by incrementally changing its "reading states." After all, the coin is where it is, so its state doesn't change; thereby its entropy doesn't either. Is that what you mean?

    Thanks for your input. I'll try to clarify by using four examples based on my current understanding. Information entropy in this case means the definition from Shannon. By physical entropy I mean any suitable definition from physics*; here you may need to fill in the blanks or highlight where I may have misunderstood** things.

    1: Assume information entropy is calculated as per Shannon for some example. In computer science we (usually) assume an ideal case; the physical implementation is abstracted away. Time is not part of the Shannon definition and physics plays no part in the outcome of the entropy calculation in this case.

    2: Assume we store the input parameters and/or the result of the calculation from (1) in digital form. In the ideal case we also (implicitly) assume an unlimited lifetime of the components in computers, or an unlimited supply of spare parts, redundancy, fault tolerance and error correction, so that the mathematical result from (1) still holds; the underlying physics has been abstracted away by assuming nothing ever breaks or that any error can be recovered from. In this example there is some physics, but under the assumptions made the physics cannot have an effect on the outcome.

    3: Assume we store the result of the calculation from (1) in digital form on a real system (rather than modelling an ideal system). The lifetime of the system is not unlimited, and at some future point the results from (1) will be unavailable, or if we try to repeat the calculation based on the stored data we may get a different result. We have moved from the ideal computer science world (where I usually dwell) into an example where the ideal situation of (1) and (2) does not hold. In this third case my guess is that physics, and physical entropy, play a part. We lose (or possibly get incorrect) digital information due to faulty components or storage, and this has an impact on the Shannon entropy for the bits we manage to read out or calculate. The connection to physical entropy here is one of the things I lack knowledge about but am curious about.

    4: Assume we store the result of the calculation from (1) in digital form on an ideal system (limitless lifetime) using lossy compression****. This means that at a later stage we cannot repeat the exact calculation or expect an identical outcome, since part of the information is lost and cannot be recovered by the digital system. In this case we are still in the ideal world of computer science, where the predictions and outcomes are determined by computer science theorems. Even if there is loss of information, physics is still abstracted away and physical entropy plays no part. (A small sketch at the end of this post illustrates this.)

    Note here the similarities between (3) and (4). A computer scientist can analyse the information entropy change and the loss of information due to a (bad) choice of compression in (4). The loss of information in (3) due to degrading physical components seems to me to be connected to physical entropy. Does this make sense? If so:

    On 3/4/2022 at 11:08 PM, joigus said:

    It's very tempting to me to start talking about control parameters and how they really determine what information is available to anyone trying to describe a system, but it would make it a discussion to heavily imbued with a purely-physics outlook.

    It would be interesting to see where control parameters*** fit into "example 3 vs 4", since both have a similar outcome from an information perspective but only (3) seems related to physics.

     

     

    *) assuming a suitable definition exists
    **) Or forgotten, it's a long time since I (briefly) studied thermodynamics.
    ***) feel free to post extra references; this is probably outside my current knowledge.
    ****) This would be a bad choice of implementation in a real case; it is just used here to illustrate and compare reasons for loss of information. https://en.wikipedia.org/wiki/Lossy_compression
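    A minimal sketch of the difference between (1) and (4); the stored value below is made up purely for illustration:

    # (1) An exact result from an ideal calculation (the value is an illustrative placeholder).
    result = 2.4417350

    # (4) "Lossy compression": keep only two decimals; the discarded digits cannot be recovered,
    # so the exact calculation can no longer be repeated or verified from what was stored.
    stored = round(result, 2)

    print(stored == result)  # False

    No physics is involved here; the irrecoverable loss follows entirely from the chosen encoding, which is the point of (4).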

     

  11. On 3/13/2022 at 4:23 PM, joigus said:

    I would seriously would like this conversation to get back on its tracks.

    I agree. 

    On 3/13/2022 at 4:23 PM, joigus said:

    I think entropy can be defined at many different levels depending on the level of description that one is trying to achieve. In that sense, I think it would be useful to talk about control parameters, which I think say it all about what level of description one is trying to achieve.

    Every system (whether a computer, a gas, or a coding machine) would have a set of states that we can control, and a set of microstates, that have been programmed either by us or by Nature, that we can't see, control, etc. It's in that sense that the concept of entropy, be it Shannon's or Clausius/Boltzmann, etc. is relevant.

    I (as yet) lack knowledge about the role of control parameters, so I'll make a general reflection for possible further discussion. When calculating Shannon entropy in the context of computer science, the level of interest is usually logical. For instance, your calculations in the coin example did not need to bother with the physical coins and papers to get to a correct result per the definition of entropy. Time is not something that affects the solution*. Physical things, on the other hand, such as papers, computers and storage devices, do of course decay; even if it may take considerable time, the life span is finite. Does this make sense in the context of the levels of description above?

    If so, then we may say that the computer program cannot read out exact information about each physical parameter that will cause such a failure. Failure in this case means that, without external intervention (spare parts or similar), the program halts, returns incorrect data or similar, beyond what built-in fault tolerance is capable of handling. Does this relate to your following statement?

    On 3/13/2022 at 4:23 PM, joigus said:

    It's my intuition that in the case of a computer, the control parameters are the bits that can be read out, while the entropic degrees of freedom correspond to the bits that are being used by the program, but cannot be read out --thereby the entropy. But I'm not sure about this and I would like to know of other views on how to interpret this.

     

    Note: I've probed at a physical meaning of "cannot read out" in my answer above; an area I'm less familiar with, but your comments trigger my curiosity. There are other possible aspects; feel free to steer the discussion towards what interests you, @joigus.

     

     

    On 3/13/2022 at 4:23 PM, joigus said:

    The fact that Shannon entropy may decrease doesn't really bother me because, as I said before, a system that's not the whole universe can have its entropy decrease without any physical laws being violated.

    I will think about this for a while before answering; there might be interesting aspects to this from a practical point of view.  

     

     

    5 hours ago, SuperSlim said:

    What fun you must be having

    That seems like a valid conclusion; I would end my participation in this thread if it was not fun.

     

    *) Neglecting the progress of questions and answers; each step, once calculated, does not change.

  12. 5 hours ago, SuperSlim said:

    According to you a computer can be switched off and still be computing! 

    Your conclusion is, intentionally or unintentionally, incorrect.  

    5 hours ago, SuperSlim said:

    My guess is you probably think data and information are different things too.

    Your guess is incorrect. Data and information are different concepts and the differences are addressed differently depending on the context of discussion. Sorting out the details may be better suited for a separate thread.

    5 hours ago, SuperSlim said:

    You provide an example: the Swedish language without the extra marks. A change of encoding that makes almost no difference to the information content.

    A quick example: here are three different Swedish phrases with very different meanings*. Only the dots differ:

    får får får
    får far får
    far får får

    A fourth sentence with a completely different meaning:

    far far far

    Without the dots, the first three examples and the last example are indistinguishable, and that has an impact on the entropy (see the sketch at the end of this post).

    5 hours ago, SuperSlim said:

    So it has about the same entropy.

    The example addresses the initial general question about decreasing entropy. "About the same" is too vague to be interesting in this context.

     

    5 hours ago, SuperSlim said:

    completely dumbass

    I take that and similar entries as an illustration of entropy as defined by Shannon; the redundancy in the texts allows for this to be filtered out without affecting the discussion. And if the level of noise is too high from a specific sender it may be blocked or disconnected from the channel altogether.

     

    *) (approximate) translations of the four examples:
    1: do sheep give birth to sheep?
    2: does father get sheep?
    3: father gets sheep
    4: go father go
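    As a small sketch of the point above, assume (purely for illustration) that the four phrases are equally likely messages; then removing the dots collapses the message distribution:

    from collections import Counter
    from math import log2

    def entropy(probabilities):
        """Shannon entropy, in bits, of a discrete distribution."""
        return -sum(p * log2(p) for p in probabilities if p > 0)

    def strip_dots(s):
        """Remove the dots: å and ä become a (ö is not used in these phrases)."""
        return s.translate(str.maketrans("åä", "aa"))

    phrases = ["får får får", "får far får", "far får får", "far far far"]
    received = Counter(strip_dots(p) for p in phrases)

    print(entropy([1 / len(phrases)] * len(phrases)))              # 2.0 bits: four distinct messages
    print(entropy([n / len(phrases) for n in received.values()]))  # 0.0 bits: all collapse to "far far far"

    The per-character distribution also changes, but the collapse of four distinct messages into one is perhaps the clearest way to see the impact on the entropy.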

  13. 14 hours ago, SuperSlim said:

    The way to really consider the Shannon entropy is as a sender, a receiver, and a channel. It's about how to encode a set of messages "efficiently". I consider your example wouldn't change the coding efficiency much; not many words would be "surprising".

    An explicit definition of sender, receiver and channel is not required. A difference between 7-bit and 8-bit ASCII encoding can be seen directly in the mathematical definition of Shannon entropy (written out at the end of this post).

     

    4 hours ago, SuperSlim said:

    What I meant was no reader of Swedish would find the missing marks surprising, because they expect to see them. So they would understand written Swedish with or without the marks; it's like how you can ndrstnd nglsh wtht vwls n t.

    The example I provided is not about what a reader may or may not understand or find surprising (that is subjective); it's about mathematical probabilities due to the changed number of available symbols. In Swedish it is trivial to find a counterexample to your claim. Also note that you are using a different encoding than the one I defined, so your comparison does not fully apply.
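    For clarity, the definition I have in mind is the standard one; for a source emitting symbols x with probabilities p(x):

    H(X) = -\sum_{x} p(x) \log_2 p(x)

    Reducing the set of available symbols (for instance merging å, ä and a) changes the probabilities p(x) and therefore H, regardless of how sender, receiver and channel are modelled.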

  14. 15 minutes ago, ScienceNostalgia101 said:

    what factors determine whether they cushion your fall or act more lie a floor?

    Some factors, with an example cause for each in parentheses:

    -Thickness of the layer of snow (due to amount of snow fallen or wind)
    -Density/water content; dry or wet snow (temperature)
    -Iced layer under a (too thin) layer of fresh snow (dry, cold snow falling on wet snow) 

    Related but maybe not within scope:
    -Flat or sloped surface (due to wind or terrain for instance) (This may not qualify as a "floor")
    -Iced surface on the fallen snow (wet snow followed by drop of temperature) (This may not qualify as "freshly fallen")
    -Snow compacted by for instance a skier (not qualifying as "freshly fallen")

  15. A note on the initial question, @studiot: how can Shannon entropy decrease?

    It's been a while since I encountered this, so I did not think of it until now; it's a practical case where Shannon entropy decreases, as far as I can tell.
    The Swedish alphabet has 29 letters*: the English letters a-z plus "å", "ä" and "ö". When using terminal software way back in the days, the character encoding was mostly 7-bit ASCII, which lacks the Swedish characters åäö. Sometimes the solution** was to simply 'remove the dots' so that "å", "ä", "ö" became "a", "a", "o". This results in fewer symbols and an increased probability of "a" and "o", and hence a decreased Shannon entropy for the text entered into the program compared to the unchanged Swedish original.

    Note: I have not (yet) provided a formal mathematical proof, so there's room for error in my example; a rough numerical sketch is included at the end of this post.
     

    *) I’m assuming case insensitivity in this discussion
    **) Another solution was to use the characters "{", "}" and "|" from 7-bit ASCII as replacements for the Swedish characters.
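    A rough numerical sketch of the example; the sample text simply reuses the sheep/father phrases from another post in this thread, so the exact numbers are only illustrative:

    from collections import Counter
    from math import log2

    def shannon_entropy(text):
        """Shannon entropy, in bits per character, of the character distribution in text."""
        counts = Counter(text)
        n = len(text)
        return -sum((c / n) * log2(c / n) for c in counts.values())

    def remove_dots(text):
        """The old 7-bit terminal workaround: å and ä become a, ö becomes o."""
        return text.translate(str.maketrans("åäö", "aao"))

    swedish = "får får får, far får får, får far får, far far far"
    print(shannon_entropy(swedish))               # about 2.44 bits per character
    print(shannon_entropy(remove_dots(swedish)))  # about 2.21 bits per character: fewer symbols, lower entropy

    Merging symbols in this way can never increase the per-character entropy, since the converted text is a deterministic function of the original.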

  16. 15 hours ago, SuperSlim said:

    Because . . . you said I is some information and information is always encoded.

    Ok. I had hoped for something more helpful to answer the question I asked.

    15 hours ago, SuperSlim said:

    Seriously, if you handed in an assignment that said "I is information" then didn't specify what kind, what physical units, what the encoding is, a professor would probably not give you a passing grade.

    I tried to sort out whether I disagreed with @studiot or just misunderstood some point of view; I did not know this was an exam that required such a level of rigor. "List" was just an example that studiot posted; we could use any structure. Anyway, since you do not like the generalisations I tried, here is a practical example instead, based on studiot's bookcase.

    1: Studiot says: “Can you sort the titles in my bookcase in alphabetical order and hand me the list of titles?”
    me: “Yes”  
    2: Studiot:  “can you group the titles in my bookcase by color?”
    me: "yes, under the assumption that I may use personal preferences to decide where to draw the lines between different colors"
    3: Studiot: “can you sort the titles in my bookcase in the order I bought them?” Me: “No, I need additional information*”

    I was curious about the differences between 1, 2 and 3 above, and if and how they apply to the initial coin example. One of the aspects that got me curious about the coin example was that it seemed open to interpretation whether it is most similar to 1, 2 or 3. I wanted to sort out whether that was due to my lack of knowledge, misunderstandings or something else.

     

    15 hours ago, SuperSlim said:

    however, getting those marks in that exam means you need to understand it in a formal way; you need to be able to trot out those equations.

    Thanks for the info but I already know how exams work.  

     

    *) Assuming, for this example, that the date of purchase is not stored in Studiot's bookcase.

  17. 2 hours ago, SuperSlim said:

    Encoding is only important if you want to encode some . . . information!

    I did not want to encode information; I tried to generalise @studiot's example so we could compare our points of view. You stated that I should define the encoding:

    15 hours ago, SuperSlim said:

    If that's what you want I to be, you should also define how the information is encoded.

    My question is: why?

     

  18.  

    12 hours ago, SuperSlim said:

    If that's what you want I to be, you should also define how the information is encoded.

    Why is encoding important in this part of the discussion? In the bookcase example, does it matter how the titles are encoded?

    And in the coin example @studiot used "0101" (0=no, 1=yes); Shannon entropy is, as far as I know, unaffected if we use N=no, Y=yes instead, resulting in "NYNY" (a small sketch at the end of this post illustrates this).

     

    12 hours ago, SuperSlim said:

    'sigh'.

    ?
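    A minimal sketch of that last point (assuming we treat the answer sequence as a string of symbols): the entropy depends only on the symbol probabilities, not on the labels chosen.

    from math import log2

    def entropy(s):
        """Shannon entropy, in bits per symbol, of the empirical distribution of s."""
        return -sum((s.count(c) / len(s)) * log2(s.count(c) / len(s)) for c in set(s))

    print(entropy("0101"), entropy("NYNY"))  # both 1.0: relabelling 0 -> N, 1 -> Y leaves H unchanged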

  19. [Photo: snow-covered trees on a skiing trip]

    I’m currently on a skiing trip and these natural sculptures of snow-covered trees make me think of pareidolia* and what triggers the phenomenon. How come I could easily spot a “yeti” but no “elephants”? Is pareidolia affected by the context? I do not know but my curiosity is triggered…

     

    By the way, here in the middle, is where I see the "yeti" :-)

    Spoiler: [the same photo, marking where the "yeti" appears]

     

    *) Pareidolia is the tendency for perception to impose a meaningful interpretation on a nebulous stimulus, usually visual, so that one sees an object, pattern, or meaning where there is none. https://en.wikipedia.org/wiki/Pareidolia 

  20. 21 hours ago, studiot said:

    I don't see that subsequent decrypting or expansion is relevant.

    Here is an example of what I mean.

    A list of books on the top shelf of my bookcase is information.

    However such as list may or may not actually be drawn up or exist.

    Yet the information exists and is still available and could be obtained by looking along the shelf.

    Even if the list is drawn up, say by taking photographs, it may never actually be read.

    So neither the drawing up of the information nor the subsequent processing (reading) is necessary to the existence of the information itself.

    The information exists, because the shelf of books exists.

    Thanks for your thoughts and comments. Maybe our example can be expressed mathematically? At least just to find where we share an opinion or disagree?

    Let I be information.
    J is some information that can be deduced from I.
    Some function f exists that produces J given I as input: f: I → J.
    Information K is required to create the function f. This could be a formula, an algorithm or some optimisation. 
    In the bookcase example, I is the bookshelf and J is the list of books. The function is intuitively easy: we create the list by looking at the books.

    Does this make sense? If so, in the bookcase example we have:

    J ⊆ I and K ⊆ I. No additional information is needed to create the list; we look at the books (maybe K = empty is an equally valid way to get to the same result).

    In any case the Shannon entropy does not change for J or I as far as I can tell; the entropy is calculated for a fixed set of books and/or a list.

    My curiosity is about non-trivial f and K. For instance, in the initial example with the coin: do J ⊆ I and K ⊆ I hold in that case? And if there is a difference, how does the entropy of K relate to the entropies of I and J, if such a relation exists? I consider the function that produces the questions in the initial coin example to be "non-trivial" in this context.
    Is the thermodynamic analogy open vs closed systems?

     


     

    9 hours ago, studiot said:

    Anyons

    Thanks! Something new to add to my reading list.

  21. 13 minutes ago, Jalopy said:

    A feature that artificial intelligence has not been able to create in robots. 

    Backpropagation* is used in machine learning, for instance when training neural networks. Are you claiming that no such techniques have been applied to robotics?

    19 minutes ago, Jalopy said:

    To teach a robot how to learn new data, it's a matter of programming the robot to process data in its envioronment according to a pyramid of value. 

    For example, factor A = value 1 factor B = 2 points, factor C = 3 points

    The proposed idea does not make much sense, sorry.

     

    *) See for instance https://en.wikipedia.org/wiki/Backpropagation


     

     

  22. 20 hours ago, joigus said:

    Even if there is no evolution, any setting that involves probability requires an ensemble: An infinite set of instantiations of the system that justifies the use of probability. I wonder if that's what you mean by "globally."

    (Bold by me.) Thanks for your input! Just to be sure: in this context, is an "ensemble" the concept defined in mathematical physics*?
    I may need to clarify what I tried to communicate by using "globally". I'll try an analogy, since I'm not fluent in the correct physics/mathematics vocabulary. Assume I looked at Studiot's example from an economic point of view. I could choose to count the costs for a computer storing the information about the grid, coin and questions. Or I could extend the scope and include the costs for the developer who interpreted the task of locating the coin, designed the questions and did the programming. This extension of the scope is what I meant by "globally". (Note: this line of thought is not important; it is just a curious observation and not necessary to investigate further in case my description is confusing.)

    On 2/26/2022 at 5:55 PM, joigus said:

    I suppose --correct me if I'm wrong-- they will be more likely to run along some paths than others. And my intuition is that the sequence of configurations would have a strong dependence on initial conditions too, something that's not a characteristic of thermal-physics.

    As far as I can tell you are correct. If we assumed the opposite, "all paths of execution of a program are equally probable", it would be easy to contradict with a simple example program.

     

    5 hours ago, studiot said:

    I'm not sure about the role of storing the information or what difference it makes.

    Surely the situation simply depends upon whether such information is available or not in the system, rather than whether it is strored or retrieved somewhere ?

    I am not sure. For instance when decrypting an encrypted message or expanding some compressed file, does that affect the situation? 

     

    *) Reference: https://en.wikipedia.org/wiki/Ensemble_(mathematical_physics). Side note: "ensemble" occurs in machine learning, an area more familiar to me than thermodynamics.  

  23. On 2/26/2022 at 5:55 PM, joigus said:

    I think interesting features are at play in thermodynamic systems that are not necessarily that relevant when it comes to computers.

    I'm used to analyzing digital processing of information, but I've not thought about the computations that take place as part of designing an algorithm and/or programming a computer. The developer is interpreting and translating information into programs; is that a physical process affecting entropy? In studiot's example the questions must come from somewhere, and some computation is required to deduce the next question. This may not be part of the example, but I find it interesting: should one isolate the information about the grid and coin, or try to look "globally" and include the information, computations etc. that may be required to construct the correct questions?

    A concrete example: the questions about columns can stop once the coin is located. This is a reasonable algorithm but not the only possible one; a computer (or human) could continue to ask about every column. It is less efficient but would still locate the coin (a small sketch at the end of this post compares the two strategies). Do that decision, and the computations behind it, affect entropy? I don't have enough knowledge about physics to have an opinion.

     

    On 2/25/2022 at 1:56 PM, studiot said:

    Secondly since all the information is contained in the questions and answers, the configuration of 'the board' is irrelevant.
    The cells may be arranged in a line or a ring or scattered.

    I agree, at least to some extent (it may depend on how I look at "all the information"). By examination we can extract/deduce a lot of information from the questions and answers, but it is limited. The following points may be rather obvious, but I still find them interesting.

    1: Given the first question "Is it in the first column?" and an assumption that the question is based on known facts, we may say that the questioner had to know about the structure of the cells. Otherwise the initial question(s) would have been formulated to deduce the organisation of the cells. For instance, the first question could be "Are the cells organised in a grid?" or "Can I assign integers 1,2,...,n to the cells and ask if they contain a coin?"

    2: There is no reason to continue to ask about the columns once the coin is located in one of the columns. This means that the number of columns can’t be deduced from the questions.

    The reasoning in 1 & 2 is as far as I can tell trivial for a human being but not trivial to program.

    (I'm travelling at the moment; may not be able to respond promptly)
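    A small sketch of the two questioning strategies for the column phase (the grid size and coin position are arbitrary choices for illustration):

    def locate_column(coin_col, n_cols, stop_early):
        """Ask 'is the coin in column i?' in order; return the found column and the number of questions."""
        questions = 0
        found = None
        for col in range(n_cols):
            questions += 1
            if col == coin_col:
                found = col
                if stop_early:
                    break
        return found, questions

    # Both strategies locate the coin; only the number of questions differs.
    print(locate_column(coin_col=2, n_cols=8, stop_early=True))   # (2, 3)
    print(locate_column(coin_col=2, n_cols=8, stop_early=False))  # (2, 8)

    Both strategies work; they differ only in the number of questions asked, which is part of what made me curious about whether the computation behind the stopping decision matters.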
