
Ghideon

Senior Members · Posts: 2264 · Days Won: 20
Everything posted by Ghideon

  1. A complete answer would be very long and depends on local jurisdictions. In general: government procurement or public procurement is the procurement of goods, services and works on behalf of a public authority, such as a government agency. To prevent fraud, waste, corruption, or local protectionism, the laws of most countries regulate government procurement to some extent (paraphrased from Wikipedia). Local policy over here: "Public procurement must be efficient and legally certain and make use of market competition. It must also promote innovative solutions and take environmental and social considerations into account." (https://www.government.se/government-policy/public-procurement/) I note that this is for some reason posted in Engineering, so I'll avoid starting a political discussion.
  2. Edit: I x-posted with @studiot; I may have to edit my response after reading the post above. In this post I'll separate my thoughts into different sections to clarify.

     Binary encoding of the location vs the questions
     Studiot's questions lead to the string 0101. The same string could also be found by square-by-square questions if the squares are numbered 0-15: "Is the coin in square 1,A?", "Is the coin in square 2,A?" and so on. The sixth square has number 5 decimal = 0101 binary. This way of doing it does not encode the yes/no answers as 1/0, as Studiot initially required, but it happens to result in the same string. This was one of the things that confused me initially. Illustration:

     Zero entropy
     @joigus's calculations result in entropy = 0. We also initially have the information that the coin is in the grid; there is no option "not in the grid". Confirming that the coin is in the grid does not add information as far as I can tell, so entropy = 0. I think we could claim that entropy = 0 for any question, since no question can change the information; the coin is in the grid and hence it will be found. Note that in this case we cannot answer where the coin is from the end result alone; zero information entropy does not allow for storage of a grid identifier.

     Different paths and questions resulting in 0101
     To arrive at the string 0101 while using binary search, I think of something like this*:
     1. Is the coin in the lower half of the grid? No (0)
     2. Is the coin in the top left quadrant of the grid? Yes (1)
     3. Is the coin in the first row of the top left quadrant? No (0)
     4. Is the coin in the lower right corner of the top left quadrant? Yes (1)
     Illustration, red entries correspond to "no" = 0 and green means "yes" = 1.
     The resulting string 0101 translates into a grid position, but it has a different meaning than the 0101 that results from Studiot's initial questions. Just as in Studiot's case, we need some additional information to be able to interpret 0101 as a grid square.
     As far as I can tell, the questions in this binary search example follow the same pattern of decreasing entropy if we apply joigus's calculations, but the numbers will be different since a different number of options may be rejected.
     *) Note that I deliberately constructed the questions so that they result in the correct yes/no sequence. The approach, though, is general and could find the coin in any position of the grid.
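The step-by-step decrease in entropy can be sketched numerically. This is a minimal sketch, assuming a uniform distribution over the 16 squares and questions that halve the candidate set each time, so each yes/no answer removes exactly one bit:

```python
import math

def shannon_entropy(n_candidates):
    # Entropy in bits of a uniform distribution over n equally likely squares.
    if n_candidates <= 1:
        return 0.0
    return math.log2(n_candidates)

# Start: coin uniformly distributed over a 4 x 4 grid = 16 squares.
# Each halving question leaves half the candidates.
candidates = 16
entropies = []
while candidates >= 1:
    entropies.append(shannon_entropy(candidates))
    candidates //= 2

print(entropies)  # [4.0, 3.0, 2.0, 1.0, 0.0]
```

Four answers take the entropy from 4 bits down to 0, matching the four-character string 0101; questions that reject a different number of options would give different intermediate values.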
  3. Yes! +1 I may have misinterpreted the translation from questions/answers to a binary string. To clarify: if the coin is in square 1A, what is the binary string for that position? My interpretation (that may be wrong!) is that the four answers in your initial example (square 2B), "no", "yes", "no", "yes", are translated to / encoded as "0" "1" "0" "1". This interpretation means that a coin in square 1A is found by: Is it in the first column? - Yes. Is it in the first row? - Yes. And the resulting string is 11. As you see, I (try to) encode the answers to the questions and not necessarily the number of the square. The nuances of your example make this discussion more interesting in my opinion; "information" and the entropy of that information may possibly have more than one interpretation. (Note 1: I may be overthinking this; if so I blame my day job / current profession.) (Note 2: I have some other ideas but I'll post one line of thought at a time; otherwise there may be too much different stuff in each post.)
  4. I've read the example again and noted something I find interesting. Initially it is known, per the definition of the example, that there is a 4 x 4 square board. In the final string 0101 this information is not present as far as I can tell; it could be column 2 & row 2 of a board with any number of squares. This is not an error, I'm just curiously noting that the entropy of the string '0101' itself seems to differ from the result one gets from calculating the entropy step by step knowing the size of the board. A related note: @joigus, as far as I can tell, correctly determines that once the coin is found there is no option left; the search for the coin has only one outcome, "coin found", and hence the entropy is zero. The string 0101, though, contains more information, because it tells not only that the search terminated*, it also tells where the coin is. Comments and corrections are welcome. *) '1' occurs twice in the string.
  5. I agree that the result of the computation is known. How do you retrieve the known result again later; does that require (some small) cost? Note: I'm not questioning your statement, just curious to understand it correctly for further reading. Edit: Here are two papers that may be of interest for those following this thread: 1: Critical Remarks on Landauer's principle of erasure-dissipation, https://arxiv.org/pdf/1412.2166.pdf — this paper discusses examples that, as far as I can tell, are related to @studiot's example of RAM. 2: Thermodynamic and Logical Reversibilities Revisited, https://arxiv.org/pdf/1311.1886.pdf — the second paper investigates erasure of memory. (I have not yet read both papers in detail.)
  6. Initial note: I do not have enough knowledge yet to provide an answer, but hopefully enough to get a discussion going. As far as I know, entropy is a property of a random variable's distribution; a Shannon entropy measure is typically a snapshot value at a specific point in time, it is not dynamic. It is quite possible to create time-variant entropy by having a distribution that evolves with time. In this case entropy can increase or decrease depending on how the distribution is parameterised by time (t). I would have to read some more before providing an opinion on decreasing Shannon entropy and its connection to physical devices and thermodynamics. An example of what I mean: (source: https://en.wikipedia.org/wiki/Entropy_(information_theory)) In the example above some actions or computations are required to find the low-entropy formula. How is thermodynamic entropy affected by that? This is a part I am not sure about yet. This Wikipedia page and its references may be a starting point for adding more aspects to the discussion: https://en.wikipedia.org/wiki/Entropy_in_thermodynamics_and_information_theory
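The idea of a time-parameterised distribution can be made concrete. A minimal sketch, using a hypothetical two-outcome distribution whose bias p(t) drifts from fair towards certainty, so the snapshot Shannon entropy decreases as t grows:

```python
import math

def binary_entropy(p):
    # Shannon entropy in bits of a two-outcome (Bernoulli) distribution.
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome carries no entropy
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Hypothetical time evolution: p(t) = 0.5 + 0.1 * t drifts towards 1.0,
# so each later snapshot has lower entropy than the one before.
for t in range(5):
    p = 0.5 + 0.1 * t
    print(t, round(binary_entropy(p), 3))
```

Each printed value is a snapshot; the decrease comes entirely from how the distribution is parameterised by t, not from the entropy measure itself being dynamic.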
  7. I note that at the time of writing this I am one of a rather limited number of participants in your threads, hence the above applies to me. Thanks for the feedback, I'll focus on other topics.
  8. Are these the same aliens as in your other thread?
  9. It's a bit hard to follow which parts are given as input and which are part of the proof. For instance, is (6) a proposition or a conclusion of yours? Can you post the exercise clearly separated from the solution? Note: In real life it is true that a cat is an animal; in an exercise, though, it could be part of the task to spot that the given information is insufficient, so that it is not possible to prove who killed the cat. Hence some clarity is required to allow for guidance in this case.
  10. I do not follow the physics of your argument, can you please explain? As far as I know gamma* is calculated using an invariant speed of light c. It looks like you have an observer-dependent speed of light; how does that affect gamma? *) Reference: https://en.wikipedia.org/wiki/Lorentz_factor
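For concreteness, the standard Lorentz factor uses the invariant c; a minimal sketch of the textbook formula gamma = 1 / sqrt(1 - v^2 / c^2) (the function name is mine):

```python
import math

C = 299_792_458.0  # speed of light in m/s, invariant for all observers

def lorentz_factor(v):
    # gamma = 1 / sqrt(1 - v^2 / c^2); only defined for |v| < c.
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

print(lorentz_factor(0.0))      # 1.0
print(lorentz_factor(0.6 * C))  # approximately 1.25
```

If c were allowed to vary per observer, it is not clear which value to put in the denominator, which is the crux of the question above.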
  11. Some general notes on how I may approach these kinds of problems. Disclaimer: This is a quick outline of my personal approach, your mileage may vary.
      - My first language is not English; check that I understand what is written. Otherwise clarify first.
      - Is there an "obvious" solution? If so, keep that solution for verification/falsification later.
      - Are the statements constructed to "trick" the reader? (Example: "All apples are blue" is false in reality but an ok premise in an exercise. "Isn't killed by" is probably not a common way to define food (?))
      - Is there a resolution that "should" be true? (Example: "Apple is a fruit" is likely true in a well-constructed problem. "Apples are not fruits" is less likely. (Double-)check solutions that do not make sense in real life.)
      - Is there extra information that is not needed?
      Then move on to translate the sentences into logic statements and apply rules. Side note: in the example given I would add that although John likes peanuts we cannot say for sure that he will survive without proper treatment; we can't decide whether he is allergic or not. In real life (in my day job for instance) spotting these kinds of things may be just as important as the question stated.
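The "translate into logic statements and apply rules" step can be mechanised for small problems. A minimal sketch, with hypothetical propositions (the apple/fruit premises are made up for illustration): brute-force a truth table and check whether the conclusion holds in every assignment where all premises hold.

```python
from itertools import product

def entails(premises, conclusion, n_vars):
    # Premises entail the conclusion iff the conclusion is true in every
    # truth assignment that makes all premises true.
    for values in product([False, True], repeat=n_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False
    return True

# Variables: a = "it is an apple", f = "it is a fruit".
premises = [lambda a, f: (not a) or f,  # "if it is an apple, it is a fruit"
            lambda a, f: a]             # "it is an apple"

print(entails(premises, lambda a, f: f, 2))                         # True
print(entails([lambda a, f: (not a) or f], lambda a, f: f, 2))      # False
```

The second call illustrates the "insufficient information" case above: with only the conditional premise, "it is a fruit" cannot be proved.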
  12. The formula for each cell is given in the picture (above the table); the numerical values you need to insert into the multiplication are found in the graph. The document you linked to contains general explanations. Can you please specify a little more where you're struggling? This specific example, joint probabilities in general, something else?
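Since the original table and graph are not reproduced here, a generic sketch with made-up numbers of the usual product rule for filling one cell of a joint probability table, P(A and B) = P(A) * P(B | A):

```python
# Hypothetical values, purely for illustration of the product rule.
p_rain = 0.3                  # P(A): assumed marginal probability of rain
p_umbrella_given_rain = 0.8   # P(B|A): assumed conditional probability

# One cell of the joint table: P(rain and umbrella).
p_joint = p_rain * p_umbrella_given_rain
print(p_joint)  # approximately 0.24
```

In the exercise, the marginals and conditionals playing the roles of these two numbers are the ones read off the graph.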
  13. A hint that may help when searching: Does this imply that the second question is about another specific inference method? Since the first one explicitly asks for Mamdani, which other fuzzy inference method is usually mentioned as a comparison? Nope. Only hints in this section of the forum.
  14. I did not know of these physicists, can you provide a reference? (A quick search gave me nothing of value.)
  15. There are helpful comments and a solution applicable to this in your other thread; some feedback on why that did or did not work would be appreciated. Sorry, not going to visit that link.
  16. The format of the post is tricky to follow, but there is a loop for each neighbour, hence each neighbour node will be visited. Note the recursive call to dfs(); are you familiar with the recursive construct and how execution continues? You need to take into account the recursive call stack; what happens after print(1)? Isn't the next loop iteration a call to dfs? dfs(visited,graph,12)
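The dfs from the post is not fully shown, so here is a minimal sketch of the usual recursive pattern (the graph is hypothetical). The key point is that when a recursive call returns, execution resumes inside the caller's neighbour loop:

```python
def dfs(visited, graph, node):
    # Visit a node, then recurse into each unvisited neighbour.
    # When a recursive call returns, execution continues in THIS loop
    # with the next neighbour; the call stack remembers where we were.
    if node not in visited:
        print(node)
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

# Hypothetical graph for illustration.
graph = {1: [2, 3], 2: [4], 3: [], 4: []}
dfs(set(), graph, 1)  # prints 1, 2, 4, 3
```

After print(1), the loop calls dfs for neighbour 2, which goes all the way down to 4 before the stack unwinds and node 3 gets its turn.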
  17. Quick question; is "anti-quantum" a concept in mainstream physics? I got curious since it was mentioned in another (now closed) thread and I can't remember seeing it in my brief studies of the topic and I'm unable to find references to "anti-quantum" in physics. My guess is that the term was invented as part of a non-mainstream speculation, hopefully someone could confirm or link to some material. Side note: I'm aware of "anti-quantum" in the context of computer science and cryptography. The term refers to "quantum-proof", "quantum-safe" or "quantum-resistant"; algorithms that are thought to be secure against a cryptanalytic attack by a sufficiently powerful quantum computer running Shor's algorithm.
  18. Ok. My questions regarding your anti-quantum will not get any response then. Best of luck.
  19. This just raises further questions. Can you add sufficient detail on how anti-quantum relates to mainstream physics? What are the properties of anti-quantum?
  20. Can you define anti-quantum? (I'm vaguely familiar with the term from cryptography but that is not applicable to this discussion.)
  21. @shivajikobardan just a word of caution, bipolar may mean different things in different contexts and not necessarily be identical to "binary": (bold by me)
  22. Do you handle similarities in spelling and structure in the grouping and comparison? Without further research I think the Swedish words would be 'kopp', 'mugg', 'bägare'. None of the words are identical in the two languages, but they share some kind of similarity? (Side note: I'll curiously follow this; it may be related to some machine learning stuff I've seen for other purposes: word embedding, the representation of words for text analysis, typically in the form of a real-valued vector that encodes the meaning of the word such that words that are closer in the vector space are expected to be similar in meaning. It seems related to the above idea and is used, AFAIK, to model similarities within a language rather than between different languages. I'll need to read some more to provide comments about translation / language similarities & word embedding.)
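The "closer in the vector space" idea can be illustrated with cosine similarity. A minimal sketch with made-up 3-dimensional vectors; real embeddings have hundreds of dimensions and are learned from text, so these numbers and word choices are purely illustrative:

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Made-up "embeddings" for illustration only.
embeddings = {
    "cup":    [0.9, 0.1, 0.2],
    "mug":    [0.8, 0.2, 0.3],
    "planet": [0.1, 0.9, 0.7],
}

print(cosine_similarity(embeddings["cup"], embeddings["mug"]))     # close to 1
print(cosine_similarity(embeddings["cup"], embeddings["planet"]))  # much lower
```

Whether the same trick carries over between languages (e.g. 'cup' vs 'kopp') would depend on the embeddings being trained in, or mapped into, a shared space.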
  23. I noted today that the report function may have changed. Previously a post was locked for reporting once a report was made; now it seems possible for a member to report a post multiple times. I don't know if this is an intentional change or if reports are not working, resulting in the possibility to retry (there is no error message, though). Or something else; hard to tell without insight.