Tristan L
1. Since several of you have told me you can more easily read my arguments about entropy if I don't use Þorn ('Þ'/'þ') and Ðat ('Ð'/'ð'), I'll go back to using "th" in my physics arguments for now. As this is a physics forum, I've transferred the discussion about letter use to a thread in the Other Sciences forum (I've found no forum specifically for speechlore). Only in the following paragraph do I use Ðat and Þorn, but you can skip it if you wanna get to the intrysting part, aka the physics.

Again, swansont locked my entropy þread, saying ðat my statement ðat I wield ðe English alphabet be a bad-faiþ argument in his estimation. Well, his estimation is clearly wrong, so his repeated closure of my þread is not justified. Ðerefore, I hereby reopen it yet again. Furðermore, I demand ðat he take back ðe penalty point he's unrightly given me for supposedly making bad-faiþ arguments. Anoðer þing: Nobody gets to tell me which letters to use, but I do listen to good-faiþ arguments, and several of you have given me such arguments for switching back to using "th", pointing out ðat ðe currently not-yet-widely-used letters distract ðem from ðe content of my arguments. I've answered your points in ðe aforementioned þread in ðe Oðer Sciences forum. Ðere as well as here, I've taken your advice to hold back from wielding Þorn and Ðat for now. Wið ðat out of ðe way, let's delve into ðe physics and maþematics of entropy! 😀

❕ Please mark that I use "partition of S" in the set-theoretic sense, that is, to mean a set of subsets of S which are non-empty, pairwise disjoint, and together cover S. Thus, "partition of S" refers to a subset of the power set of S which fulfills certain conditions. For instance, {{1, 2, 3}, {4, 5}, {6}} is called "a partition of {1, 2, 3, 4, 5, 6}". Choosing which subsets of a state space count as macrostates amounts to choosing a partition of the state space.

The state space (e.g. phase space in Classical Mechanics) of a system is the set of all possible states (called "microstates") of the system. We assume that to each microstate, z, there belongs a probability, which we call "pr(z)". Of course, this is a highly non-trivial assumption IMHO, as in truth, we have only probabilities of going from one state to another, but let's suppose it be meaningful to say "the probability of a microstate". Then for any subset M of state space, "the probability that the system be in M" and "Pr(M)" refer to the probability that the system have a microstate lying in M, which equals the sum of the probabilities of the members of M. For any subsets M, N of state space, the conditional probability of N given M is denoted by "Pr(N | M)".

For each subset M of state space, "the entropy of M" and "S(M)" are wielded to mean

\(S(M) = -\sum_{z \in M} \Pr(\{z\} \mid M) \, \log_2 \Pr(\{z\} \mid M)\),

that is, (how much more information we'd have about the system if we knew which microstate in M it has than if we knew only that its microstate is in M) if the microstate of the system lie in M. If all microstates be equally likely, the formula for the entropy of a subset M of state space reduces to \(S(M) = -\log_2(1/\#M) = \log_2 \#M\).

Okay, so what's the entropy of the system? Well, that depends on which subsets of the state space of the system count as macrostates. Given a partition Ma (e.g. {{1}, {2, 3, 5}, {4, 6}}) of state space (e.g. {1, 2, 3, 4, 5, 6}), we use "entropy of the system with regard to Ma" to mean the entropy of the macrostate in which the system is, that is, the entropy of the element (e.g. {2, 3, 5}) of Ma which holds the system's microstate (e.g. 2).
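To make these definitions concrete, here is a minimal Python sketch (my own illustration; the helper names entropy_of_macrostate and entropy_wrt_partition are made up for this post, and the probability assignment pr discussed above is assumed to be given as a dict):

from math import log2

def entropy_of_macrostate(M, pr):
    # S(M) = -sum over z in M of Pr({z}|M) * log2 Pr({z}|M),
    # where Pr({z}|M) = pr(z) / Pr(M) and Pr(M) = sum of pr(z) for z in M.
    PM = sum(pr[z] for z in M)
    return -sum((pr[z] / PM) * log2(pr[z] / PM) for z in M)

def entropy_wrt_partition(z, Ma, pr):
    # The entropy of the system with regard to the partition Ma is the
    # entropy of the macrostate (the element of Ma) holding the microstate z.
    M = next(M for M in Ma if z in M)
    return entropy_of_macrostate(M, pr)

# Six equally likely microstates, partitioned by primality:
pr = {z: 1/6 for z in range(1, 7)}
Ma = [{1}, {2, 3, 5}, {4, 6}]
print(entropy_wrt_partition(2, Ma, pr))  # ≈ 1.585 (= log2 3, microstate in {2, 3, 5})
print(entropy_wrt_partition(1, Ma, pr))  # 0.0 (= log2 1, microstate in {1})

Feeding the same microstate to a different partition, say [{2, 4}, {1, 3, 5, 6}], generally gives a different entropy, which is exactly the relativity being argued for here.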
The key point which I made more than three-and-a-half years ago in my thread Will entropy be low much of the time? is that, as we've just seen, the entropy of the system depends on how we divvy up state space into macrostates, i.e. on which subsets of state space we count as macrostates. I believe Sabine Hossenfelder made the same key point in her video I don't believe the 2nd law of thermodynamics about half a year ago. Both she and I also point out that currently, the Universe's entropy with respect to a certain partition (call it "Ma_human") of state space is low, so living beings who split state space up into the members of Ma_human, including humans, are currently thriving. As I understand it, the 2nd Law says that for each partition Ma of state space, the entropy with respect to Ma is high most of the time. Thus, it's very likely that the Universe's entropy with regard to Ma_human will become so high, and stay that way for such a very, very long time, that humans, coleoids asf. can't live during that while. However, at each time, there's a partition of phase space with regard to which the entropy of the system is low. So in a googol years, the Universe's entropy w.r.t. Ma_human may be high, but it will be low w.r.t. some other partition, say, Ma_Chubachaba. So in a googol years, living beings (call them "Chubachabas") can evolve for whom the members of Ma_Chubachaba are the relevant macrostates of the Universe. Thus, the Universe is likely to always harbor life of some kind. What are your thoughts on this matter?
2. This thread has been spawned by a discussion I sparked in my thread on system entropy being relative by my use of the English letters Þorn (big: 'Þ', small: 'þ'), which stands for the voiceless dental fricative, and Ðat (uppercase: 'Ð', lowercase: 'ð'), which represents the voiced dental fricative. I meant to talk about physics there, not speech, so I've taken the linguistic discussion thence hither. You made some intrysting points in that other thread, so I'll respond to them here.

Markus Hanke wrote: I disagree. While I know that Icelandic distinguishes these sounds in phonology and orthography, English effectively doesn't, so there's no point in this at all.

Firstly, English phonology does distinguish between /þ/ and /ð/. Or would it be okay to say: "I ðank you for ðinking about þe question of wheþer to put ðorns of þat fence."? Secondly, even if English didn't distinguish between /ð/ and /þ/, it definitely does distinguish between these two sounds on the one hand and /t/ followed by /h/ on the other. Not using 'th' for /þ/ or /ð/ makes it clear that "meathook", for instance, is spoken out like "meet-huk" rather than "me-ðuk" or "me-þuk". Thirdly, even if English didn't distinguish between /ð/ and /þ/ and the 't'-'h' issue weren't a problem, using a single letter for a sound is still more logical and efficient than using a two-letter combination. For example, replacing all instances of "th" which represent /þ/ or /ð/ by "þ" would still be more efficient.

Genady wrote: That Wikipedia article says the opposite; it lists Þorn and Ðat as English letters. It also says these letters, along with several others, have fallen out of use, and this is precisely what I'm trying to turn back; not because they're old, but because using them is logical.

In many ways, such as the case system and the letter system, older versions of English are better than the currently widespread one, so I'd like to bring the old features back. For instance, saying "Thou seest me", "I see thee", "Ye see me", and "I see you" is more precise than saying "You see me" and "I see you" for all of them. I often find it bothersome when I don't know whether someone uses "you" to refer to a single person or a group, so bringing back "thou" and company would definitely be useful. However, in other respects, older versions of English aren't as good as Modern English, and in these cases, I stick to the modern features. For example, Old English didn't have perfect tenses AFAIK (correct me if I be wrong) and so was less expressive in this respect. I want a speech to be as precise and expressive as possible. After these two criteria, I want it to be efficient and beautiful. These four criteria are what count for me, not age. It just so happens that old languages often score higher on them than modern ones do.

For instance, Old English had different suffixes for different kinds of doers: "-el" mostly for tools, "-þor" for tools and machines, and "-a" for people doing certain things, often if it's their job or habit. In Modern English, "-er" is used for everything, and the others are hardly or no longer productive or don't even exist anymore. Does "computer" mean a machine which computes, or a person who does? We have to learn by heart that it's the former. And does "driver" mean a device which drives, or a person who does? We have to learn by heart the latter. If we used "reckonþor" and "driva" instead, the meanings would be clear. Bryan A. J. Parry has written an article about "-el".
(This is another matter, but I have to talk about it since I've brought Anglish up: I'm not for language purism, as I think it hinders progress. I'm for speech freedom and seek to shield inborn words from being displaced by foreign ones, which often happens due to political, cultural or other forms of imperialism. I'm strongly against imperialism, be it political, cultural, or linguistic.)

Another English suffix which is no longer productive is "-ol", which, unlike "-ing", means specifically leaning towards doing a certain action, not just doing it. Someone who is thinkol doesn't have to be thinking right now, but rather is wont to think a lot. Speaking of "-ing" ... this suffix is used both for forming the gerund, a nameword meaning the deed itself, and the present participle, a how-word or nominalized how-word describing something which does the deed. This vexes me a lot in philosophy, where I often don't know whether someone is using "being" to mean that which is, or the deed or state to be. It would be better to keep "-ing" for the gerund alone and wield "-ende" (or "-onde") for the present participle. We'd then say "Every living beonde has being" instead of "Every living being has being". Also, does "I like helping animals" mean that I like to help animals, or that I like animals who help? Wielding "I like helping animals" for the former and "I like helpende animals" for the latter would clear up the confusion. Other speeches, such as Arabic and German, don't suffer from this problem; for instance, "كَائِن" and "seiend" (adjective) and "Seiendes" (adjective made into noun) refer to that which is, whereas "كَوْن" and "Sein" mean to be. But sadly, Arabic and German are deteriorating, too. Technology and culture have become more sophisticated, so why do we seem to see the opposite trend in speeches, ranging from Indo-European ones to Arabic?

Genady wrote: Right, and I didn't want to talk about it in the physics thread. I wanted to write about entropy, not language, but I used the letters Thorn and That in doing so, which sparked this discussion.

studiot wrote: Of course. In fact, English and German are West Germanic, while Old Norse is North Germanic. That was good 👍. Is it still the case today? If not, it should be again.

studiot wrote: Maybe because letters and language are two different things. You can write Old English with the letters of the currently widespread English alphabet, just as you can write Modern English in runes or Arabic in Latin (+ Þorn and Ðat) letters and Arabic numerals: "Haaðihi jumlaton 3arabiyyaton maktuubaton bi-7uruufin laatiiniyya4." ("هَٰذِهِ جُمْلَةٌ عَرَبِيَّةٌ مَكْتُوبَةٌ بِحُرُوفٍ لَاتِينِيَّةٍ.", that is, "This is an Arabic sentence written in Latin letters.") Likewise, some sites teaching Gothic use the Latin instead of the Gothic alphabet.

studiot wrote: As said, language and letters are separate matters. The speech I'm using the whole time is Modern English, sadly with all its shortcomings. It's just that I used letters currently not widely in use, which can be a little cumbersome while getting used to them, I admit.

joigus wrote: Of course. For instance, is "Wimshurst Machine" spoken out like "wim-shurst ma-sheen", or like "wims-hurst ma-sheen"?

joigus wrote: (my emphasis) the "th" sound in "rather" is very different from "th" sound in "with."

Since when? The "th" in "rather" and the "th" in "with" are both pronounced exactly like the "th" in "the": as the voiced dental fricative. So "with the shovel" is pronounced with a long /ð/.
exchemist wrote: The only other currently non-standard character I used is Ðat ('Ð'/'ð'), which is also used in Icelandish.

Markus Hanke wrote: Of course English spelling should be standardized to make communication easy. However, I believe the current standard should be replaced by a better one. We should get away from using letter strings, such as "th" for /þ/ and /ð/ and "sh" for /ʃ/, as this is 1. inefficient and 2. can lead to confusion, like with "Wimshurst Machine". Also, clearly different sounds, like /ð/ and /þ/, should be represented with different letters. In particular, I suggest that:

F. That ('Ð'/'ð') be wielded for /ð/, as in "ðere" instead of "there",
U. Thorn ('Þ'/'þ') be used for /þ/, as in "þeory" instead of "theory",
Þ. Eng ('Ŋ'/'ŋ') be used for /ŋ/, as in "Mt. Ŋauruhoe" instead of "Mt. Ngauruhoe" and "siŋer" rather than "singer", which would also allow distinguishing /ŋ/, as in "singer"/"siŋer", from /ŋg/, as in "finger"/"fiŋger",
A. Esh ('Ʃ'/'ʃ') be wielded for /ʃ/, as in "Ʃark" rather than "Shark" and "fiʃ" instead of "fish",
R. Hwair ('Ƕ'/'ƕ') or at least "Hw" and "hw" be used for /hw/, as in "ƕine" or "hwine" rather than "whine", which is not to be mixed up with "wine",
K. and the glottal stop, /ʔ/, be written with the glottal stop letter, 'Ɂ'/'ɂ', as in "ɂan ɂapp" and "ɂa nap" rather than "an app" and "a nap".

Spelling, and language broadly, isn't governed by natural laws. We choose how we speak, and each of us can contribute to steering the evolution of speech. I have given good reasons, I believe, for the changes I hope to see in the future. And every change must start somewhere, so I chose to begin by using Ðat and Þorn. I'm open to others' suggestions to make our speech better.

For instance, in his video The Iodine Myth, NileRed points out that 1. a liquid turning gaseous below the boiling point, 2. a liquid becoming a gas at the boiling point, 3. a solid becoming a gas below the boiling point, and 4. a solid turning gaseous at the boiling point are four different processes, but that we have only three different names for them: "boiling", which means only (2.); "sublimation", which is sometimes used for (4.) alone but othertimes for both (3.) and (4.); and "evaporation", which some wield to mean only (1.) and others to refer to (1.) and (3.). He proposes that "sublimation" be wielded for only (4.) and "evaporation" for (1.) alone, and he came up with a new word, "nilation", for (3.). He rightly says that "if we all started to use a new term, whether it's "Nilation" or something else, we could probably eventually change things". Since I agree with his arguments, I've adopted his term "nilation".

I believe we shouldn't just accept shortcomings of the language we speak. Instead, we should change the speech for the better, especially in expressiveness (which we can also call "outþrutcholness"), precision, beauty, and efficiency. That isn't silly. However, it's true that changing the language takes a bit of effort, as some of you (Markus Hanke, exchemist, studiot) have implied. That's why I've gone back to wielding "th" for now. In fact, when I first wrote in currently non-standard letters, I found both writing and reading my own texts a bit hard. However, I was surprised by how quickly I got used to it. Now, I can easily read and write a sentence which contains all the letters I mentioned in (F.) to (K.) above in addition to the 26 letters of the currently standard English alphabet: "Ƕat streŋgþ ðat ʃrewd brown fox has to jump quickly ɂover ðe lazy dog!"
I've written many comments on science videos on YouTube using That and Thorn and was surprised by how little those who answered me seemed to pay attention to my as-of-yet non-standard use. All those whom I currently remember to have commented on my use of Thorn and That liked it. So it seems one can get used to better spelling quickly.

There's another issue, which Markus Hanke pointed out: That's indeed true of current English keyboards. I hope future English keyboards will have the extra letters I've mentioned, in particular Thorn and That. Moreover, you can easily switch to the Icelandic keyboard to get That and Thorn. If you type with an Android phone, you can keep the English keyboard and get Thorn by pushing and holding the T key, That by pressing and holding the D key, and Eng by pushing and holding the N key.

If you write in Microsoft Word, where I write many of my posts before pasting them hither, you can keep the English keyboard and use codes to enter any Unicode character: Type the hexadecimal code of the desired character's Unicode code point ("de" for 'Þ', "fe" for 'þ', "d0" for 'Ð', and "f0" for 'ð'), then push and hold the ALT key and press the X key. This turns the hex code into the corresponding Unicode character. If an Arabic numeral or a capital or small letter from 'A' to 'F' comes immediately before, you have to either have selected the hex code or put a space before it when pushing ALT+X. It might sound complicated, but when you've gotten used to it, as I have, you can write very long texts very quickly with any Unicode characters you want. I've memorized over a dozen hex codes.

Anyway, considering that you still find wielding That and Thorn cumbersome and that I'd like to read your opinions about Hossenfelder's and my take on entropy, I'll continue my discussion without using these two beautiful letters for now, though I may slip a tiny That or Thorn in here and there, so be careful not to be pricked by one of the latter 😉.
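If you want to double-check those hex codes yourself, here's a tiny Python sketch (my own, purely for illustration) that prints each letter from this discussion together with its Unicode code point:

# Print each letter with its hexadecimal Unicode code point.
for ch in "ÞþÐðŊŋƩʃǶƕɁɂ":
    print(ch, hex(ord(ch)))
# Þ 0xde, þ 0xfe, Ð 0xd0, ð 0xf0, Ŋ 0x14a, ŋ 0x14b,
# Ʃ 0x1a9, ʃ 0x283, Ƕ 0x1f6, ƕ 0x195, Ɂ 0x241, ɂ 0x242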
3. Hello guys,

It's I again, Tristan Laguz. In my þread Will entropy be low much of the time?, I talk about a point boþ Sabine Hossenfelder and I have made: ðat ðe entropy of a system isn't defined absolutely, but raðer wið regard to a choice of which subsets of state space count as macrostates. In ðe later part of ðe topic, I use ðe English (and Norse) letters Ðat ('Ð'/'ð') and Þorn ('Þ'/'þ') because of ðeir aesþetic appeal and because using ðem is þrice as precise and twice as fast as using "th". Oddly, swansont took issue wið my use of ðese letters. First, he said I should use English in ðe forum, to which I replied ðat I did, in fact, write in English. Moreover, I had already justified my use of Þorn and Ðat. Swansont didn't engage wið my arguments. Neiðer did he nicely ask me to use "th" instead of 'Þ' and 'Ð' because it's currently more widespread even ðough it's less precise and less efficient. Instead, he simply locked my topic, accusing me of making bad-faiþ arguments. Making such a false accusation and creating such a lie about me is unacceptable, ðough not overly surprising given swansont's self-description above his profile picture 😉. Since my arguments in my þread are made in good faiþ and, I believe, valid and sound on top of ðat, I herewið open my topic anew to talk wið you about wheðer my aforesaid belief (ðat my arguments be valid and sound) be true.

❕ Before you read on, please mark ðat I use "partition of S" in ðe set-þeoretic sense, ðat is, to mean a set of subsets of S which are non-empty, pairwise disjoint, and togeðer cover S. Ðus, "partition of S" refers to a subset of ðe power set of S which fulfills certain conditions. For instance, {{1, 2, 3}, {4, 5}, {6}} is called "a partition of {1, 2, 3, 4, 5, 6}". Choosing which subsets of a state space count as macrostates amounts to choosing a partition of ðe state space.

I add someþing in ðe beginning which I wanted to write ðe last time but forgot:

Sethoflagos wrote: You're right. IMHO, Hossenfelder is wrong to claim ðat ðe system stay in ðe same microstate.

Now follows ðe last part of ðe discussion in ðe closed topic and a part shortly before it:

Ðe letter Ðat (uppercase: 'Ð', lowercase: 'ð') stands for ðe 'th'-sound in "that", "then", "there" aso., whereas ðe letter Þorn (big: 'Þ', small: 'þ') represents ðe 'th'-sound in "thorn", "think", "thank" asf. Wielding 'ð' and 'þ' is þrice as precise as using 'th', since it distinguishes ðe /ð/ sound from ðe /þ/ sound and furðermore avoids mixing ðem up wið a /t/ sound followed by a /h/ sound. It is also twice as efficient, because it uses only half as many letters. Ðis makes it six times as good (i.e. five times better).

I wrote: "Ðe entropy of a system [...] has to be defined wið regard to a partition (mark ðat I use ðe linked-to set-þeoretic definition of "partition") of [...] ðe set of all possible microstates of ðe system."

Nope, I really do mean macrostates wið respect to P. A macrostate wið respect to P is simply an element of P, where P is a partition (again, please mark ðat I use ðe set-þeoretic definition of "partition") of state space. Ðe elements of P are non-empty, pairwise disjoint subsets of state space which togeðer cover state space. So a macrostate is a subset of state space. By contrast, a microstate is an element of state space.

Wiðout reference to an arbitrarily chosen partition of state space into macrostates? 🤨

Observable by whom?
Ðat's ðe key point which Hossenfelder and I have made, contrary to:

We've said ðat currently, ðe entropy of ðe Universe wið respect to a certain partition, call it "P_human", of state space is low. Accordingly, ðe Universe is currently teeming wið living beings (at least on Earþ) who divvy state space up into ðe elements of P_human. In accordance wið ðe Second Law, entropy wið regard to P_human will very likely rise until living beings, such as humans, for whom ðe macrostates wi.re.to P_human matter can no longer live. It will ðen very probably take a very, very long time until entropy wi.re.to P_human is low again. However, when entropy wi.re.to P_human is high, entropy will be low wi.re.to some oðer partition, e.g. P_Chubachaba, so living beings (like Chubachabas 😉) who split state space up into ðe members of P_Chubachaba will be able to live.

Let's say we have a very simple system wið just six states. Number ðem 1 to 6. Ðe state space of ðe system is ðe set {1, 2, 3, 4, 5, 6}. If ðe system is currently in state 4, what is its current entropy? Ðe question doesn't make sense. First, we have to break state space down into macrostates, e.g. into {1}, {2, 3, 5}, and {4, 6}. By choosing {{1}, {2, 3, 5}, {4, 6}} as our partition of {1, 2, 3, 4, 5, 6}, we've just categorized ðe microstates according to primality: {1} is ðe macrostate holding all microstates which are neiðer prime nor composite, {2, 3, 5} is ðe macrostate holding ðe prime microstates, and {4, 6} is ðe macrostate containing ðe composite ones. Now, we can ask: What is ðe current entropy of ðe system wið respect to ðe partition {{1}, {2, 3, 5}, {4, 6}}? Ðe answer is

\(-\sum_{z \,\in\, \text{current macrostate}} P(\{z\} \mid \text{current macrostate}) \log_2 P(\{z\} \mid \text{current macrostate}) = -\sum_{z \in \{4, 6\}} P(\{z\} \mid \{4, 6\}) \log_2 P(\{z\} \mid \{4, 6\}) = -\log_2(1/2) = 1\),

given ðat all microstates be equally likely. Likewise, if ðe system was in ðe microstate 3 a second ago, its entropy wið regard to {{1}, {2, 3, 5}, {4, 6}} a second ago was \(-\log_2(1/3) \approx 1.58\), and if it was in microstate 1 two seconds ago, its entropy wið respect to {{1}, {2, 3, 5}, {4, 6}} back ðen was \(-\log_2(1/1) = 0\).

But who says we have to choose {{1}, {2, 3, 5}, {4, 6}} as our partition? Nobody. It's just ðat (in ðis example), we happen to be living beings who care about primeness. However, living beings who care about being a power of 2 would divvy ðe state space up into {2, 4} and {1, 3, 5, 6}, ðat is, choose {{2, 4}, {1, 3, 5, 6}} as ðe partition wið regard to which ðey define entropy. Ðe key point Hossenfelder and I have made is ðat at each time t, ðe system is in a microstate z, and for each z, ðere's a partition, P, of state space such ðat for ðe member (a macrostate), M, of P which contains z, ðe likelihood of {z} given M is high, so if ðe system has microstate z, its entropy wið regard to P is low.

Not only is ðe speech I use English, but ðe letters I'm writing are English letters.

Wið regard to which partition of state space into macrostates?

I agree; in a deterministic universe, each probability is eiðer 0 or 1, so ðe entropy is always 0 ... ðough ðis is in accordance wið ðe 2nd Law, of course.

Ðat's correct and you're right. Ðe entropy of a macrostate is defined absolutely. However, when we ask about ðe entropy of ðe system, we have to ask: Which subsets of state space count as macrostates?
4. Observable by whom?

Ðat's ðe key point which Hossenfelder and I have made, contrary to:

We've said ðat currently, ðe entropy of ðe Universe wið respect to a certain partition, call it "P_human", of state space is low. Accordingly, ðe Universe is currently teeming wið living beings (at least on Earþ) who divvy state space up into ðe elements of P_human. In accordance wið ðe Second Law, entropy wið regard to P_human will very likely rise until living beings, such as humans, for whom ðe macrostates wi.re.to P_human matter can no longer live. It will ðen very probably take a very, very long time until entropy wi.re.to P_human is low again. However, when entropy wi.re.to P_human is high, entropy will be low wi.re.to some oðer partition, e.g. P_Chubachaba, so living beings (like Chubachabas 😉) who split state space up into ðe members of P_Chubachaba will be able to live.

Let's say we have a very simple system wið just six states. Number ðem 1 to 6. Ðe state space of ðe system is ðe set {1, 2, 3, 4, 5, 6}. If ðe system is currently in state 4, what is its current entropy? Ðe question doesn't make sense. First, we have to break state space down into macrostates, e.g. into {1}, {2, 3, 5}, and {4, 6}. By choosing {{1}, {2, 3, 5}, {4, 6}} as our partition of {1, 2, 3, 4, 5, 6}, we've just categorized ðe microstates according to primality: {1} is ðe macrostate holding all microstates which are neiðer prime nor composite, {2, 3, 5} is ðe macrostate holding ðe prime microstates, and {4, 6} is ðe macrostate containing ðe composite ones. Now, we can ask: What is ðe current entropy of ðe system wið respect to ðe partition {{1}, {2, 3, 5}, {4, 6}}? Ðe answer is

\(-\sum_{z \,\in\, \text{current macrostate}} P(\{z\} \mid \text{current macrostate}) \log_2 P(\{z\} \mid \text{current macrostate}) = -\sum_{z \in \{4, 6\}} P(\{z\} \mid \{4, 6\}) \log_2 P(\{z\} \mid \{4, 6\}) = -\log_2(1/2) = 1\),

given ðat all microstates be equally likely. Likewise, if ðe system was in ðe microstate 3 a second ago, its entropy wið regard to {{1}, {2, 3, 5}, {4, 6}} a second ago was \(-\log_2(1/3) \approx 1.58\), and if it was in microstate 1 two seconds ago, its entropy wið respect to {{1}, {2, 3, 5}, {4, 6}} back ðen was \(-\log_2(1/1) = 0\).

But who says we have to choose {{1}, {2, 3, 5}, {4, 6}} as our partition? Nobody. It's just ðat (in ðis example), we happen to be living beings who care about primeness. However, living beings who care about being a power of 2 would divvy ðe state space up into {2, 4} and {1, 3, 5, 6}, ðat is, choose {{2, 4}, {1, 3, 5, 6}} as ðe partition wið regard to which ðey define entropy. Ðe key point Hossenfelder and I have made is ðat at each time t, ðe system is in a microstate z, and for each z, ðere's a partition, P, of state space such ðat for ðe member (a macrostate), M, of P which contains z, ðe likelihood of {z} given M is high, so if ðe system has microstate z, its entropy wið regard to P is low.

Not only is ðe speech I use English, but ðe letters I'm writing are English letters.

Wið regard to which partition of state space into macrostates?

I agree; in a deterministic universe, each probability is eiðer 0 or 1, so ðe entropy is always 0 ... ðough ðis is in accordance wið ðe 2nd Law, of course.

Ðat's correct and you're right. Ðe entropy of a macrostate is defined absolutely. However, when we ask about ðe entropy of ðe system, we have to ask: Which subsets of state space count as macrostates?
5. Ðe letter Ðat (uppercase: 'Ð', lowercase: 'ð') stands for ðe 'th'-sound in "that", "then", "there" aso., whereas ðe letter Þorn (big: 'Þ', small: 'þ') represents ðe 'th'-sound in "thorn", "think", "thank" asf. Wielding 'ð' and 'þ' is þrice as precise as using 'th', since it distinguishes ðe /ð/ sound from ðe /þ/ sound and furðermore avoids mixing ðem up wið a /t/ sound followed by a /h/ sound. It is also twice as efficient, because it uses only half as many letters. Ðis makes it six times as good (i.e. five times better).

I wrote: "Ðe entropy of a system [...] has to be defined wið regard to a partition (mark ðat I use ðe linked-to set-þeoretic definition of "partition") of [...] ðe set of all possible microstates of ðe system."

Nope, I really do mean macrostates wið respect to P. A macrostate wið respect to P is simply an element of P, where P is a partition (again, please mark ðat I use ðe set-þeoretic definition of "partition") of state space. Ðe elements of P are non-empty, pairwise disjoint subsets of state space which togeðer cover state space. So a macrostate is a subset of state space. By contrast, a microstate is an element of state space.

Wiðout reference to an arbitrarily chosen partition of state space into macrostates? 🤨
6. I've finally got ðe answer to my titular question, and þankfully, it's "Yes, in a way": In her video I don't believe the 2nd law of thermodynamics from about five monþs ago, Sabine Hossenfelder made ðe same key point 💡 I made more ðan þree-and-a-half years ago in my topic Will entropy be low much of the time? and in a comment on PBS Space Time's video The Misunderstood Nature of Entropy.

Ðe point in question is ðis: Ðe entropy of a system can't be defined absolutely. It has to be defined wið regard to a partition (mark ðat I use ðe linked-to set-þeoretic definition of "partition") of ðe state space of ðe system, ðat is, of ðe set of all possible microstates of ðe system. Ðe elements of a partition P of ðe state space are non-empty, pairwise disjoint subsets of state space whose union is state space. Ðey are called "macrostates wið respect to P".

As I understand it (please set me right if I be wrong), ðe correct version of ðe Second Law of Þermodynamics says ðat for each partition P of ðe state space, ðe entropy wið respect to P is high more often ðan it is low. (So while ðe 2nd Law says entropy will likely not get lower going into ðe future, it also says entropy will likely not get lower going into ðe past, as I understand Dürr and Teufel explaining on page 90 of ðeir book Bohmian Mechanics: The Physics and Mathematics of Quantum Theory.) However, at each time point t, ðere is a partition P_t wið regard to which ðe entropy is low at t. So at each time t, organisms can live for whom ðe elements of P_t are ðe relevant macrostates. Ðus, it seems ðat ðe spectre of ðe heat deaþ has been dispelled 😃 🎆. What are your þoughts on ðis matter? 🤔
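The "at each time there's a low-entropy partition" step can be seen constructively. Here's a tiny Python sketch (my own construction, not something from Hossenfelder's video or from Dürr and Teufel): given the current microstate z, the partition {{z}, rest} makes the current macrostate a singleton, whose entropy is \(-1 \cdot \log_2 1 = 0\):

def low_entropy_partition(z, state_space):
    # Partition the state space into {z} and everything else; with the system
    # in microstate z, the current macrostate {z} has entropy -1*log2(1) = 0.
    rest = set(state_space) - {z}
    return [{z}] + ([rest] if rest else [])

print(low_entropy_partition(4, {1, 2, 3, 4, 5, 6}))  # [{4}, {1, 2, 3, 5, 6}]

Whether such an ad-hoc partition could ever be the physically relevant one for some organism is, of course, the open question of the thread.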
7. Oh, good to know, thanks! That explains a lot; of course a defunct site can be awaited to give blatantly false info: that the rabies virus (an RNA virus) be one of the Poxviridae (which are DNA viruses), and that the rabies virus be rather hardy, when in truth it's quite fragile (thankfully 😅).
8. Hi everyone,

On the page http://www.askabiologist.org.uk/answers/viewtopic.php?id=6389 of the website http://www.askabiologist.org.uk, someone who claims to be Christopher LaRock asserts that the rabies virus be a poxvirus; he writes: "Poxviruses such as the one that causes rabies". Now, Poxviridae are DNA viruses while the rabies virus is an RNA virus, and the two viruses belong to two different realms (Varidnaviria and Riboviria, respectively), so they are about as far apart from each other as viruses can be, if I understand it right. Therefore, the assertion of the answerer on http://www.askabiologist.org.uk who claims to be Christopher LaRock is fully false. This suggests to me that http://www.askabiologist.org.uk isn't very reliable. Have I deemed correctly?

The real Christopher N. LaRock is an Assistant Professor of Medicine at Emory University, so I would be very surprised if he made such a blatantly false claim as that one of the poxviruses cause rabies. Does http://www.askabiologist.org.uk check that its answerers really are the academics they claim to be?
9. Thank you all for your interesting and information-giving answers!

@zapatos Of course no planning is involved, that's correct. It's just easier to say the short and onefold/simple "Feature A evolved so as to do function F" than the cumbersome "By random chance, some individuals had A and others didn't, and since A does the bootful/advantageous function F, individuals with A-giving genes spawned more successfully/spowfully than individuals without them, and so A-giving genes became commoner in the population over time".

I'm asking why we didn't evolve to give birth earlier. Of course it could be accidental, but since the fit of the baby's head size to the mother's birth-canal size is a tight one, there's a good chance that there's a reason for this. Is it that smarter brains simply don't go through stages that correspond to less smart ones? That is, might it perhaps be the case that the human brain never is at a chimpanzee brain's level: when the human brain is as functional as the chimpanzee's, it's already smarter, and when it's as smart as the chimpanzee's, it's not yet functional enough?
10. That ambiguity in the meaning of words can indeed be very hindering. A much better word IMHO is German/Theech "Zerlegung", which unmistakably means that which is meant by English "partitioning" (should we use that word from now on for the mathematical concept?) or "sectioning" or something like that.

This brings us to the minor side-issue of speech: I don't mean to mock or anything; I just have a side-hobby of bringing back English's true potential, and that includes brooking/using truly English words, for byspel "byspel", which is the proper English word for "example" and cognate/orbeteed to German "Beispiel". I brook this proper English on purpose/ettling where it's not the object but only the tool of talking; after all, that's the ord/point of speech. But again, that's just a hobby of mine.

This is a very intrysting problem indeed 🤔. I'd say that it evolves as follows: If the pressure/thrutch is the same on both sides of the resting piston, nothing will happen. Otherwise,

1. the piston will start to go from the high-thrutch side to the low-thrutch side.
2. As it goes in that direction, internal energy of the high-pressure gas is transferred to kinetic energy of both gases and internal energy of the low-pressure gas.
3. When both thrutches become equal, the piston goes on shrithing/moving thanks to the inertia of the gases.
4. Now, the kinetic energy of the gases and the internal energy of the former high-thrutch gas (now the low-pressure gas) are transferred to internal energy of the former low-thrutch gas (now the high-pressure one), slowing down the piston, until
5. the piston is at rest again.

Now, the whole ongoings repeat in the other righting/direction; see the sketch below. Entropy doesn't change/wrixle during the whole process, so the Second Law of Thermodynamics is of no brook/use. But of course, we have another Second Law, namely that of Newton, and this one helps us further here. Actually, I believe that we shouldn't find this too surprising 🤔, for there are other systems which wrixle/change although their entropy stays the same, e.g. a frictionless pendulum swinging. Mark that the system doesn't have to be periodic, I think; for instance, two bodies shrithing/moving at non-zero relative speed forever in space (forget about gravitational waves) make up such a system.
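Since the step-by-step picture above is easy to check numerically, here's a rough Python sketch (entirely my own toy model, not anything from the thread; all numbers are arbitrary, it lumps all inertia into the piston, and it treats both gas columns as reversibly adiabatic, whereas the steps above also give the gases themselves kinetic energy):

# Frictionless piston of mass m between two adiabatic ideal-gas columns in a
# cylinder of unit cross-section; P * V**gamma stays constant in each column.
gamma = 5.0 / 3.0              # monatomic ideal gas
m = 1.0                        # piston mass (arbitrary units)
L = 2.0                        # total cylinder length
x0, PL0, PR0 = 0.7, 2.0, 1.0   # initial piston position and pressures
x, v = x0, 0.0
dt = 1e-4
for step in range(100000):
    PL = PL0 * (x0 / x) ** gamma              # left-column pressure
    PR = PR0 * ((L - x0) / (L - x)) ** gamma  # right-column pressure
    v += (PL - PR) / m * dt                   # Newton's 2nd law, unit area
    x += v * dt
    if step % 20000 == 0:
        print(f"t={step * dt:.1f}  x={x:.3f}  v={v:+.3f}")
# The piston overshoots the pressure-equality point and oscillates forever:
# no friction, no entropy change, so Newton's Second Law, not the Second Law
# of Thermodynamics, governs the motion.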
11. Not at all, you quite misunderstood; I would never dare to lay down the law for anything and never will; I only repeat 8th-grade mathematical definitions and knowledge. In truth, this statement of yours is simply incorrect.

Could you please enlighten me as to the sinn/sense of what you've written there? Set theory does indeed not need every set to contain only sets (though ZFC does actually rule out ur-elements for convenience), but a *partition* does indeed only contain sets as elements, namely disjoint subsets of the ground-set.

I brook/use the words "element" and "member" in exactly one and the same meaning.

Oh, I thank you for the info, but my friend's 13-year-old cousin already told me this morning. 😉

{1} is not a partition of S; it's a member/element e.g. of {{1}, {2, 3, 4}} and of {{1}, {2, 3}, {4}}, which in turn are partitions of S; accordingly, it's an underset of S.

It's funny that you borrowed my lines which I was about to write to you 🤣:
12. I think you misunderstand the meaning of disjoint in set theory. This should be cleared up prior to any other consideration.

We should indeed clear up any misunderstandings before we go on, so we'll do that right now: For any sets S, T, "S is disjoint with T" means that S and T share no elements in common. For any sets P, S, "P is a partition of S" means that all members of P are subsets of S, any two members T, B of P are disjoint, th.i. share no elements of S in common, and the union of all members of P is S.

Now to my above quote, which I'll sweetle/explain with the help of a byspel/example: {1, 2, 3, 4} is our groundset. All three of the following are partitions of {1, 2, 3, 4}:

{{1, 2}, {3, 4}}
{{1, 3}, {2, 4}}
{{1}, {2}, {3, 4}}

The first and the second are disjoint since they have no elements in common, but the first and the third one are not, for they have the member {3, 4} in common.
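These definitions are mechanical enough to put into code. Here's a small Python sketch (my own; the helper name is_partition is made up) that checks the partition property and reproduces the byspels above:

from itertools import combinations

def is_partition(P, S):
    # P is a partition of S iff every member is a non-empty subset of S,
    # any two members are disjoint (share no elements), and their union is S.
    members = [set(M) for M in P]
    return (all(M and M <= set(S) for M in members)
            and all(not (A & B) for A, B in combinations(members, 2))
            and set().union(set(), *members) == set(S))

S = {1, 2, 3, 4}
print(is_partition([{1, 2}, {3, 4}], S))     # True
print(is_partition([{1, 3}, {2, 4}], S))     # True
print(is_partition([{1}, {2}, {3, 4}], S))   # True
print(is_partition([{1, 2}, {2, 3, 4}], S))  # False: {1, 2} and {2, 3, 4} overlap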
13. I was hoping that you'd actually read and try to understand what I've written, at least the last post of mine, where I sweetle very clearly that I'm not talking about partitioning physical 3D space, but rather PHASE-ROOM. All I'm saying about disjointness is that macrostates are eachotherly disjoint sets of microstates; can we agree/forewyrd on that? Also, what exactly do you have in mind when you talk of disjointness?
14. @studiot, could you please at least try to understand what I'm saying 🙂 before behaving as if you know everything about the matter? It's just like with my entropy thread, where you appear to know the matter at hand very well and seek to graciously sweetle/explain it to me, but in fact understand very little of what I'm even talking about.
15. Of course the human baby's brain needs to have some basic ability at birth, but for that basic functionality, such a huge baby brain is total overkill, isn't it? A chimpanzee baby's brain is also up to the job, so why don't human mothers give birth once their babies have chimpanzee-level intelligence, and then the babies' brains grow to grown-up human proportions as the child grows up? Indeed, a reptilian brain is enough to perform all the bodily functions, so what would be wrong with giving birth while the human baby still has reptilian-level intelligence? All the higher functions, e.g. bonding, can come later on, can't they?