Everything posted by Halc

  1. You're asking two different questions: What is the most likely value of s when all of the list has been selected at least once? At which value of s will it be more likely than not that all of the list has been selected? The answer to the first question seems necessarily lower than the answer to the second. Do you see why? I don't see an obvious way to derive either value right away.
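A quick Monte Carlo sketch of both quantities (my own illustration, assuming a list of N = 20 items drawn uniformly at random with replacement): the histogram of the completion count s gives its most likely value (question 1), and the running cumulative total gives the smallest s at which full coverage is more likely than not (question 2). The mode does come out below the median, since the distribution has a long right tail.

    // Monte Carlo sketch (not from the post): N = 20 is an assumed list size.
    // s counts uniform random selections, with replacement, until every item
    // has been picked at least once.
    #include <cstdio>
    #include <random>
    #include <vector>
    #include <algorithm>

    int main() {
        const int N = 20, trials = 200000;
        std::mt19937 rng(12345);
        std::uniform_int_distribution<int> pick(0, N - 1);

        std::vector<long> count_at_s;                 // count_at_s[s] = trials finishing exactly at s
        for (int t = 0; t < trials; ++t) {
            std::vector<bool> seen(N, false);
            int covered = 0, s = 0;
            while (covered < N) {
                ++s;
                int i = pick(rng);
                if (!seen[i]) { seen[i] = true; ++covered; }
            }
            if ((int)count_at_s.size() <= s) count_at_s.resize(s + 1, 0);
            ++count_at_s[s];
        }

        // Question 1: the most likely completion count (the mode of s).
        int mode = (int)(std::max_element(count_at_s.begin(), count_at_s.end()) - count_at_s.begin());

        // Question 2: the smallest s at which completion is more likely than not (the median).
        long cum = 0; int median = 0;
        for (int s = 0; s < (int)count_at_s.size(); ++s) {
            cum += count_at_s[s];
            if (2 * cum >= trials) { median = s; break; }
        }
        std::printf("N=%d: mode of s ~ %d, 50%% coverage reached by s ~ %d\n", N, mode, median);
    }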
  2. It's a top, simple as that. Not clockwork at all. We had these as kids. They went under the brand name 'whizzers' or some such. This one is decorated as a bug. The wheels underneath drive an internal vertical-axis flywheel. You'd get quicker responses to your questions if the title actually gave any clue as to what the topic is about. You seem to give the same title to all your topics, however unrelated.
  3. You got it all wrong, Bufofrog. What kind of fool is going to believe that? Everybody knows it was last Tuesday, not Wednesday.
  4. People can produce enough thrust to fly here on Earth, but it takes an athlete. I don't know the current record, but somebody crossed the English Channel with human-powered flight. So sure, if you can do it at 1g, you can do it at lower g with less effort. I think your mistake is presuming it is just a matter of strapping on wings and arm-flapping. But our arms are situated nowhere near our center of gravity, so instead of flying horizontally like the pictures depict, you'd hang vertically if all your weight were borne by your arms. I suppose it could work, but the aerodynamics would be horrible. All gliding done by humans uses a different setup designed to support you everywhere, hence the funny flying-squirrel suits that the base jumpers use. I suppose one could generate thrust with a more 'flappable' version of one of those. That (gasoline, electric, etc.) would be powered flight. We also have that here on Earth, so again, doing it at low g would be easier. You can also strap on a rocket, something else a human can do here on Earth. What if the pressure were much higher? How much easier/harder would it be to fly at, say, 1g or less with thicker air? It has more mass to support you, but more drag as well. If it's dense enough, the buoyancy alone would be enough.
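Putting a rough number on the buoyancy remark (my own back-of-envelope figures): a body floats only when the surrounding gas is at least as dense as the body, so

\[
\rho_{\text{air}} \gtrsim \rho_{\text{body}} \approx 1000~\text{kg/m}^3
\qquad\text{versus}\qquad
\rho_{\text{sea level}} \approx 1.2~\text{kg/m}^3 ,
\]

i.e. the atmosphere would need to be on the order of 800 times denser than Earth's at sea level before buoyancy alone holds you up.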
  5. The velocity of the universe would be the change in its position relative to something that isn't the universe. It really makes no sense for a universe to have a velocity. Velocity per second per megaparsec would be something like v/t/d (velocity, time, distance), which is (d/t)/t/d, which is 1/t², something with different units than velocity at least. As for special relativity, that only applies to flat Minkowskian spacetime with zero energy and mass anywhere. For this reason, it is simply inapplicable to our universe except locally. Your equation adds values of different units, which makes it meaningless. You can't add meters to Pascals, and (in your case) you can't add H (units 1/t) to V/γ (units d/t); the unit bookkeeping is spelled out below. Another criticism: γ is the Lorentz factor of what, exactly? It should be the factor for some speed, but "the Lorentz factor which is used in incorporating special relativity into the equation and using the laws of special relativity" is just word salad.
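Spelling that out compactly (same quantities as above, my notation):

\[
\left[\frac{v}{t\,d}\right] = \frac{d/t}{t\,d} = \frac{1}{t^{2}},
\qquad
[H] = \frac{d/t}{d} = \frac{1}{t},
\qquad
\left[\frac{V}{\gamma}\right] = \frac{d}{t},
\]

so H and V/γ carry different units and their sum has no physical meaning.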
  6. Aspect dualism is effectively a violation of known physics. Such a violation would be required for any agency that is not tethered to physical causality. A non-deterministic interpretation of quantum mechanics does not open the door for this sort of thing, since willed agency cannot emerge from the probabilistic description of quantum theory. One that claims such a relationship would need to demonstrate some structure in the privileged arrangement of matter (probably just humans) where a signal is generated purposefully (not randomly) without cause. There's no structure in a person that is seemingly designed to do anything like that. All structures at any scale seem to be evolved for repeatable (deterministic) operation, just as transistors are.

I'm sure. Humans are excellent at rationalizing what they want to be true. Most are not so great at actual rational thought. Hence all the 'proofs' that not only does God exist, but proofs of exactly my version of God. All those proofs (but one at most) must be wrong, and even that statement doesn't pass rational analysis.

This doesn't seem to make sense. It seems to suggest that MWI supports the existence of worlds where MWI is wrong, since MWI does not hold to PCD.

None of those seem to be examples of counterfactuals. They're all states measured by Bob and/or Alice. The (one) universe as a whole is deterministic under MWI. I don't think the concept of a given world being deterministic is accurate there. It cannot be, since any given world is the result of random events. Yes, Alice & Bob hook up in some worlds and not in others. In other worlds, one or both don't even exist. But as a whole, all those states exist in superposition. I don't think that qualifies as a counterfactual of any kind. Saying the cat is dead is a counterfactual. Saying the cat is in a superposition of dead and alive is not.

Then the robot should predict the indecision and play anything (the game does have a time limit, you know), instead of losing by default. You've pretty much reproduced the halting-problem argument (Turing's result, not Gödel's).

It cannot go down either a single path or a finite distance. Events (such as my choice of footwear today) depend on many causes (the weather being but one), and those causes themselves have to have come about due to other states even prior. That 'chain' spreads both in width and depth all the way back to the big bang. This is true under determinism or not. The difference is that under nondeterminism, some of those causal branches stop. A choice I make might be partially a function of a beta decay somewhere. That particular piece of state (among the myriad of states that contributed to my choice) was uncaused, a true random occurrence under nondeterministic interpretations.

The robots playing RPS is not a single event. I suppose their eventual choice is, but there's a lot of state that potentially contributes to that eventual choice (or lack of it). If the robots are identically constructed, then it would be like you trying to win RPS against your own reflection.

I don't understand how any of that aids in the question of 'what definition of determinism is in play'. The definition is something along the lines of the complete lack of randomness: that identical closed systems in a given state will evolve the exact same way every time. Was there another definition that is fundamentally different from that?

Only some systems exhibit this. You drop successive grains of sand from a fixed point, and which way a given grain goes is fairly unpredictable, but the eventual conical hill of sand is very predictable (a toy simulation of this is sketched below). Most systems are chaotic, under which small perturbations result in macroscopic differences. The weather and the formation of galaxies from a nearly uniform early state are examples of this. Take the state of Earth just after the Theia event. From that state, life is unlikely to form, and if abiogenesis does occur, it will most improbably evolve into anything that would be recognized as a mammal.
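A toy simulation of the sand-grain point (mine, with made-up numbers): give each falling grain a string of random left/right nudges. No single grain's landing spot is predictable, yet the shape of the resulting pile is essentially identical on every run.

    // Toy illustration (my own): each grain takes 50 random left/right nudges
    // as it falls.  Any single grain's landing spot is unpredictable, but the
    // histogram of all landing spots (the "hill") looks the same every run.
    #include <cstdio>
    #include <random>

    int main() {
        const int grains = 100000, steps = 50, bins = 21;
        long hist[bins] = {0};
        std::mt19937 rng(std::random_device{}());
        std::bernoulli_distribution coin(0.5);

        for (int g = 0; g < grains; ++g) {
            int x = 0;
            for (int s = 0; s < steps; ++s) x += coin(rng) ? 1 : -1;   // random sideways nudges
            int bin = (x / 5) + bins / 2;                              // coarse landing position
            if (bin >= 0 && bin < bins) ++hist[bin];
        }
        for (int b = 0; b < bins; ++b)
            std::printf("%4d: %ld\n", (b - bins / 2) * 5, hist[b]);    // roughly the same bell shape each run
    }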
  7. It indeed seems not the case. The Stanford page on libertarianism doesn't even mention determinism at all. I stand corrected on that point. There's the compatibilist view I suppose, but it requires a sort of soft determinism. Now I really wonder why 'free will' is a desirable thing.
  8. I do believe the libertarians assert both free will and determinism, that the one is possible despite the other. Your stance, at least what I can make of it, doesn't look like that. I for one would, I suppose, qualify as a libertarian, but only because I define free will in such a way that it is 1) completely compatible with determinism, and 2) actually something that is desirable. A more typical definition of free will seems like something I'd not wish to have, and it is indeed often incompatible with determinism.

Build two machines that play rock paper scissors. They are constructed so that each uses a completely deterministic algorithm, and each has full access to the state and programming of the other. If the behavior can be predicted because it is deterministic, then each robot can predict the move of the other and always win. Since the machines can't both win, predictability cannot be had despite the deterministic nature of the situation (a toy version of this regress is sketched below). Gödel did a simpler proof, but that one is a bit more on topic.

I don't understand these comments or their relevancy to my comments to which they replied. Perhaps you don't grok what I said. I admit the lack of elegance in my conveying it. I think any non-local interpretation can be (but not necessarily is) consistent with counterfactual definiteness. Does determinism depend on counterfactuals? MWI for instance is considered fully deterministic but denies counterfactuals, but I'm not sure how a wave function can be properly expressed in the absence of counterfactuals. I lack the expertise to resolve that. You also mention a block universe, but I don't think a block universe necessarily implies determinism, so I don't see the relevance of one interpretation of time or another.
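A tiny sketch of why mutual prediction can't settle (my own, not from the thread): if robot A predicts B's move and plays the counter, and B anticipates that counter and counters it in turn, the 'best move' just cycles forever with no fixed point.

    // Sketch (my own) of the regress: start from any assumed move by robot B,
    // and alternately apply "play the move that beats the opponent's predicted
    // move".  The recommendation cycles rock -> paper -> scissors -> rock ...
    // and never converges, so neither deterministic robot can settle on a
    // winning prediction of the other.
    #include <cstdio>

    enum Move { Rock, Paper, Scissors };
    const char* moveName[] = {"rock", "paper", "scissors"};

    Move beats(Move m) { return Move((m + 1) % 3); }   // the move that defeats m

    int main() {
        Move predicted = Rock;                 // seed: assume B will play rock
        for (int depth = 0; depth < 9; ++depth) {
            std::printf("depth %d: best counter is %s\n", depth, moveName[beats(predicted)]);
            predicted = beats(predicted);      // the opponent anticipates that counter, too
        }
        // No depth ever yields a stable answer: 'beats' has no fixed point.
    }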
  9. QM shows that it isn't predictable, but until the fully deterministic interpretations are falsified, QM does not kill determinism.

If you can go back even a short time, then that prior state must be fully determined by the state shortly prior to that, and so on... Hence if we can go back a little and retain determinism, then we can go back all the way to the beginning. Of course there's no evidence of this short-term determinism. For one, it presumes a meaningful state of a system, which is a counterfactual, and few interpretations of QM support counterfactuals.

Don't confuse determinism with predictability. One can have a nice classical fully determined universe (such as Newton might have envisioned) and it would still not be predictable. It's pretty easy to show that (see the logistic-map sketch below).

That's a good example of an uncaused thing. Bohmian mechanics (which supports counterfactuals) would, I think, assign hidden variables to the neutron system, and a thousand identically set-up systems would all decay after the same duration. MWI (also deterministic) would say that it decays after every possible duration. Copenhagen simply says we cannot know. Most of the others say something on the order of it occurring at some random time, which in some cases is 'God rolling dice'. None of the modern deterministic interpretations were out there during Einstein's time, and he seemed like a determinist type to me, so that's too bad.

There are people (at least one of whom is contributing to this topic) who seem to spin a deterministic universe in a bad light, like it is somehow a thing to be avoided if possible, especially for decision making. I don't understand this aversion. I cannot conceive how a better decision can be made through a non-deterministic mechanism than through a deterministic one. All of evolution has favored structures that generate consistent output from identical inputs, despite leveraging quantum processes in doing so. This shows that determinism is a good thing, even if it doesn't exist in reality.
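To show 'deterministic yet not predictable' concretely (my example, not from the post): the logistic map below is completely deterministic, yet two starting values differing by one part in a billion end up on totally unrelated trajectories within a few dozen steps, so any finite-precision knowledge of the state quickly loses all predictive power.

    // Deterministic but practically unpredictable (illustration of my own):
    // the logistic map x -> 4x(1-x) is fully determined, yet two initial
    // states differing by 1e-9 diverge to completely different trajectories.
    #include <cstdio>

    int main() {
        double a = 0.400000000, b = 0.400000001;       // nearly identical initial states
        for (int step = 1; step <= 60; ++step) {
            a = 4.0 * a * (1.0 - a);
            b = 4.0 * b * (1.0 - b);
            if (step % 10 == 0)
                std::printf("step %2d: a=%.6f  b=%.6f  |diff|=%.6f\n",
                            step, a, b, a > b ? a - b : b - a);
        }
        // By step ~40 the difference is of order 1: the tiny initial
        // perturbation has grown to dominate the state entirely.
    }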
  10. Light from that star was emitted about 8 billion years ago, when the star was about 5.5 billion light years away, both time and proper distance per the cosmic (expanding) coordinate system. That star, if it still exists (unlikely), is a burnt-out husk right now and doesn't shine at all. More likely it blew up and doesn't exist at all, or is a black hole or some such.

Yes, it could not have been traveling for longer than that, and it doesn't need to. Such distances use an expanding coordinate system in which light speed isn't constant. But also, absolutely nothing we see in the sky now was emitted from further than about 6 billion light years away (proper distance at the time of emission; a numerical check is sketched below). Sure, the galaxy/quasar/whatever may be 40 billion light years away now, but we're seeing it where it was long, long ago, which was much closer. The CMB light is the oldest, and that light was emitted from less than one billion light years away. That light isn't from any 'star', but from the formation of the first hydrogen atoms.

You're going to have to unlearn this common misconception if you want to actually understand cosmology. The big bang happened literally everywhere and was never a 'point', and there was no rushing of material from a point into 'empty space', so to speak. Getting your information from peer-reviewed sources is a far better choice. I can find a 'doc' that states just about any nonsense I wish. Your source apparently claims to have measured something outside the observable universe, which is by definition a contradiction.
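A numerical check of that ~6 billion light year figure (my own sketch, assuming a flat ΛCDM model with H0 = 67.7 km/s/Mpc, Ωm = 0.31, ΩΛ = 0.69): the proper distance at emission is the comoving distance divided by (1+z), and it peaks near z ≈ 1.6 at a little under 6 Gly.

    // Numerical sketch (my own, with assumed cosmological parameters): find the
    // maximum proper distance, at the time of emission, of anything on our past
    // light cone.  Proper distance at emission = comoving distance / (1 + z).
    #include <cstdio>
    #include <cmath>

    int main() {
        const double H0 = 67.7, c = 299792.458;        // km/s/Mpc, km/s
        const double Om = 0.31, OL = 0.69;             // flat LCDM (assumption)
        const double D_H = (c / H0) * 3.2616e-3;       // Hubble distance in Gly (1 Mpc = 3.2616e-3 Gly)

        auto E = [&](double z) { return std::sqrt(Om * std::pow(1.0 + z, 3) + OL); };

        double integral = 0.0, best_d = 0.0, best_z = 0.0;
        const double dz = 1e-4;
        for (double z = 0.0; z < 20.0; z += dz) {
            integral += 0.5 * (1.0 / E(z) + 1.0 / E(z + dz)) * dz;    // comoving distance / D_H
            double d_emit = D_H * integral / (1.0 + z + dz);          // proper distance at emission, Gly
            if (d_emit > best_d) { best_d = d_emit; best_z = z + dz; }
        }
        std::printf("maximum emission distance ~ %.2f Gly, at z ~ %.2f\n", best_d, best_z);
    }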
  11. Alternate theories that posit absolute velocity and time and non-isotropy of light speed (such as LET) still do not predict any empirical differences. Yes, the clock rates on Earth would change over the course of the day and year, but none of those effects would 'show' (my bold). If they did, we'd have a method to empirically determine the preferred frame, and one of SR or LET would be falsified. So asking for 'evidence of this' is unreasonable. Lidal is claiming the possibility of 'evidence of this' by incorrectly calculating that the signals would reach the detector at different times in the moving lab frame. It is his calculations that are necessarily wrong, and not his assertion of absolutism, which is merely 'probably wrong' and not necessarily wrong.
  12. If your proposal is one with absolute velocity, then it is a different proposal than SR. SR has relative velocity, not absolute. With SR, all calculations should be computed relative to the chosen frame, and you chose that of the lab in the ship. Under an absolute hypothesis, all calculations must be performed relative to the absolute frame, which has not been identified. How do you know the absolute velocity of anything? It's not like the water is stationary since water moves at very different velocities from place to place and from time to time.
  13. OK, we're talking about the frame of the ship in all this. In that case, v = 0. Or v = the speed of the water going by outside, but water moving nearby has zero effect on how long it takes for light to get between a bunch of stationary emitters and detectors. Nothing is receding except the water, and the water isn't detecting any light.
  14. There seems to be no actual theory behind your idea, so no quantifiable prediction. What problem does it solve? If it doesn't solve one, and it also contradicts the last century of physics, and the only experiment you envision is one that is wildly unfeasible, then there's really no point in it all, is there? I mean, I could posit that pink elephants, found only on some planet 100 LY away, can locomote around with reactionless thrust. Why haven't we tested that idea? Or we can posit that physics is the same everywhere except for humans, all without evidence. OK, that last one is real, but funny that it would be so easy to verify, and yet no attempt is made.
  15. Well, it would match the 'usual methods' only if the clock at the 'afar' place was at a similar gravitational potential relative to the clock measuring the trip around the ring.

Would it now? That seems to violate your stipulation that the ring rotates at the same angular velocity as the material local to it. Maybe you imply that the Sagnac device would somehow not work correctly in this situation. Would the sum of the two signal times still be the same as in the non-rotating case, which is basically πD/c? Also, an interferometer measures femtosecond differences and would be pretty inappropriate for measuring the ~4 month difference in the signals going each way. How about we use a calendar instead? (The standard Sagnac relation is quoted below for reference.)

A superfluid wouldn't move with the spinning thingy, but a regular fluid would, and energy input would be needed to keep it spinning. OK, so you measure the rate at which the fluid rotates by somehow confining sound to a ring of the stuff. Clue: ditch the plastic galaxy and instead spin a hula-hoop with fluid in it. You can put your sonic rotation sensor in there and measure whether the fluid rotates at the same rate as the hoop, slower, faster, backwards, or not at all. With light, relativity predicts numbers corresponding to no rotation of the 'medium' at all, regardless of where the ring is going or how fast it spins. You can speculate some different answer, but absent an actual experiment, what's the point except denialism?

What does any of this have to do with the presence of dark matter or not? It's just more matter which is hard to see or feel directly. It isn't anything magical, and this experiment seems to be an attempt at an aether detector, not relevant in any way to the existence of dark matter or not.
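For reference, the standard Sagnac relation (to first order, for a vacuum/mirror path) for a closed loop enclosing area A and rotating at angular velocity Ω is

\[
\Delta t = \frac{4 A\,\Omega}{c^{2}} ,
\]

which depends only on the enclosed area and the loop's own rotation rate, not on any fluid, galaxy, or 'medium' nearby. Plugging in whatever radius and spin rate one assumes for the ring gives the expected difference between the two signal times, to be compared against any speculated alternative.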
  16. Totally agree that 100% is unrealistic. They're not all going to make some arbitrary goal. Fire is a bad goal because they already have it at the start, so 100% in that case. The 'after 50000' part is not specified as a deadline; the original poster allowed different times. Getting there early is arguably bad as well: if they get to, say, space in 40000 years, odds are very high that they won't still be technological after 50000. There are multiple deterministic interpretations of QM, so unless you're aware of them all being falsified, we know no such thing.
  17. To quote his clarification: So its taking longer or not is irrelevant to what the OP is asking. The question is what percentage gets there.

There are of course several problems with the OP. The fact that humans need to be 'put on' each of those 1000 planets implies that the planets don't already have people on them, and if they don't already look exactly like our Earth 50000 years ago, what else is different? Is there something to eat? The unstated differences are likely to make survival impossible. So we all took the question as seemingly intended: 1000 identical copies, complete with all people at the time, solar system, etc.

He asks just above: "would fire be created". Fire happens naturally, so of course. Controlled fire by humans has been around for nearly a million years, so again, of course. He gave a short list, and the implication was simply a technological outcome; enough to give us the capability of wiping ourselves out seems to be the only criterion that matters. Yes, I agree that the question wasn't posed with particular rigor.

Another assumption is non-deterministic physics (or some other environmental difference on each of the 1000 worlds). If everything is identical and physics is deterministic, then not even chaos theory allows any differences at all between them. This implies the 1000 worlds are each isolated in their own identical observable universe. Arguing about the assumption of non-determinism is again beside the point, since its absence renders the OP question moot.

I pointed out repeatedly that they would already have fire. I don't know what you mean by a planet having 'no history'. I mean, history is part of the state of a thing, so if the history is different, the copy isn't identical. Are the humans on these other places stripped of the knowledge and experience that the humans on actual Earth 50000 years ago had? I don't think the OP intended anything of that nature.
  18. You're talking about flat spacetime that wraps in one dimension, sort of like a Pac-Man game. Yes, one can do SR exercises in that, eliminating, say, the necessity of acceleration, but at the cost of having a preferred orientation of axes and also a preferred frame, a violation of the first premise. So you, at Earth, see a line of Earths in opposite directions but none of the others. You can travel to one of them and compare clocks when you get there. It's the same Earth after all. Or Earth can travel to you. You already have that with normal spacetime (infinite in all directions). The difference in age between the twins when they reunite is a physical fact, meaning it isn't in any way frame dependent (the arithmetic is sketched below). And in so doing, they have a test for being stationary in the preferred frame. You seem to realize this because yes, the one-way speed of light can now be measured. Given the wrapping geometry you describe, absolutely, yes.

Sounds like a Sagnac effect, which yes, can be used to detect rotation. Thus rotation is absolute.

I cannot follow this. Yes, there is frame dragging, but it isn't going to be measurable at such low gravitational gradients as exist at the edge of a galaxy. You have a ring, implemented with, say, a number of mirrors to keep the refractive index at 1. What experimental result are you expecting here? Is the ring rotating with the galaxy? Is this just a huge-scale Sagnac rotation sensor? What at all does this have to do with your wrapping SR example? As soon as you introduced a galactic mass and dark matter and such, SR no longer applies.
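To make that frame-independent age difference concrete (my notation, assuming a spatial circumference L measured in the preferred frame): the twin at rest in that frame logs t = L/v between meetings with a twin who circumnavigates at constant speed v, while the traveller logs

\[
\tau = \frac{L}{v}\sqrt{1-\frac{v^{2}}{c^{2}}} ,
\]

so whoever is at rest in the preferred frame always ages the most on reunion, and that is precisely the test for being stationary in it.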
  19. He said "ie invented fire, internet, space flight, cars etcetc", and those things are pretty inevitable given the state 50000 years ago. It just doesn't take exactly 50000 years in each case. Maybe 20000 or 200000, but it gets there. You said chaos theory says 'no' to the question in the OP, and it was that with which I was disagreeing.

Agree about the annihilation bit, but the goals mentioned in the OP would already have been achieved in that case. We were never capable of destruction of the species before the advent of fire, flight, and cars. It could admittedly happen (and almost did) before the internet came around, so that one is far less inevitable.

About the 'crucial person', that depends on what you mean by it. I mean, sure, one idiot can push the button and doom a world permanently back to near animals. There are several examples of crucial decisions that prevented things like that. But other things: no Einstein, say. Somebody else would have stepped up, and in short order. The time was ripe for one or more world wars given the state of things around 1900. The first war was perhaps a political fluke, but the second seems almost mandatory given the advances in military technology at the time, making the transitions from [he who has the most soldiers] to [he who invents (and uses) stuff first] and finally to [endgame where either MAD dominates, or one power takes a permanent brutal hold].

Of the 1000 worlds (from 50000 years ago, not 1900), how many manage to place a permanent self-sufficient off-planet colony somewhere? Not many. Not ours, I bet.
  20. Can't totally agree here. Sure, evolution would have gone in a completely different direction in each place, but humans were pretty much as they are now by 50000 years ago. Neanderthals were already well integrated into the bloodlines of the non-African populations. Most of the adaptations made since then revolve around dealing with new diseases as they pop up, which probably would be wildly different from world to world.

Chaos theory has strange attractors, and humans going technological is a reasonable inevitability from a state that recent. So maybe 25% of those 1000 worlds? It would of course take more or less than 50000 years to do it on each. The 25% is a POOMA estimate. Probably higher now that I think of it, since the only way to stop it would probably be something that wipes us out, unlikely in that short a time. The intelligence needed is already there (and is actually currently on the decline), so it seems to be a matter of time.

As for an illustration of chaos theory, consider 1000 copies of Earth circa 1900. None of the humans alive today would exist on any other world after 123 years. There's no attractor for that. War(s) is inevitable, but the actual main players are not. Of those 1000 worlds, technological civilization would end before 2023 in many (a majority?) of them. We've come too close too many times to make it particularly probable that we get as far as we have.

Humans have had controlled fire for at least 3/4 of a million years. 50000 is nowhere near far back enough to ask if it would happen again.
  21. A for() loop is just a generalized form of a while loop; they're essentially the same thing. A do loop is different only in that it executes the body unconditionally at least once, which is inappropriate for summing an array since it would generate erroneous results with an empty array. Examples of each, including the invalid do loop. Each assumes a line int sum = 0; at the top and an int array named numbers.

    for (int x = sizeof(numbers) / sizeof(int); x; x--)
        sum += numbers[x - 1];

    // alternate way to do it
    for (int x = sizeof(numbers) / sizeof(int); x; sum += numbers[--x]) {}

    int x = sizeof(numbers) / sizeof(int);
    while (x--)
        sum += numbers[x];

    // wrong: the body runs once even when the array is empty
    int x = sizeof(numbers) / sizeof(int);
    do
        sum += numbers[--x];
    while (x > 0);

    // An example of traversing the array forwards instead of backwards.
    // This can be done for any of the loops above, but I find comparison with zero to be more optimized.
    for (int x = 0; x < sizeof(numbers) / sizeof(int); x++)
        sum += numbers[x];

Except for the inline definition of int x, this is pretty much C code, not leveraging C++ features. The differences, besides the do loop being buggy, are that the for loop, the way I coded it, lets x drop out of scope after the loop exits. I find them all fairly equally readable since they're all essentially the same code.
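For contrast, here is a sketch that does leverage C++ features (my addition, not part of the comparison above; the sample array is just for the demo):

    #include <numeric>    // std::accumulate
    #include <iterator>   // std::begin, std::end
    #include <iostream>

    int main() {
        int numbers[] = {1, 2, 3, 4, 5};   // sample data, assumed for illustration

        // No index bookkeeping at all:
        int sum = std::accumulate(std::begin(numbers), std::end(numbers), 0);

        // Or a range-based for loop:
        int sum2 = 0;
        for (int n : numbers)
            sum2 += n;

        std::cout << sum << ' ' << sum2 << '\n';   // both print 15
    }

Either form sidesteps the off-by-one and empty-array pitfalls entirely, since the bounds come from the array itself.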
  22. Escape velocity of Earth from the surface is about 11.2 km/sec, and most of that energy is needed just to get to the moon, which is itself only around 1/2 km/sec short of escaping Earth. I wouldn't call that 'well under'. Similar reasoning shows why it is so much easier to escape the solar system from Earth's orbit than it is to drop an object into the sun, let alone actually go into low orbit around it, which is currently beyond our technological limits.
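A back-of-envelope comparison (my numbers, from the vis-viva relation): starting from Earth's orbit, where the orbital speed is about 29.8 km/s,

\[
\Delta v_{\text{escape}} \approx \left(\sqrt{2}-1\right)\times 29.8 \approx 12.3~\text{km/s},
\qquad
\Delta v_{\text{sun drop}} \approx 29.8\left(1-\sqrt{\tfrac{2 r_\odot}{1\,\text{AU}+r_\odot}}\right) \approx 27~\text{km/s},
\]

so cancelling enough orbital velocity to graze the Sun costs more than twice the Δv of leaving the solar system outright.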
  23. In the model where t 'runs' at all, no such wait is needed. That's the model with the universe being contained by time, just like you and I are. We have a start and a finish, with time unaffected by our temporary presence.

You mean we study things from our current event (a perspective that, in the large scale of things, is a point in space and time). Science regularly describes the universe outside this one perspective, so no, it isn't particularly mind bending to consider it as a whole, without a preferred moment or preferred location.

As for time being T or t, there is not really a difference. It is what clocks measure, no matter the interpretation. There is not a 'relativistic time' that is different from that, at least not according to relativity theory. I suppose there is a difference between the two in some kind of absolutist theory (something in denial of Einstein's theory), but then T is really undetectable, not something that can be measured or demonstrated. So then T would be that which flows (instead of a dimension), and t would be what a local clock measures, and a second of t would not correspond to any particular duration of T. I'm not sure if the absolutist theories reference T at all in a generalized theory.

If they are that like us, then their stories would be similar. As for it being 'back in time', my comment referred to the relativity of simultaneity. The current time of any distant place is frame dependent, and thus whether this planet occurs prior to or after our civilization is a matter of convention. If the universe appears younger to this distant place, then by that convention they are prior, and their point of view shows a younger universe, and perhaps they are unaware of things like accelerating expansion, which didn't really start happening until around when our solar system formed.

OK, well that is a widely held view, but it counters relativity theory. The vast majority of people don't care about the implications of relativity theory. The opinion is not necessarily wrong. There are alternative theories that take this stance, even if it took about a century longer to generalize them into an actual theory of the cosmos instead of just a local theory.

I never said it was a coordinate. I said it was a dimension in relativity theory, but it only has a defined orientation if a coordinate system is specified. Relativity theory (GR, not so much SR) interprets the cosmos geometrically as 4-dimensional spacetime, with one temporal and three spatial dimensions.
  24. This seems to rely on the assumption that the universe is contained by time, rather than it containing time, per relativity theory. If so, then yes, the universe is a thing that didn't exist before, and somehow came to be after a countdown reaches zero from a finite or infinitely distant prior moment. Relativity theory says time is just one of 4 dimensions of spacetime, part of the universe, rather than the universe being a temporary object contained by time.

This seems to suggest that time itself was created at some moment, and that at prior times, time didn't exist. That seems pretty self-contradictory.

Depends on how close to Earth-like you want. A rocky planet in the habitable zone? Plenty of those. One with an atmosphere we can breathe? No evidence of anything like that. As for whether Earth was first among the nearby planets, that seems absurd. There are plenty of older star systems. As for Earth being prior to really distant Earth-like places, per relativity of simultaneity, which one came first is a matter of the convention you choose to compare ages.

You talk about defects in time, but give no clue as to what you might mean by that.

No. Space is up, a direction perpendicular to north. The analogy is apt. There is no north of the pole, nor is there 'down' beyond about 6500 km. These are examples of dimensions that are bounded by the coordinate system, exactly the way time is bounded at the big bang by a cosmic coordinate system.

This seems to suggest some sort of cyclic model, but they've had great trouble finding one that matches empirical evidence.
  25. The event horizon is a null surface, and as such has a coordinate area, not a volume. I'm not sure of the relationship between this area and its angular momentum, all else being equal. Yes, a rotating black hole (Kerr metric) contains a ring singularity.

A charged black hole (Reissner-Nordström metric) cannot have its charge exceed its mass (in geometric units; see the relation below). So no additional charge can be added to one that is sufficiently close to that limit. The mass is unaffected by this, and the event horizon 'radius' is determined by the mass. If there is a black hole with charge equal to its mass, you get a naked singularity, which is a singular solution to the metric.

So (just thinking out loud here, not an authority), if you have a super-positive charged black hole near this limit, a negatively charged particle would be more attracted to it than a neutral particle. Thus I would think there would be a second, charged event horizon for the negatively charged thing that is further out than that of, say, a neutrino. Also, the EM potential would be so steep that it would probably rip apart (an EM 'tidal' effect) neutral things like a neutron, pulling it into charged components and accepting only the one.
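For reference, in geometric units (G = c = 1) the Reissner-Nordström horizons sit at

\[
r_{\pm} = M \pm \sqrt{M^{2}-Q^{2}} ,
\]

so horizons exist only for |Q| ≤ M; Q = M is the extremal case, and Q > M would leave the singularity naked, which is the 'charge cannot exceed mass' limit referred to above.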