Schrödinger's hat

Posts posted by Schrödinger's hat

Hi,
I have been looking into how to build a coil gun and made one with a coil of copper wire. But what would happen if, instead of a copper wire, I used an entire copper cylinder with a hole drilled through it from side to side, so you have an air-core coil (a simple coil)? I've read that the denser the coil, the more power you get.
My question is, how would this affect the coil gun? Would it not work because the resistance of the cylinder is so low that it would effectively be a short circuit?
Another question I have is, could you make an automatic coil gun with a projectile speed of 100 m/s? When I say automatic, I mean something like 60 rounds per second.
Regards,
Jonathan
Your copper wire is insulated so that the electricity travels around many times. This is what causes the strong magnetic field.
A tube would allow the electricity to flow in any direction along the surface. Electricity follows the path of least resistance, so it would make (at most) half a turn (or maybe one).
For your second question, that is within the realm of possibility, but it would be a substantial engineering challenge.
0 
Just discovered www.maa.org/devlin/lockhartslament.pdf and thought it worth sharing/possibly stickying.
3 
Hi All,
What does the plus in the following mean in computer theory, and what words would the example produce?
(aa+b)*
Stephen
People are more likely to be able to help you if you add a little more context.
Maybe a few sentences around where you encountered it, or the name of the book/chapter/section/article/etc
0 
Imfataal, the reason you're having trouble is:
Assuming his radix is an integer.
[math]23_x = 2x + 3 = 2(x+1) + 1 = 2n+1[/math] is odd.
[math]111100010_2[/math] is even.
Either his radix is not an integer (rational solution to [math]2x+3 = 482[/math]) or there's a transcription error somewhere.
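This is quick to check in Python (a throwaway sketch; `int` accepts radices 2 through 36):

```python
# "23" in any integer base x has value 2x + 3, which is always odd,
# while 111100010 in binary is even, so no integer base can work.
assert all((2 * x + 3) % 2 == 1 for x in range(4, 37))

n = int("111100010", 2)
print(n, n % 2)  # 482 0 -- even, as claimed
```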
1 
Fun concept to play with:
I find these help the imagination a bit.
http://www.urticator.net/maze/
Unfortunately it's quite hard to wrap your head around anything more complicated than simple rectangular-type rooms/objects. Also, what you are viewing is a 2d projection of a 3d projection (or 3d slice) of a 4d scene.
There are plenty of animations around youtube in a similar vein.
Edit: Also worth noting is that these are euclidean spaces. There are other possible geometries, 4d (flat) spacetime is one (minkowskian rather than euclidean) and is dissimilar enough that you still have to do a fair bit of work to imagine it once you get the 4d euclidean thing.
On a more serious note, there are some things which are kinda special to 3d. It's the lowest dimension in which you can have a hollow thing with two openings (in 2d, if you try to have a tube through the middle so you can digest, you instead have two separate objects).
Stable orbits don't work very well in other numbers of dimensions (I cannot remember if it's all other dimensions or just all low numbers), so we likely wouldn't have a universe full of swirly stuff.
0 
Also worth noting is that your muscles consume energy both when lifting the object up and when lowering it down again gently. This is not a necessity: a machine could be built which reabsorbed some or most of the energy it used to lift the object; it's just the way muscles work (resisting motion is close to the same process as moving in the first place). They also consume some energy when exerting a constant force (i.e. just holding something heavy). A table (or even an electric motor, if it has some kind of lock/ratchet/etc.) does not have to do this.
0 
For example, Newton established the law of universal gravitation on the basis of the square of the distance not because it follows from some logic, but because it gives the correct result. Simple logic would tell you that the attraction between 2 bodies would be a function of their mass and of the distance between the 2 bodies. The use of the squared distance is altogether a touch of genius and a very strange feature.
Why the distance squared? Not even twice the distance, but suddenly a measurement in meters that you have to square to get a surface in square meters: something that you cannot measure directly. Exactly as if we were measuring the square root of "something else".
The same strangeness occurs in e=mc^2, but since the equation gives the correct result, who cares?
Distance squared makes a lot of sense if you stop to think about it.
If you have some kind of quantity which is conserved or preserved, and you are spreading it out over a three-dimensional volume, with the source in the middle and the quantity coming out in a roughly spherical shape, then it will be spread over a spherical area that grows at the same rate a sphere's surface grows. That rate is proportional to r^2.
I don't think/know whether Newton used this reasoning (unlikely as he came up with some of the maths which was later used to prove this principle and talk about the idea of conserved fields in general), but it's a very simple argument that can be stated and proved with calculus, providing the premise is assumed (spherically symmetric and conservative field).
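A sketch of that argument in symbols: if a source emits some conserved quantity at rate [math]P[/math], spread evenly over a sphere of radius [math]r[/math], the intensity at that radius is

[math]I(r) = \frac{P}{4\pi r^2} \propto \frac{1}{r^2}[/math]

which is exactly the inverse-square form.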
There is some level of ad hoc reasoning and intuition in any physical theory, but this is perfectly acceptable, as long as the theory is internally consistent (mashing symbols together is fine, if often unproductive, as long as you follow the rules, because following the rules of maths _is_ logical reasoning, even if you have no idea what you're doing).
However, the assumptions you are working with are completely up to you. If these assumptions are small in number, differ from existing assumptions minimally, are provably consistent with all known results and (this is the important bit) yield novel, specific and preferably precise predictions about things that are presently unknown (often also unexpected) which then prove correct, you have a good physical theory.
Both Newtonian mechanics and relativity did this. And understanding the difference between this and just pulling something from your posterior and kneading it until it fits the data is important.
2 
Both things have in common the fact that they exist and are in front of us, and I was thinking that when a living organism turns into elements when it dies, that doesn't mean that a living organism is a compound of those elements; therefore I suppose there are an infinite number of elements and we don't even know of their existence... (just saying)
Your computer is made of that same set of elements.
If I hit it with a sledgehammer, can you turn it back into a computer? Why not?
The thing that's missing in each case is a very precise and hard to achieve arrangement of the components. Nothing to do with what they're made of.
Re. Life, chemists are getting better at it. They are approaching creating an artificial cell from both a top down (get an existing cell and hijack it to produce a new cell to your design) and bottom up (starting from scratch).
There is a way to go yet, but here is a release about creation of something very similar to a cell membrane (the latter):
http://www.sciencedaily.com/releases/2012/01/120125132822.htm
And the Craig Venter institute who do the former:
http://www.jcvi.org/cms/research/projects/firstselfreplicatingsyntheticbacterialcell/overview/
Note that this is not an entirely new genome. It's more like they got the existing blueprints, cut as much stuff that they didn't need out, added a few bits, and then hijacked someone else's factory to build the thing.
Neither of these count as a fully synthetic life form in many people's books, but it is getting pretty close.
0 
Ok. Now I'm beginning to understand a little bit more about how to write algorithms in Matlab, but I'm still going to need to do more reading.
Schrödinger's hat, what do you mean by saying "Matlab is an interpreted language"? I was familiar with some Fortran programming back in my university days, so I understand what a compiler does, but what is a JIT compiler?
An interpreted language at its simplest just runs through the instructions that you give it one by one, in the order that you've written them and executes them. There's usually a 1:1 or 1:many relation between things you write and actual machine code, and the order is preserved.
This tends to be quite slow. If your program isn't looking ahead and finding out what bits of data it needs where, you can spend a lot of time waiting on memory or something on the hard drive.
One improvement is to look ahead a few instructions and bring stuff into memory or a cache on the CPU, but dynamic languages often don't know what they need ahead of time, as the meaning of references can change or arrays can grow etc., so the benefit of this is limited.
It also passes up opportunities to switch out what you wrote for harder to read/understand stuff that is equivalent and faster.
A JIT, or just-in-time, compiler presents itself as a simple interpreter, but actually does some compilation on your program before running it. Often this is to another interpreted, but easy for the machine to read (and hard for a human to read), language called a bytecode. Sometimes it is to actual machine code.
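As an illustration (using Python rather than Matlab, since CPython's bytecode is easy to inspect; this is not a claim about Matlab's internals):

```python
import dis

def axpy(a, x, y):
    return a * x + y

# dis shows the bytecode that the CPython interpreter actually executes
# in place of the source text above.
dis.dis(axpy)
print(axpy(2, 3, 4))  # the function still runs normally: prints 10
```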
0 
You seem to have realised at least some of the following, but I shall state it for the record.
Matlab is an interpreted language. I think modern versions have a JIT compiler and some kind of bytecode, but the importance of accuracy means they don't tend to get too tricky.
Some things that may allow you to utilize more of your computer's resources:
Use built-in, library and vectorised operations wherever you can; these tend to be written in C/Java/Fortran and are highly optimized:
A.*B
is far faster than doing it with a for loop.
Unroll tight loops (any with <10 instructions) if you can: manually make the body do several operations before looping, or even eliminate the loop entirely if it runs a small, fixed number of times. The JIT may take care of this, but it helped on the last version of Matlab I was using. You may be able to refactor or vectorize your equations somewhat so that more calculations can be done before you need to use them; sometimes even adding steps/temporary variables can help if it makes things more parallelizable (and memory bandwidth isn't the issue at hand).
Preallocate memory where you can. It can be hard to tell with a language like Matlab when you are allocating memory, but if you know ahead of time how big a vector/matrix is going to be, preset it to a zero array of that size before using it. Whatever you do, avoid incrementally increasing its size in a loop (again, the JIT can sometimes fix this for you, but don't rely on it).
Factorise out and precalculate invariants. Explicitly set performance-intensive loops to certain lengths if you know what they will be, rather than calculating as you go or using a while loop. If there is a number you are calculating repeatedly that is the same or periodic in some way, see if you can precalculate it.
The most important part is probably the first (built in functions and vectorization) which will tend to overwhelm the effect of the others. Also your algorithmic efficiency will in turn overwhelm effects from this. There is almost always a way of making an algorithm (asymptotically) faster, or making a tradeoff between time and space which is in your favour. The only question is whether the amount of effort required on your part is worth it.
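The same principles carry over to most interpreted languages; here is a hypothetical before/after sketch in Python (the function names are mine, not Matlab's) showing invariant hoisting and preallocation:

```python
import math

def naive(xs):
    out = []
    for x in xs:
        # invariant recomputed every pass; result list regrown every pass
        out.append(x * math.sqrt(2.0))
    return out

def tuned(xs):
    c = math.sqrt(2.0)        # invariant hoisted out of the loop
    out = [0.0] * len(xs)     # result "preallocated" up front
    for i, x in enumerate(xs):
        out[i] = x * c
    return out

assert naive([1.0, 2.0, 3.0]) == tuned([1.0, 2.0, 3.0])
```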
1 
Hmm, fair enough. Let's double it 3 more times. With 56 and 48 bits, 568 operations, and I still get .02 seconds. With 112 and 96 bits, 1178 operations, and I still get .02 seconds. Finally, with 224 and 192 bits, 2362 operations, and I still get .02 seconds. (from www.ideone.com )
Edit: I tried timing it with my own CPU rather than a website; even at 224, 192 bits, it was taking 0 seconds!
Now that is a fast multiplication algorithm!
Modern CPUs will tend to do about as much as you can fit in cache as quickly as you put it there for simple n or n^2 algorithms. Also if you don't exceed word length the computer probably won't know the difference.
Also, as Tiberius said, there will be larger numbers on the constant and first-order terms that change the results significantly for small n.
Try it again with a few thousand bits and then double that (you may have to use assembly, unless you want to go shuffling things between different words; not sure, my C knowledge is rather weak).
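If assembly or word-juggling in C is a barrier, Python's arbitrary-precision integers make the experiment easy. A rough sketch (absolute times depend on the machine, and CPython switches to Karatsuba for large operands, so expect roughly a 3x rather than 4x jump per doubling):

```python
import time

def mul_time(bits, reps=200):
    """Rough wall-clock time to multiply two bits-long integers reps times."""
    a = (1 << bits) - 1
    b = (1 << bits) - 3
    t0 = time.perf_counter()
    for _ in range(reps):
        a * b
    return time.perf_counter() - t0

# With operands of a few thousand bits, doubling the size
# gives a clearly measurable increase in time.
for bits in (4096, 8192, 16384):
    print(bits, mul_time(bits))
```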
0 
I was looking up the fastest multiplication algorithms on Wikipedia. Karatsuba showed up [O(N^{1.585})], Schönhage–Strassen showed up [O(N log N log log N)], all with respectable big-O scores, but a friend recently told me of an algorithm that seems to work in linear time (with respect to the number of digits in the numbers).
I can't seem to find it on Google, so I'll just describe it here with an example:
Suppose you have two numbers 141 and 45 and we want to multiply them. We keep dividing the first number by 2 until it becomes 1 (integer division), and keep multiplying the 2nd number the same number of times.
141 45
70 90
35 180
17 360
8 720
4 1440
2 2880
1 5760
Now, we look at the RHS and add up all the terms that correspond to ODD terms on the left: 45+180+360+5760 = 6345, which is 141*45.
Now as far as I could tell, there is a linear relationship between the number of iterations of division before you get down to 1 and the number of digits. (You end up adding 3 or 4 iterations per digit increase because multiplying by 10 is multiplying by 1010 in binary).
The relationship is something like K=3.322N +0.5, where N is the number of digits and K is the number of iterations. ( I wasn't too sure of a mathematical approach to the relation between K and N, so I plotted K vs N for N=1 to 600 and found that a linear relationship existed)
So in the worst case, for an N digit number, one has to do K multiplications by 2 and K divisions by 2. During the process, one has to do at max (if all LHS terms are odd) K additions as well. [Multiplying/Dividing a number by two is a constant time operation right ? Just a simple bitshift ?] So shouldn't the overall runtime be O(2K+K)
=O(9.966N+1.5) = O(N) ?
Am I missing something here? Can I consider division by 2 a constant-time operation as well? I assumed it was, because bitshifting seems like something that would be classified as constant time, just as adding zeros in Karatsuba's algorithm was considered a constant-time operation. Is there something wrong with the relationship between K and N? Will it deviate for higher values of N? (I couldn't really go beyond 600, nor could I derive a formal mathematical relationship.)
This is known as peasant multiplication. Both schoolbook multiplication and peasant multiplication are examples of a more general algorithm called shift-and-add.
Your reasoning is okay except for your definition of operation.
Addition is order N as well (i.e. the amount of time increases linearly with the length of the number), so you have to do K bitshifts and then one order-N addition for each bitshift that passes your oddness test. With K proportional to N, that gives N*N = O(N^2).
You can muddle with the time constants a bit by using different bases and memorizing a multiplication table for smaller numbers, larger is generally better.
This is basically the principle that Karatsuba/divide-and-conquer is based on, iirc, but the 'multiplication table' is the same multiplication algorithm applied to two smaller numbers. Because this process of picking a smaller base is recursive and starts with a base dependent on your original number, it can have lower asymptotic complexity.
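For concreteness, a minimal Python sketch of the algorithm under discussion (peasant/shift-and-add multiplication):

```python
def peasant(a, b):
    """Russian peasant multiplication of non-negative integers."""
    total = 0
    while a > 0:
        if a & 1:        # rows with an odd left-hand value contribute
            total += b
        a >>= 1          # halve (integer division by 2)
        b <<= 1          # double
    return total

print(peasant(141, 45))  # 6345, matching the worked example above
```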
1 
I am brand new to programming and I have no prior knowledge of programming or software other than the usual computer usage that I conduct on a day to day basis (on windows).
I wondered what the difference is between Windows and Linux, why there is a difference, and what they are best used for in a programming sense?
(I'm also considering learning Python as my first programming language)
Any help would be greatly appreciated.
The fundamental difference between operating systems is in the way they control the hardware, and the way they expose that control to various programs/bits of code. The piece of software that does all this is called the kernel.
This means a binary (set of computer/kernel instructions, usually an .exe file in windows) has to be written quite differently to run on one OS vs another.
Because of this (and because the underlying philosophy tends to differ), the tools that you use to communicate with the computer (command line utilities, the window manager (the thing that actually manages the graphical environment), etc.) tend to be different as well.
Another major difference that stems from this fundamental difference, is you wind up with different libraries (collections of code for making interacting with the kernel and hardware) available for different operating systems.
Basically what this means is the main difference you'll see in use of linux is there will be windows software you cannot use.
Most software and libraries designed for linux are open source. Because of this, someone usually comes along and recompiles (turns the human-readable source code into a machine-readable binary) or ports (modifies the source code slightly so that it does not ask the OS to do things it cannot, so that it can then be recompiled) any linux software for windows.
Lots of windows software is closed source (you cannot get the human-readable source code, only the machine- and OS-specific binaries), so this process doesn't tend to happen as much the other way around.
The other difference you'll see is tools for interacting with the OS will be different. The various graphical user interfaces available for linux (windows only has one) are very different from each other, and only a few of them are all that similar to the traditional way of doing things in windows.
The commands used in the command line interface are also quite different, as are the various command shells available (again, windows only has one: it's that black-background window that comes up when you open a run dialog (windows key+R) and type cmd).
Most/all linux distributions I've seen (a name for the linux kernel + command line tools + a big bunch of software including window managers etc etc) come with a fairly standard set of command line tools and either the bash shell (the command line thingy) or one so similar that it is very hard to tell the difference.
A kernel is pretty useless without a basic set of tools like this, so we usually use the term operating system to refer to the kernel+tools/basic libraries.
The precise term for most of the operating systems referred to as linux is GNU/Linux, i.e. the GNU operating system with the linux kernel. The GNU part just refers to the behaviour and toolset that accompany the kernel.
Now.
On to python.
Luckily for you, python is an interpreted language (not technically completely true, but close enough, and I don't want to explain bytecode right now). This means there is a binary file that sits between your program and the OS, interprets a standard set of source code commands, and runs your program for you.
As such, you don't have to worry about all the messy details of what OS your python program is running in, you just program python and the interpreter takes care of the rest.
The main advantage of linux for your case would probably be that getting programming tools is very easy on most modern distributions.
It's usually as simple as typing in a command like:
sudo apt-get install python
or opening a graphical package manager and clicking on python
Which will download it, check that you have all the necessary libraries to run it (and if not, download and install them) and install it for you.
That's if your distro doesn't already come with it (many do).
Python is probably a good choice for a first language. Its syntax is similar enough to most popular languages that it is easy to transition to something else. There aren't many strange quirks that you have to deal with, and you don't have to worry about low level language issues.
There is some division of opinion as to whether picking a high level language (one that takes care of all the messy details like python) first is a good idea, or whether you should start with a low level language to get a better understanding of what it is you're actually telling the computer to do.
If you are planning on learning a lot about programming in the long term, and don't get discouraged easily, I would say that learning C first is a good idea.
If you're not 100% sure about it, or want to make interesting things happen quickly then I would say starting with a high level language is a good idea.
4 
It sounds like you handled that extremely well, as well as being extremely patient.
0 
The Copenhagen interpretation describes what we can expect in terms of an experimental outcome. Whether that means anything about a larger "reality": the Copenhagen interpretation doesn't care. The CI takes an instrumentalist (aka non-realistic, as opposed to realistic or anti-realistic) point of view. Another way to say it: "Shut up and calculate".
 "Shut up and calculate."
 There are no hidden variables.
 The ket isn't quite real.
 The collapse of the wavefunction is what counts.
 "Does the moon exist if we don't look at it" is a nonsense question.
Everywhere else I've read about it, this was called something along the lines of the statistical, ensemble, minimalist, or Born interpretation and the phrase 'Copenhagen interpretation' was reserved for the idea that the process of measurement altered something nonlocally that was part of reality. Do you have some sources for further reading?
0 
So in other words, you're saying that because the fabric of space is distorted, the probability will tend to follow that curvature? What if half of the particle is in the black hole and the other half is outside of it? I guess maybe the half that's inside would look like a cone being stretched towards the event horizon, and the outer half would still look relatively like a sphere?
On a local scale the event horizon of a black hole looks no different from normal space.
For a big enough black hole ie. galactic centre, even a person/house sized object would have difficulty detecting the tidal forces (ie. they wouldn't be able to figure out they're near/entering the event horizon without looking out the window or doing some quite precise experiments).
The tidal effect (or curvature in the region if you want to view it that way) influences the solutions to the wave equation, but it's not quite as simple as the idea of squishing/stretching the wavefunction as if it were macroscopic squishy ball in euclidean space with a force acting on it (although this may be a reasonable heuristic, I do not know enough of the details of QM in a curved background to know).
0 
Does gravity make any difference to the probability wave?
Or is that more mix n match?
My understanding is that gravity is perfectly fine, and routinely used as a backdrop for QM calculations.
Either Newtonian gravity (where it is viewed as a potential) or gravity as the geometry for the coordinates involved (i.e. QM with a general relativistic backdrop). My only encounters with this are in a rather abstract and completely non-applied/mathematical context, where I didn't fully understand the mapping between what I was doing and anything related to my personal experience in the macroscopic world.
In both of these cases, the gravitational field is important for modifying the quantum system you are examining, but the quantum system is assumed to have no influence on the gravitational field (ie. the mass of the quantum system is negligible compared to the source of the gravity).
The problem comes in when you try to calculate the spacetime curvature generated by your quantum system and how the quantum system reacts to the gravitational field at the same time.
To simplify my (already incomplete) understanding, you need your geometry settled before you can put your spatial coordinates into your quantum equations, so if the way the coordinates interact depends on your wavefunction, you can't solve the equations anymore.
MigL, ignoring the change in the gravitational field due to the interactions involved during Hawking radiation is fine, as they are minuscule compared to the overall gravitational field. As such, mixing and matching in this case is going to give you an answer that is very, very close to the truth (i.e. QM on a GR background, a hard, but frequently dealt with, situation).
1 
and is nonrealistic.
Elaborate on what you mean by this?
Also elaborate on what Copenhagen interpretation means to you
My understanding is that it's a bit of an umbrella term that includes some types of objective collapse, consciousness-causes-collapse nonsense, and stuff involving modification of how we think about whether certain statements are true/false (i.e. consistent histories).
1 
After much research online I have seen more guesses than anything about this question, so I thought, this being a science forum, we would put science behind it. I guess we should give the question some parameters: the individual is a 30-year-old male, 5 ft 9, 165 lbs, in good health. The amount of beer can be adjusted if it does any good, but each beer contains 153 calories and 5 grams of carbohydrates; other contents are irrelevant.
Another thing to consider is alcohol content.
If you are unable to get enough calories to survive without consuming enough alcohol to kill you/make you severely ill then the beer could result in living less time.
Given sufficient water, then you'd still need to consider amount of energy required to metabolise the alcohol and get it out of your system.
Given this, a low alcohol content, dark beer (more calories) would probably be your best bet, but I would still think it would be questionable as to whether it will help you survive much longer than no food.
You need to include water intake in your 'parameters.' Also, how is 5'9" 165lbs, in good health? Sounds kind of starved to death . . . . poor fella'.
** must be drinking too much beer
5'9" and 165lbs is starving to death? I'm 5'11" and 135lbs....does that make me already dead?
1 
I don't think this is a "its whatever answer you want" type of question though, I think this is a specific scenario that could even be tested one day.
The EPR paradox is a scenario that can be (and to some degree has been) tested. The results so far agree with the predictions made by the mathematics, but there are multiple valid interpretations of the mathematics.
All interpretations (with no extension to quantum mechanics) require that this phenomenon be unable to transfer information. Many worlds preserves locality, but implies the existence of the many universes that give it its name.
Copenhagen only entails a single universe, but requires the wavefunction collapse to be a nonlocal phenomenon.
The situation could be viewed as roughly analogous to the gauge choice in electromagnetism.
The Coulomb gauge is fully compatible with observation and preserves causality, but entails fields which change nonlocally and instantaneously, whereas the Lorentz gauge does not.
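For reference, these are the standard gauge conditions (with [math]\vec{A}[/math] the vector potential): the Coulomb gauge imposes [math]\nabla \cdot \vec{A} = 0[/math], while the Lorentz gauge imposes [math]\partial_\mu A^\mu = 0[/math], which treats the space and time components of the potential on the same footing and so sits naturally with relativity.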
Relativity gives us a clear reason to prefer the Lorentz gauge (for metaphysical purposes at least; the Coulomb gauge is still quite useful for making predictions/modelling things), whereas there is still debate over which interpretation of quantum physics should be considered simplest (and a large camp of people who consider it a largely irrelevant question and say we should just get on with making better mathematical theories because some later discovery will likely later render whatever ontology we come up with irrelevant anyway).
2 
If I have two particles and I separate them by many light years, one on Earth and one near a black hole, since their states would be instantly determined by any measurement via their inherent mathematics, wouldn't I instantaneously break the entanglement even in far different time dilations to any frame of reference (granted that I am not considered the actual light that would need to travel to a measuring device)?
What you are describing is essentially the EPR paradox.
The key thing to remember about measurements on entangled particles is that you have a pair of events that are correlated, but have no detectable causal relationship to one another.
As such, whether or not you view there to be an effect from one measurement on the other depends on your interpretation of QM.
Within the Copenhagen and objective collapse interpretations, there is an instantaneous, nonlocal effect, but this effect cannot be used to transfer any information without presently unknown and undetected nonlinearities in the theory.
Within certain variants of many worlds, there is no such nonlocal occurrence; the measurement is instead the process of the measurer becoming entangled with one of the possible universes, so when they return and communicate with the other measurer, they will find a correlated result.
1 
I agree with everything Hyper had to say, but thought I would add this:
If you truly think of her as a twit (rather than simply ignorant or insufficiently practised in the relevant skills), then I would say you are likely not the right person to be tutoring her. Proceeding to help her with the individual pieces of assessment rather than addressing the underlying problems (assuming they are addressable) of lack of general reading comprehension/critical reasoning skills is not doing anyone (including Holly, her classmates, her potential future students, and her potential future employer) any good in the long term.
Assuming she is capable of learning (and you have the relevant interest/skills for teaching critical reasoning) then I would question whether the pay you are receiving for the task is fair (I would say no), or even if the time is sufficient.
My recommendation would be to discuss your concerns with Holly and see what transpires. If she is willing to put in large amounts of work on her own, it may not be an insurmountable obstacle.
1 
Then, I assume, the answer is that there is no literal proof for this interpretation.
Correct; they are largely a matter of ontology and metaphysics. Any interpretation which made different, testable predictions would be considered a theory, and work would be done to check it. This is not to say people aren't trying hard to find testable predictions for these two (and other) interpretations, just that no definitive ones have been found to my knowledge.
1 
Heh, that was a decent attempt, timo. I'll have a go as well.
You'll get many different interpretations of QM if you ask many different people. A common interpretation of the phrase many-worlds is basically what timo described, privileging consciousness as some kind of magic wand which splits universes. This is similar in many ways to how the Copenhagen interpretation is/was often presented (with the experimenter/consciousness as some kind of magic wavefunction-collapsing wand).
The more coherent versions of each are as follows:
The Copenhagen interpretation is a bit ambiguous about the objective reality of the wavefunction. It deals with it as a model or abstraction of reality, which may or may not be true. Measurement is a process which is very difficult to define (and, as far as I am aware, left somewhat undefined except insofar as what the result will be), but easy to point to (it's the thing that guy in the lab did with those results). It collapses wavefunctions down into single possibilities with measurable results. The main objection to it is that the idea of measurement and quantum collapse is a poorly understood, complicated process that needs to be inserted into the theory, making it more complicated.
There is room for some variations of interpretation within the Copenhagen interpretation, from completely denying the wavefunction as a way in which reality behaves and instead considering it a model with predictive power over large numbers of trials of an unknown but complicated process to ones where wavefunction collapse is an objective phenomenon.
The many worlds interpretation comes from simply taking the wavefunction at face value. We observe that particles take on superpositions when interacting with other particles in superpositions. One then inserts some (often vague) argument about macroscopic objects being one of the distinguishable states of this superposition. Measurement and 'universe splitting' is simply the process of becoming entangled with a specific state of another superposition. There is no privileged system which can do the measuring.
The main objection to this is, although it makes the theory simpler, it entails an absolutely preposterous number of universes (a split for every interaction that's ever resulted in a distinguishable difference).
Re. the dice example: depending on how far back you go (maybe after it's thrown), that may just be a deterministic phenomenon, insofar as there was nothing in superposition large enough to change the result of the roll.
I.e., there'd be universes where a photon bounced off the die in one direction, and universes where it bounced in another direction, but it might be showing 3 in all of them (if you go back to where some small change could alter the outcome, you would have splits with different rolls).
Re. quantum suicide: the experimenter would be just as dead in (n−1)/n of the universes as if they had committed suicide by any other means, so it strikes me as an absurd way of going about testing MW.
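To make those odds concrete, here is a minimal sketch (my own illustration, not anything from the thread) of the branch counting, assuming an idealised n-outcome device with exactly one surviving outcome per run:

```python
from fractions import Fraction

def surviving_fraction(n: int, trials: int) -> Fraction:
    """Fraction of branches in which the experimenter is still alive
    after `trials` runs of an n-outcome device with one 'safe' outcome.
    Each run kills the experimenter in (n-1)/n of the branches."""
    return Fraction(1, n) ** trials

# With a 2-outcome device, ten runs leave the experimenter alive
# in only 1/1024 of the branches.
print(surviving_fraction(2, 10))  # 1/1024
```

The point of the objection survives the arithmetic: under MW the experimenter's surviving branch sees an improbable streak, but every other branch just contains a corpse, which no external observer can distinguish from ordinary bad luck.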
TL;DR There are lots of interpretations of QM. Most sane variants of Copenhagen and MW have adherents within serious scientific circles (or at least people who will state that they use x interpretation when pressed), but Copenhagen is a lot more common. Most physicists just take a "shut up and calculate" approach, as the maths works extremely well and doesn't care how you interpret it. Anyone who takes one or the other too seriously can probably be safely dismissed, especially if they insert consciousness as a magic measuring wand.
2
Does Gravity Slow Light Moving Vertically?
in Relativity
Posted
Just chiming in to, hopefully, get this discussion back on topic. I've read some of the thread and only skimmed other parts. I am also probably not as knowledgeable about GR as xyzt is.
First note, primarily to Iggy:
Aside from any reasoning mistakes you may or may not have made, you appear to be using fairly specific (but not widely used) definitions of a number of terms that become somewhat ambiguous in general relativity. Such terms are often context dependent and should be defined for the purposes of the discussion if they are to be used at all. Velocity (outside of local velocity) is one such term.
You are also using terms like "simultaneous" and "proper distance" to refer to events that do not share an obvious inertial reference frame, as if we were all on the same page (doing so is going to require further qualification).
Thirdly, you appear to be privileging results derived from a specific coordinate system. The words, definitions, and results so derived are frequently only going to be useful within said coordinate system.
These are all things that aren't really appropriate in a response to someone asking a question while they are learning SR/GR. Presenting natural language interpretations/explanations of mathematical results that differ from the mainstream ones is also not really appropriate in a thread like this, regardless of whether or not the result is internally consistent with the definitions you've chosen.
xyzt:
If you encounter someone saying something that is likely to cause confusion or derail the thread, it can be better to suggest they take it up in private messages or another thread with you until you figure out your disagreement, or inform the mods early if this fails. The main thing I can see about Iggy's posts which is disruptive is his unstated assumptions and definitions which are a good reason to ask him to take his discussion elsewhere independently of whether his reasoning is internally consistent.