Everything posted by Schrödinger's hat

  1. Just chiming in to, hopefully, get this discussion back on topic. I've read some of the thread closely and only skimmed other parts. I am also probably not as knowledgeable about GR as xyzt is.

     First, primarily to Iggy: aside from any reasoning mistakes you may or may not have made, you appear to be using fairly specific (but not widely used) definitions for a number of terms that become ambiguous in general relativity. Such terms are often context dependent and should be defined up front if they are to be used in a discussion at all. Velocity (outside of local velocity) is one such. Second, you are using terms like "simultaneous" and "proper distance" for relations between events that do not share an obvious inertial reference frame, as if we are all on the same page; doing so requires further qualification. Third, you appear to be privileging results derived in a specific coordinate system; the definitions and results derived that way are frequently only useful within said coordinate system. None of this is really appropriate in a response to someone asking a question while learning SR/GR. Presenting natural-language interpretations/explanations of mathematical results that differ from the mainstream ones is also not appropriate in a thread like this, regardless of whether the result is internally consistent with the definitions you've chosen.

     xyzt: if you encounter someone saying something likely to cause confusion or derail the thread, it can be better to suggest they take it up with you in private messages or another thread until you figure out your disagreement, or to inform the mods early if this fails. The main thing I can see in Iggy's posts that is disruptive is the unstated assumptions and definitions, which are a good reason to ask him to take the discussion elsewhere independently of whether his reasoning is internally consistent.
  2. Your copper wire is insulated, so the electricity travels around the loop many times; this is what causes the strong magnetic field. A tube would allow the electricity to flow in any direction along the surface, and electricity follows the path of least resistance, so it would make at most half a turn (or maybe one). As for your second question: that is within the realm of possibility, but it would be a substantial engineering challenge.
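     For concreteness, the standard solenoid result (my addition, not part of the original reply): a coil with [math]n[/math] insulated turns per unit length carrying current [math]I[/math] gives an interior field of roughly [math]B = \mu_0 n I[/math], so every extra insulated turn strengthens the field, while a tube that supports only a fraction of a turn cannot.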
  3. Just discovered www.maa.org/devlin/lockhartslament.pdf and thought it worth sharing/possibly stickying.
  4. People are more likely to be able to help you if you add a little more context: maybe a few sentences from around where you encountered it, or the name of the book/chapter/section/article, etc.
  5. Imfataal, the reason you're having trouble is this: assuming his radix [math]x[/math] is an integer, [math]23_x = 2x + 3 = 2(x+1) + 1 = 2n+1[/math] is odd, while [math]111100010_2[/math] is even. So either his radix is not an integer (there is a rational solution to [math]2x+3 = 482[/math]), or there's a transcription error somewhere.
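     A quick sketch of the same parity check in Python (using the numerals quoted above):

         # "23" in radix x has value 2x + 3, which is odd for every
         # integer x, while the binary numeral below is even.
         target = int("111100010", 2)   # = 482
         print(target % 2 == 0)         # True: it's even
         print((target - 3) / 2)        # 239.5: the only radix that works is not an integer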
  6. Fun concept to play with -- I find these help the imagination a bit: https://launchpad.net/4dtris/ http://www.urticator.net/maze/ Unfortunately it's quite hard to wrap your head around anything more complicated than simple rectangular rooms/objects. Also, what you are viewing is a 2d projection of a 3d projection (or 3d slice) of a 4d scene. There are plenty of animations around youtube in a similar vein. Edit: Also worth noting is that these are Euclidean spaces. There are other possible geometries; 4d (flat) spacetime is one (Minkowskian rather than Euclidean) and is dissimilar enough that you still have to do a fair bit of work to imagine it once you get the 4d Euclidean thing. On a more serious note, there are some things which are kinda special to 3d. It's the lowest dimension in which you can have a hollow thing with two openings (in 2d, if you try to have a tube through the middle so you can digest, you instead have two objects). Stable orbits don't work very well in other numbers of dimensions (I cannot remember if it's all other dimensions or just the low ones), so we likely wouldn't have a universe full of swirly stuff.
  7. Also worth noting: your muscles consume energy both when lifting the object up and when lowering it down again gently. This is not a necessity; a machine could be built which re-absorbed some or most of the energy it used to lift the object. It's just the way muscles work (resisting motion is close to the same process as moving in the first place). They also consume some energy when exerting a constant force (ie. just holding something heavy); a table (or even an electric motor, if it has some kind of lock/ratchet/etc) does not have to do this.
  8. Distance squared makes a lot of sense if you stop to think about it. If you have some kind of quantity which is conserved, spreading out from a source in the middle of a three dimensional volume in a roughly spherical shape, then your quantity will be spread over a spherical area that grows as r^2. I don't know whether Newton used this reasoning (unlikely, as he came up with some of the maths that was later used to prove this principle and to talk about conserved fields in general), but it's a very simple argument that can be stated and proved with calculus, provided the premise (a spherically symmetric, conservative field) is assumed. There is some level of ad-hoc reasoning and intuition in any physical theory, and this is perfectly acceptable as long as the theory is internally consistent (mashing symbols together is fine -- if often unproductive -- as long as you follow the rules, because following the rules of maths _is_ logical reasoning, even if you have no idea what you're doing). The assumptions you work from, however, are completely up to you. If those assumptions are small in number, differ from existing assumptions minimally, are provably consistent with all known results and -- this is the important bit -- yield novel, specific and preferably precise predictions about things that are presently unknown (often also unexpected) which then prove correct, you have a good physical theory. Both Newtonian mechanics and relativity did this. Understanding the difference between this and just pulling something from your posterior and kneading it until it fits the data is important.
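     To make the geometric step explicit (standard bookkeeping, not from the original post): if a source emits a conserved quantity at a steady rate [math]P[/math] and it spreads out spherically, then at radius [math]r[/math] it is spread over an area [math]4\pi r^2[/math], giving an intensity [math]I(r) = \frac{P}{4\pi r^2} \propto \frac{1}{r^2}[/math].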
  9. Your computer is made of that same set of elements. If I hit it with a sledgehammer, can you turn it back into a computer? Why not? The thing that's missing in each case is a very precise and hard-to-achieve arrangement of the components; it's nothing to do with what they're made of. Re: life, chemists are getting better at it. They are approaching the creation of an artificial cell from both the top down (take an existing cell and hijack it to produce a new cell to your design) and the bottom up (starting from scratch). There is a way to go yet, but here is a release about the creation of something very similar to a cell membrane (the latter approach): http://www.sciencedaily.com/releases/2012/01/120125132822.htm And the Craig Venter institute, who do the former: http://www.jcvi.org/cms/research/projects/first-self-replicating-synthetic-bacterial-cell/overview/ Note that this is not an entirely new genome. It's more like they got the existing blueprints, cut out the stuff they didn't need, added a few bits, and then hijacked someone else's factory to build the thing. Neither of these counts as a fully synthetic life form in many people's books, but it is getting pretty close.
  10. An interpreted language at its simplest just runs through the instructions you give it, one by one, in the order you've written them. There's usually a 1:1 or 1:many relation between the things you write and actual machine code, and the order is preserved. This tends to be quite slow: if your program isn't looking ahead and finding out what bits of data it needs where, you can spend a lot of time waiting on memory or on the hard drive. One improvement is to look ahead a few instructions and bring stuff into memory or a cache on the CPU, but dynamic languages often don't know what they need ahead of time -- the meaning of references can change, arrays can grow, and so on -- so the benefit of this is limited. Simple interpretation also passes up opportunities to swap what you wrote for equivalent code that is faster (if harder for a human to read or understand). A JIT, or just-in-time compiler, presents itself as a simple interpreter but actually does some compilation on your program before running it. Often this is to a bytecode: another interpreted language that is easy for the machine to read (and hard for a human to read). Sometimes it is to actual machine code.
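      A toy sketch of that simplest kind of interpreter (a made-up stack machine in Python, not any real language's implementation): it walks the instruction list in order and executes each one, with no lookahead and no optimisation.

          # Toy interpreter: one dispatch per instruction, strictly in order.
          def interpret(program):
              stack = []
              for op, *args in program:
                  if op == "push":
                      stack.append(args[0])
                  elif op == "add":
                      b, a = stack.pop(), stack.pop()
                      stack.append(a + b)
                  elif op == "print":
                      print(stack.pop())

          interpret([("push", 2), ("push", 3), ("add",), ("print",)])   # prints 5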
  11. You seem to have realised at least some of the following, but I shall state it for the record. Matlab is an interpreted language. I think modern versions have a JIT compiler and some kind of bytecode, but the importance of accuracy means they don't tend to get too tricky. Some things that may allow you to utilize more of your computer's resources (see the sketch after this list):

      - Use builtin, library and vectorised operations wherever you can; these tend to be written in C/Java/Fortran and are highly optimized. A.*B is far faster than doing it with a for loop.
      - Unroll tight loops (any with <10 instructions) if you can -- manually make the body do several operations before looping, or even eliminate the loop entirely if it runs a small, fixed number of times. The JIT may take care of this, but it helped on the last version of Matlab I was using.
      - Refactor or vectorize your equations so that more calculations can be done before their results are needed; sometimes even adding steps/temporary variables can help, if it makes the work more parallelizable (and memory bandwidth isn't the issue at hand).
      - Pre-allocate memory where you can. It can be hard to tell with a language like Matlab when you are allocating memory, but if you know ahead of time how big a vector/matrix is going to be, pre-set it to a zero array of that size before using it. Whatever you do, avoid incrementally increasing its size in a loop (again, the JIT can sometimes fix this for you, but don't rely on it).
      - Factorise out and pre-calculate invariants. Explicitly set performance-intensive loops to known lengths rather than calculating as you go or using a while loop. If there is a number you are calculating repeatedly that is the same or periodic in some way, see if you can pre-calculate it.

      The most important part is probably the first (built-in functions and vectorisation), which will tend to overwhelm the effect of the others. Your algorithmic efficiency will in turn overwhelm effects from this: there is almost always a way of making an algorithm (asymptotically) faster, or making a tradeoff between time and space that is in your favour. The only question is whether the effort required on your part is worth it.
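      To illustrate the vectorisation and pre-allocation advice, here is a sketch in Python/NumPy (my translation, not Matlab; the Matlab version would use A.*B and zeros() the same way):

          import numpy as np

          n = 1_000_000
          A, B = np.random.rand(n), np.random.rand(n)

          # Slow: an interpreted loop, one dispatch per element.
          C = np.empty(n)            # at least it's pre-allocated, not grown
          for i in range(n):
              C[i] = A[i] * B[i]

          # Fast: a single vectorised call into optimised compiled code.
          C = A * B                  # elementwise multiply, like Matlab's A.*B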
  12. For simple n or n^2 algorithms, modern CPUs will tend to chew through as much as you can fit in cache about as quickly as you can put it there. Also, if your numbers don't exceed the machine word length, the computer probably won't know the difference. And -- as Tiberius said -- there will be larger constant and first-order terms that change the results significantly for small n. Try it again with a few thousand bits, and then double that (you may have to use assembly unless you want to go shuffling things between different words -- not sure -- my C knowledge is rather weak).
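      A rough sketch of that experiment using Python's arbitrary-precision integers instead of assembly (my substitution): time a multiply at a few thousand bits, then double the size and compare.

          import random, timeit

          for bits in (4096, 8192, 16384):
              a = random.getrandbits(bits)
              b = random.getrandbits(bits)
              t = timeit.timeit(lambda: a * b, number=10000)
              print(bits, t)

          # If multiplication were O(n^2), doubling the bits would roughly
          # quadruple the time; CPython switches to Karatsuba for large
          # integers, so the measured growth should be somewhat slower.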
  13. This is known as peasant multiplication. Both schoolbook multiplication and peasant multiplication are examples of a more general algorithm called shift-and-add. Your reasoning is okay except for your definition of "operation". Addition is order N as well (ie. the amount of time increases linearly with the length of the number), so you have to do N bitshifts, plus one order-N addition for each bitshift that passes your oddness test: N*N = O(N^2). You can muddle with the time constants a bit by using a different base and memorizing a multiplication table for smaller numbers; larger is generally better. This is basically the principle Karatsuba/divide-and-conquer is based on, iirc, but the 'multiplication table' is the same multiplication algorithm applied to two smaller numbers. Because this process of picking a smaller base is recursive, and starts with a base dependent on your original number, it can have lower asymptotic complexity.
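      A minimal sketch of peasant multiplication itself, in Python:

          def peasant_multiply(a, b):
              result = 0
              while a > 0:
                  if a & 1:          # the "oddness test" on the low bit
                      result += b    # one order-N addition
                  a >>= 1            # halve a (shift right)
                  b <<= 1            # double b (shift left)
              return result

          print(peasant_multiply(13, 11))   # 143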
  14. The fundamental difference between operating systems is in the way they control the hardware, and the way they expose that control to various programs/bits of code. The piece of software that does all this is called the kernel. This means a binary (a set of computer/kernel instructions, usually an .exe file in windows) has to be written quite differently to run on one OS vs another. Because of this (and because the underlying philosophy tends to differ), the tools you use to communicate with the computer (command line utilities, and the window manager -- the thing that actually manages the graphical environment) tend to be different as well. Another difference that stems from this is that different libraries (collections of code that make interacting with the kernel and hardware easier) are available for different operating systems.

      Basically, the main difference you'll see in using linux is that there will be windows software you cannot use. Most software and libraries designed for linux are open source, so someone usually comes along and recompiles (turns the human-readable source code into a machine-readable binary) or ports (modifies the source code slightly so that it does not ask the OS to do things it cannot, so that it can then be recompiled) linux software for windows. Lots of windows software is closed source (you cannot get the human-readable source code, only the machine- and OS-specific binaries), so this doesn't tend to happen as much the other way around.

      The other difference you'll see is that the tools for interacting with the OS are different. The various graphical user interfaces available for linux (windows only has one) are very different from each other, and only a few of them are all that similar to the traditional way of doing things in windows. The commands used in the command line interface are also quite different, as are the various command shells available (again, windows only has one -- it's that black-background window that comes up when you open a run dialogue (windows key+r) and type cmd). Most linux distributions I've seen (a distribution being the linux kernel + command line tools + a big bunch of software including window managers etc) come with a fairly standard set of command line tools and either the bash shell (the command line thingy) or one so similar that it is very hard to tell the difference. A kernel is pretty useless without a basic set of tools like this, so we usually use the term operating system to refer to the kernel + tools/basic libraries. The precise term for most of the operating systems referred to as linux is GNU/linux, ie. the GNU operating system with the linux kernel; the GNU part refers to the behaviour and toolset that accompany the kernel.

      Now, on to python. Luckily for you, python is an interpreted language (not technically completely true, but close enough, and I don't want to explain bytecode right now). This means there is a binary file that sits between your program and the OS, interprets a standard set of source code commands, and runs your program for you. As such, you don't have to worry about the messy details of which OS your python program is running on; you just program python and the interpreter takes care of the rest. The main advantage of linux in your case would probably be that getting programming tools is very easy on most modern distributions.
      It's usually as simple as typing in a command like:

          sudo apt-get install python

      or opening a graphical package manager and clicking on python. Either will download it, check that you have all the necessary libraries to run it (and if not, download and install them), and install it for you. That's if your distro doesn't already come with it (many do).

      Python is probably a good choice for a first language. Its syntax is similar enough to most popular languages that it is easy to transition to something else, there aren't many strange quirks that you have to deal with, and you don't have to worry about low-level language issues. There is some division of opinion as to whether picking a high level language first (one that takes care of all the messy details, like python) is a good idea, or whether you should start with a low level language to get a better understanding of what it is you're actually telling the computer to do. If you are planning on learning a lot about programming in the long term, and don't get discouraged easily, I would say that learning C first is a better idea. If you're not 100% sure about it, or want to make interesting things happen quickly, then starting with a high level language is a good idea.
  15. It sounds like you handled that extremely well, as well as being extremely patient.
  16. Everywhere else I've read about it, this was called something along the lines of the statistical, ensemble, minimalist, or Born interpretation, and the phrase 'Copenhagen interpretation' was reserved for the idea that the process of measurement non-locally altered something that was part of reality. Do you have some sources for further reading?
  17. On a local scale, the event horizon of a black hole looks no different from normal space. For a big enough black hole (ie. one like those at galactic centres), even a person- or house-sized object would have difficulty detecting the tidal forces -- ie. it couldn't figure out it was near or entering the event horizon without looking out the window or doing some quite precise experiments. The tidal effect (or the curvature in the region, if you want to view it that way) influences the solutions to the wave equation, but it's not quite as simple as squishing/stretching the wavefunction as if it were a macroscopic squishy ball in Euclidean space with a force acting on it (although this may be a reasonable heuristic; I do not know enough of the details of QM on a curved background to say).
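      To put a number on "big enough" (a standard Newtonian estimate, added here for concreteness): the tidal acceleration across an object of size [math]L[/math] at radius [math]r[/math] from mass [math]M[/math] is roughly [math]a_t \approx \frac{2GML}{r^3}[/math]. Evaluated at the horizon radius [math]r_s = \frac{2GM}{c^2}[/math] this gives [math]a_t \approx \frac{Lc^6}{4G^2M^2}[/math], which falls off as [math]1/M^2[/math] -- the more massive the hole, the gentler the tides at its horizon.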
  18. My understanding is that gravity is perfectly fine, and routinely used, as a backdrop for QM calculations -- either as Newtonian gravity (viewed as a potential) or as the geometry for the coordinates involved (ie. QM on a general relativistic backdrop). My only encounters with this were in a rather abstract, completely non-applied/mathematical context, where I didn't fully understand the mapping between what I was doing and anything in my personal experience of the macroscopic world. In both cases the gravitational field is important for modifying the quantum system you are examining, but the quantum system is assumed to have no influence on the gravitational field (ie. the mass of the quantum system is negligible compared to the source of the gravity). The problem comes in when you try to calculate, at the same time, the spacetime curvature generated by your quantum system and how the quantum system reacts to the gravitational field. To simplify my (already incomplete) understanding: you need your geometry settled before you can put your spatial coordinates into your quantum equations, so if the way the coordinates interact depends on your wavefunction, you can't solve the equations any more. MigL: ignoring the change in the gravitational field due to the interactions involved in Hawking radiation is fine, as they are minuscule compared to the overall gravitational field. Mixing and matching in this case is going to give you an answer that is very, very close to the truth (ie. QM on a GR background -- a hard, but frequently dealt with, situation).
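      For reference (a standard statement of the halfway house described above, not from the original post): semiclassical gravity couples the classical geometry to the expectation value of the quantum stress-energy, [math]G_{\mu\nu} = \frac{8\pi G}{c^4}\langle \hat{T}_{\mu\nu} \rangle[/math]. The circularity is visible here: the left-hand side fixes the geometry in which the quantum state on the right-hand side must be solved.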
  19. Could you elaborate on what you mean by this? Also on what 'Copenhagen interpretation' means to you. My understanding is that it's a bit of an umbrella term that includes some types of objective collapse, consciousness-causes-collapse nonsense, and stuff involving modification of how we think about whether certain statements are true/false (ie. consistent histories).
  20. Another thing to consider is alcohol content. If you are unable to get enough calories to survive without consuming enough alcohol to kill you or make you severely ill, then the beer could result in living less time. Given sufficient water, you'd still need to consider the energy required to metabolise the alcohol and get it out of your system. Given all this, a low-alcohol, dark beer (more calories) would probably be your best bet, but I would still question whether it would help you survive much longer than no food at all. 5'9" and 165lbs is starving to death? I'm 5'11" and 135lbs... does that make me already dead?
  21. The EPR paradox is a scenario that can be (and to some degree has been) tested. The results so far agree with the predictions made by the mathematics, but there are multiple valid interpretations of the mathematics, and all of them (absent some extension to quantum mechanics) require that this phenomenon be unable to transfer information. Many worlds preserves locality, but implies the existence of the many universes that give it its name. Copenhagen only entails a single universe, but requires wavefunction collapse to be a non-local phenomenon. The situation is roughly analogous to the gauge choice in electromagnetism: the Coulomb gauge is fully compatible with observation and preserves causality, but entails potentials which change non-locally and instantaneously, whereas the Lorentz gauge does not. Relativity gives us a clear reason to prefer the Lorentz gauge (for metaphysical purposes at least; the Coulomb gauge is still quite useful for making predictions/modelling things). By contrast, there is still debate over which interpretation of quantum physics should be considered simplest -- and a large camp of people who consider it a largely irrelevant question and say we should just get on with making better mathematical theories, because some later discovery will likely render whatever ontology we come up with irrelevant anyway.
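      For concreteness, the two gauge conditions being compared (standard definitions, added here): the Coulomb gauge imposes [math]\nabla \cdot \mathbf{A} = 0[/math], under which the scalar potential responds instantaneously to charges everywhere, while the Lorentz gauge imposes [math]\nabla \cdot \mathbf{A} + \frac{1}{c^2}\frac{\partial \phi}{\partial t} = 0[/math], under which both potentials obey wave equations and propagate at [math]c[/math].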
  22. What you are describing is essentially the EPR paradox. The key thing to remember about measurements on entangled particles is that you have a pair of events that are correlated, but have no detectable causal relationship to one another. As such, whether or not you view one measurement as having an effect on the other depends on your interpretation of QM. Within the Copenhagen and objective collapse interpretations there is an instantaneous, non-local effect, but this effect cannot be used to transfer any information without presently unknown and undetected non-linearities in the theory. Within certain variants of many worlds there is no such non-local occurrence; the measurement is instead the measurer becoming entangled with one of the possible universes, so when they return and communicate with the other measurer, they will find a correlated result.
  23. I agree with everything Hyper had to say, but thought I would add this: If you truly think of her as a twit (rather than simply ignorant or insufficiently practised in the relevant skills), then I would say you are likely not the right person to be tutoring her. Proceeding to help her with the individual pieces of assessment rather than addressing the underlying problems (assuming they are addressable) of lack of general reading comprehension/critical reasoning skills is not doing anyone (including Holly, her class-mates, her potential future students, and her potential future employer) any good in the long term. Assuming she is capable of learning (and you have the relevant interest/skills for teaching critical reasoning) then I would question whether the pay you are receiving for the task is fair (I would say no), or even if the time is sufficient. My recommendation would be to discuss your concerns with Holly and see what transpires. If she is willing to put in large amounts of work on her own, it may not be an insurmountable obstacle.
  24. Correct, they are largely a matter of ontology and metaphysics. Any interpretation which made different, testable predictions would be considered a theory, and work would be done to check it. This is not to say people aren't trying hard to find testable predictions for these two (and other) interpretations, just that no definitive ones have been found, to my knowledge.