Posts posted by Ndi
[1] Err, I think "Pi in base Pi is 10" is misleading. Non-integer bases run into problems.
In base 10 I can use all the digits 0 to 9; there's no need for anything bigger than 9. In binary I only have 1 and 0.
What digits can I use in base 0.5 ?
[2]And, at the risk of arguing against myself, pi is random in a rather obscure sense. Given that a particular digit is, for example, 3 does not give any indication of what the next digit is. Similarly the sequence of digits 25356 is just as likely to be followed by any digit as any other. It is, in fact, used in random number generation and, as such, it was tested for randomness (of this type) a long time ago to a lot of digits. It passed so there's not going to be anything interesting in the first couple of hundred digits.
<url>
Yes, you run into problems because of the integer-base way of thinking. The rules are simple: start counting, and when you reach the base, increment the next digit and reset the first. Rinse and repeat.
I can have digits A, B and C in base D: A, B, C, AA, AB, AC, BA, etc. You can have non-integer digits in a base, so in base 0.5 I can count in non-ints: 0.1, 0.2, 0.3, 0.4, 0.499999, 0.1|0. This is a notation problem, not a numeration issue. We write numbers with no spacing so it is human-readable with ints alone. Yet we have this issue for ints as well. In hex, we use A, B, C, D, E and F for that reason alone. 0xFF is really the digits 15 and 15, but writing "1515" raises confusion, so to keep the glued-figure format we use letters. At the risk of repeating myself, this is not a number/math issue but a human-readable-format issue. It has nothing to do with impossibility. We do run into problems, yes. That does not disprove the truth.
IMO, "What digits can I use in base 0.5?" is misleading because you try to apply int numbering to a non-int base. We can use powers that are non-round, and roots of non-round numbers too. That we devised ways to cast them to ints to work them out on paper is a different story. Computers don't use int casting or paper tricks to do float divisions.
As for [2], I still fail to see your reasoning. Nobody can tell you what follows 3 because you have insufficient data. 2 might be after 3. Or 4 or 72. Now, what follows the first 3 in PI, well, that's not random. It's 1. then 4,1,5,9,2,6 etc. NOT RANDOM. Covering your eyes doesn't make it random.
What comes after 3 is an incomplete question. What comes after 314 is better. 3141592 is even better. With enough info one can determine where you are in the string and start predicting with 100% accuracy.
This is why real crypto functions require you to move the mouse, bash the keyboard, hit your head against the power supply and so on. Random() has a list of "random" numbers inside. If you use Random() once, you get away with it. Keep using it, and you give away enough info to be located in the random string, and then your randomness can be predicted, because it's not random.
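A quick illustration (a generic seeded generator in Python, not any particular crypto break): once you know the seed/state, the whole "random" sequence is fixed.

    import random

    a = random.Random(1234)          # same seed...
    b = random.Random(1234)
    print([a.randint(0, 9) for _ in range(10)])
    print([b.randint(0, 9) for _ in range(10)])   # ...identical sequence. Nothing random about it.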
Not having enough info doesn't make something random. A coin toss is random. You can toss a coin a million times and you still can't tell what the next toss will be. I thought that computers and maths can't really generate random numbers because they are precise by nature. This is why high-security systems turn to physics for randomness.
What you are saying in your post is limited to "next digit" or next "few hundred" digits. Just because it's non-intuitive in human terms doesn't make it random in any way. It has a rule, it's well defined, thus predictable at any point. It's so predictable that if you give me an index I can give you a digit (I only have 32M so don't overdo it ).
Perhaps it's the concept of random versus humanly-perceived random that's in question here?
0 -
(and, BTW, has anyone noticed that on CSI they are always lucky and the perpetrator always leaves a single hair behind? He's never shedding like a mangy cat)
a) sometimes they do
b) "why is the thing you miss in the last place you are looking". Because once you find it you stop looking. Besides, they dust the entire area, roll all the hairs, and do a DNA. IIRC, they can do an analysis on DNA even if 2 samples are mixed. So in the end, it's "one hair" even if it's a whole wig.
c) If you found 3 hairs at the crime scene, you only use one for DNA, keep the others for physical comparison, future reference, exhibit A, etc.
d) If they cut away from the high-speed chase to show me that a lab assistant found a SECOND hair at the scene, I'd be pretty upset.
0 -
There are undeniable differences between races - for example, black people have genes that make their skin black and we have the white skin genes. Genetic profiling points to race, sex, physical characteristics, physical abilities and *may* point to mental abilities.
Whatever is common to black people and not an external factor is most likely in the genes. To be able to say that black people are inferior in IQ you'd have to fully profile genes that give IQ, understand them, then run tests on vast masses before you can make such an incendiary claim.
So while it's true we're different and there are trends, there's a long way between skin color and superiority. Too many factors need to be taken into consideration, and best of all, we still don't have a way to measure intelligence. IQ only points to a few abilities and is dependent on culture and school systems.
Basically "different" is true. We don't even know what "superior" means.
Edit: P.S.
One race is bound to be slightly higher when measured, if we ever have the means to measure. Should that happen, it's a scientific truth, not racism. Racism is race hate, not acknowledging differences. Nobody cried racism when they calculated average per-country penis length.
The line is thin, though. So is formulating that line. Even if average white IQ proves to be higher, that proves *nothing* about any of the individuals. I can already picture the worst examples of the white race grinning with both their teeth, thinking they are smarter than black people.
0 -
Em, it should work via IESHWIZ / desktop.ini + folder.htt editing?
Note that desktop.ini alone is insufficient for customizations; the folder also needs the right attribute (read-only or system) set on it.
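A rough sketch of the desktop.ini side (Windows only; the folder path and the InfoTip text are placeholders, and attrib is the usual way to flag the folder as customized):

    import os
    import subprocess

    folder = r"C:\Example"                       # hypothetical folder
    ini = os.path.join(folder, "desktop.ini")

    # Minimal customization: a tooltip shown when hovering over the folder.
    with open(ini, "w") as f:
        f.write("[.ShellClassInfo]\n")
        f.write("InfoTip=Customized via desktop.ini\n")

    subprocess.run(["attrib", "+s", folder], check=True)     # mark the folder as "special"
    subprocess.run(["attrib", "+h", "+s", ini], check=True)  # hide the ini file itself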
0 -
Humidity causes cardboard to go soft over time until it can't support its own weight and rolls over. (Yet another guess.)
0 -
Sorry to be hijacking the thread (hope I'm not really off), but has anyone shown time to actually exist? I always thought of it as a human measurement, a notion born of the need to express something that has already happened. It's vital to our communication, so it's almost impossible to express ideas without using time in some way, but - for the heck of it - does it REALLY exist? (I used at least three time references in this paragraph, plus another few implied presents or references. Another in the previous sentence.)
Is there any proof of a "timeline" in any way? Why the heck can't I kill my own grandfather? Objects interact. What's keeping me from displacing myself to the full coordinates of my grandfather at the uniformly noted age of 25 and shooting him? Nothing (minus the physical impossibility). We try to re-fit the events into a standard way of thinking. I shoot him, fine. I don't die or cause a paradox, because there is no timeline to rebuild.
If I kick a ball from A to B, the ball moves. Right, where do we set NOW as? Current time. Can I set NOW as when the ball is half way? Sure. Can it be when the ball is to its destination? Sure. So if I can set an arbitrary NOW, I'd like to set it before I kicked the ball. I can do that. Does that mean I kicked the ball in the future? No, actually, since future is arbitrary. Does that mean that I can really NOT kick the ball? Seems unlikely that if I set NOW to T0 I can "not kick the ball" but if I set it to T1 I must/already have.
We insist that time is just another coordinate, yet if we move alongside it we insist that everything must be rebuilt to fit. If a coin is on a paper (in 2D) and I move the coin through the 3rd dimension over some other place, does that mean I have to rebuild the whole 2D universe to make everything fit? No. Why? Because we see a third coordinate as what it is, another coordinate. Moving the coin from point A to B and then from B to A does *not* make the coin new again, nor does it imply that the rest of the paper rebuilds. If the coin leaves a smudge as it moves and we lift it and reset it at some point, it will start a new smudge. It can even re-trace its old smudge, but it does NOT rebuild the smudge to fit its new location. Heck, I can even set it down before it first appeared. It does not implode the universe. Then why would shooting grandpa not allow me to be born? Three out of four coordinates have no lines and no reverse causality, yet the fourth has? There are laws for two smudges intersecting, so there must be for two me's as well. Maybe me walking into me makes a denser matter, just as the smudge thickens when double-walked.
Even as I write this I'm trying to imagine my grandpa duplicating as I shoot, so he will go on and have my father. It's hard to be clear when so tightly bound to time causality.
I'm not sure I'm being clear. I'm not sure how to make it clearer either; it's just that we consider time a rolling drum, always rolling. Since we defined it that way, no wonder we come across issues trying to jump on the roll. It's just as if we thought we were actually moving across the X axis at 1 meter per second and the whole universe slides continuously. How do we get back 10 km? We can't, because there is no line, no scroll, no drum. And if we did believe we are on a constant slide and everything we do has an X, a Y, a Z and an index in meters, is it far-fetched for us to also believe that if we jumped on the roll we'd have to do the same again? Would we have the misconception that if we unrolled the drum to before we were born we would not be there? Most likely.
VCR tape rewinding gives people all kinds of ideas.
0 -
2. There is reason to believe that the randomness of pi expressed in any base (i.e., 3.14159…in base 10)
a) Pi is not random.
b) If it had ANY "randomness" it would be used for generating random numbers. It's not because it's predictable
c) Pi in base Pi is 10. Throwing "any base" is unlikely to be helpful, especially since you don't seem to see where the "magic" of pi is coming from. Pi is what happens when you try to measure things that were not in the original blueprints.
A meter is a meter because we said so. A foot is the foot size of some king. 1 meter = 3.2808399 feet. Does that help me calculate the radius of the universe? no. What it does is tell me that if I want to measure king feet I'd better switch to a more suitable system. We work in round numbers that we arbitrarily chose, in a base that is arbitrarily chosen. It's bound to be unsuitable in places, such as square roots.
If you look at pi computation/approximation algorithms you'll see it all boils down to "bad" operations for our usual bases. Try a Google search on Gauss-Legendre or Borwein's 4th-order convergent algorithm.
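For the curious, a minimal sketch of the Gauss-Legendre iteration (standard form, plain floats): note it is square roots of non-round numbers all the way down, exactly the "bad" operations for our usual bases.

    from math import sqrt

    # Gauss-Legendre: a handful of iterations already exhausts double precision.
    a, b, t, p = 1.0, 1.0 / sqrt(2.0), 0.25, 1.0
    for _ in range(3):
        a, b, t, p = (a + b) / 2, sqrt(a * b), t - p * ((a - b) / 2) ** 2, 2 * p

    print((a + b) ** 2 / (4 * t))   # ~3.14159265358979...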
As for 1) and 3) in your post, I'm at a loss. As pointed out in other posts, the "randomness" of such numbers is humanly perceived only because of base limitations. They are constant through time and specific to our base and system. Switch to radians and a circle becomes 2 base Pi. Half a circle becomes 1 base Pi. Conversion between bases is 3.1415 just as conversion between feet and meters is 3.2808.
Just because you can't measure a wheel with a stick doesn't make the wheel holy. It makes the stick bad.
0 -
There's been an extensive thread about Windows and DRM with detailed info and some debate about DRM and its usage. DRM will not be able to pose any threat to any well-informed user or programmer as it needs a software side to run. Whatever is software can be altered, cracked or otherwise tricked.
There were plans to have the DVD drive linked to the monitor, with the monitor lowering quality when non-DRM media is used. The idea was booed for obvious reasons and it failed, since it could rob legitimate users of their content, and THEY can sue.
We have a saying in Romania - it may and should be universal: "Locks are for honest people". Thieves get in anyway. DRM is to keep Joe Average from inserting any DVD anywhere and getting an illegal picture. When a DVD is released, the content is ripped (by bad people) and transcoded into a format that does not have any flags (AVI, MP3, etc). Those files you get as "piracy".
The only thing DRM does (and will do for a long time) is slow down Joe Average using Nero to "disk copy" a DVD movie. It can't stop Joe Average from getting an MP3 from a site/program and pressing PLAY, mainly because it can't blindly know when an MP3 is being played by third-party software.
If you are really interested in DRM, use the search function to get a hold of the original DRM thread (I posted too, this could narrow the search).
0 -
Cars get cooled slightly via fans because they have a metal chassis. The fans at full speed compress the air from the outside, raising its temperature by a degree or two. This air rubs against the chassis on its way to the cabin, losing the excess heat to the car body (which is now cooler, being vented in motion). When it reaches the ventilation grids it expands, getting cooler, and so it has an actual temperature difference relative to the body of the car and the air going over the chassis.
This is not significant for fans; I only brought fans into it to tie it to the question. It has actually been implemented in older cars, using the pressure of the air in front of the car ramming against the ventilation intake. There is a noticeable difference in temperature at highway speeds, when the pressure is higher and the chassis is well ventilated.
Just background info.
0 -
I fail to see it as a good idea.
Like I said before, forcing an SUV to a slightly higher price is not the way. People who can afford to buy AND KEEP an SUV will still do so. Additionally, the second-hand market will be boosted to compensate for those who can't buy an SUV any more.
At least taxation brought more money to the government. At least Euro standards forced cleaner cars onto the roads, or no sale at all. Pushing the market by 5% is not a solution, nor do I see it as an option. Lower consumption and lower emissions are not the same. When in doubt, cast your eye towards people who have cleaner and more efficient cars (like Europe) and learn from their mistakes.
Forcing Porsche to have the same emissions as a 1.1 l, 5 l/100 km car boosted engine technology because they had no place to go. Those who can buy a Porsche or a Mercedes can afford the gas even if it grows to 10E/liter. It's not those who need to be toned down; it's the cheap car with the small engine that will do anything to be efficient and cheap in order to sell while maintaining those standards.
They risked a motorized panic if some large manufacturer couldn't adapt. And they won. In 13 years they went well below half of what was smoked out before, added new gases to the regulations, and the line of progression lets manufacturers predict change and pre-design cars for the next 10 years. Falling behind on standards slowly raises your taxes. After a number of years you either keep the car in top-notch shape and get a classic-car license, or you pay the value of the car every few months. That's how you make factories produce a car that has 4 seats, power steering, power braking, air conditioning, uses 5 l/100 km (45 MPG) in town traffic and is twice as clean as before, for some 5000E (the AC version is 6000E; you also get additional lighting, central locking, high-load suspension, an electric trunk, and so on for another 500E). It wasn't done by introducing fewer smokers.
And that's not all. If you want to sell to Europe (and you do), you need to adhere to the strictest of standards; they are *not* going to import lower quality. That forced other large manufacturers, like the Asian ones, to do their R&D. Toyota makes 106 HP gasoline engines that get 34 MPG city; the diesel gets 56 MPG. That's 1200 km on a full tank (13 gallons). Looking at Chevy, that's science fiction. The best they can throw at me is the Aveo (which is the Daewoo Kalos renamed). The Spark is the Daewoo Matiz renamed - which, as a coincidence, is the 5000E car mentioned above and which, oddly, is not listed as an option for the US.
This is still my point of view and by no means the answer or the right way. I just think that proportion limiting is as if people were stepping on your foot, and instead of making everyone wear boots, then sneakers, then sandals, you let people wear golf shoes and stomp on your foot if they promise to also step on your foot with a sports shoe later on.
You don't help your health if you smoke a light cigarette each day. You help if you switch continuously (assuming you can't quit).
0 -
This is only of use if people are actually going to the centre. (See below.)
Indeed. If they don't then the issue is not number of people, but road design.
Another factor (and again I know this applies to Brisbane but it may be relevant to your city) is that cities were designed before cars and large trucking concerns.
Addendum. It is pointless trying to compare Europe to the USA or Australia. Size and population come into effect. As shown below.
Germany; Area-357,021 sq km, Population-82,422,299 (July 2006 est.)
France; Area-547,030 sq km, Population-60,876,136 (July 2006 est.)
Queensland; Area-1,730,648 sq km, Population-3,800,000.
Victoria; Area-227,600 sq km, Population-4,644,950. (2001)
Don't compare such areas; we never took into account the possibility that the US would ever have German road infrastructure. We were speaking about localized concentrations, such as cities. Inter-city traffic is simple, as long as you build a wide enough, well-regulated highway.
There is another option
This is another option to what? Forcing an average MPG only makes companies issue more models. This is an issue for all people who drive, regardless - like taxes, limited resources and pollution. A person who wants to drive cheaper will buy a smaller, more efficient engine anyway, so what's the point? Forcing an SUV manufacturer to design and build city cars will only make for immense costs and lousy cars. How does forcing Porsche to issue a 1.3-liter diesel 911 help?
Unless you CAN'T buy an efficient car in the US (which is not the case - imports sell quite well), forcing small engines only helps the bad management and marketing of car companies catch up with a market that is already there.
Large-engine, huge guzzlers are already way more expensive than a small car, so people who can afford and like SUVs will buy SUVs. Making gas more expensive can tone buyers down, but I fail to see how forcing US companies to also build smaller engines helps. It's not like there aren't any.
If you really want to trash SUVs, tax engine capacity, emissions, weight (road preservation) - or better yet - help those who buy smaller or cleaner cars as a push-pull system.
0 -
I think this is the source of the confusion between what we are trying to say. You are viewing it all from a single point of view: The Top Level computer.
Well, indeed I do; while I switch points of view to get a true understanding, we must go back to the mainframe. The point here being, the mainframe doing the simulations needs an infinite time to simulate such a chain, so it can't be done.
Theorizing about how it *could* work and what corners we could cut, fine; but emulation of a universe in such a chain - no.
It is, however, entirely possible (ignoring probability for a moment) that we could be in a simulation if a smart enough computer is used, that only feeds us the data we use (I'd do it this way, at the scale we're talking about emulating particles would be suicide).
So my point is, if such a chain can't be really created, how can we theorize it will exist? It needs to be finite by definition, so it MUST break the chain, each simulation can't run like the other, after a few universes *something* has to break down.
Trying to pin down the actual speed of a hypothetical host that we have no way of knowing the specs of is pointless. Thinking about what it would be like if that hypothetical host existed, that is more constructive and the aim of this thread.
Indeed it is. However, we are trying to determine if we are in such a chain. The point here being, if it runs a second of our time in *way too much* time, then such a mainframe can't really exist, now can it? Specifications are pointless if it can't be done.
This is why speculation is necessary. We can speculate about how, in a universe where 1+1=3, computing would be possible. Could that kind of universe support complex life, could that life become intelligent, could that intelligence create a computer and could that computer simulate a mathematics vastly different from the mathematics that governs its operation?
It is. But this is not the point. Their physics would need to be quite different to even simulate what we have here, let alone emulate each particle. It IS possible, perhaps even somewhat probable, that we are being simulated/emulated. However, being in a chain is (arguably) impossible.
If we just say "Oh, I can't see how it could, so we may as well not try to understand or ask questions", we won't get anywhere. We will never discover anything.
I'm not saying I can't see how they could simulate us, even if it's beyond our laws of physics. I'm saying that regardless of physics they can't simulate an infinite chain, or even a virtually infinite chain, in a feasible way.
So are we discussing whether this is a simulation (assuming one simulation - their physics is enough to simulate us) or a very limited chain (we could sim a small patch of the planet ourselves - then our simulations will probably stop at SIMS 2)? If so, this is possible and open for discussion. However, if we are discussing a parallel-running cascade of universes in which we are one of infinitely many, then I'd have to say no, even with different physics.
Even if the conclusion is "We can never tell if we are in a simulation or not", this is an important discovery. It places limits on various aspects of the universe; for instance, any anomalies are real physical effects, not programming bugs.
I agree that we could never prove we are a simulation. Ever. Simply because whatever we get, we assume to be real. Every definition of "real" we have would come from the simulation, so any anomaly in the software we would see as an anomaly (or expected behavior) in real life.
If one were born in a box and never stepped out, he would never have any clue that there's anything outside the box (provided full isolation, like a simulation). Unless someone tells him, or he breaks out somehow, everything in there is the universe, and concepts like infinity were never implanted. Generation after generation lived on the planet thinking it was flat and limited, and that you could fall off the edge. That's as far as they saw. Then they had a telescope and discovered it was round. Then they were enlightened. Now we laugh at them, because we have microscopes and telescopes and fuzzyscopes and say we *finally* have the answer. Statistically speaking, we're wrong.
Nobody in year 1 AD could possibly conceive that gravity is faster than light. Everything they had around them could not point them to the truth. Just as we, drawing from what we see, can't know what reality holds.
I am not trying to pinpoint our location in a hypothetical chain, just that if one does exist then we can't be at the top.
Obviously, if we were at the top we'd be doing the simulation. If such a chain exists then we must be (temporarily) at the bottom.
If a cascading simulation runs, in which each universe has a lifecycle (it ends) then several could run in parallel - as computing power is available. Model can be 6 billion years to evolution, 1 billion to perfect simulation, then another 13 to destruction of the universe. In which case, after 6 bil another is launched, then after another 6 a third. In 2 years, the top simulation is dead and the child is then moved up in a chain. In such a cascading model, we could be anywhere. (we don't have enough info to determine a probability).
As we haven't simulated any universe, then we must be on the bottom rung. So if any chain does exist, then it exists above us. If the chain does exist, then we can therefore not be at the top.
I really should read ahead. Agreed, minus the cascading model. So basically we have
* A simulation can be made in which we are the simulated ones
* If the simulation runs in a chain model, the chain must be finite so that the host can be feasible.
* If such a chain exists, we can't be at the top (because if we are there chain isn't here yet). We could be anywhere else, including the bottom. Probability of being bottom varies with simulation model (we assume *only* and *last level* are not the same).
-- Anomalies
The reason we see the kinds of anomalies that you describe is that the objects in modern computer games can be thought of as a kind of atom. They are indivisible components in memory and errors cause these components to be joined in odd ways.
I am aware of that. Still, it is unlikely that for 100 atoms there will be 100 threads, each simulating one atom's behavior. Rather, once joined, they will behave as an object - having the object's properties.
Just like your LEGO example, except that when you join 3 blocks to form a pyramid and they stick together, you have an object that has its own properties. It's easier to optimize it like that, sharing color and weight; I'm assuming that once joined into something, those atoms will have shared traits.
For example, if you have 3 joined LEGOs of the same color, you can probably store the color "blue" once for the whole object. While it is imperative to do so if you run a simulation, it's not mandatory in an emulation (each atom on its own), but I still find it likely someone optimized the software. Assuming it never evolves.
So when traits are shared, it's easy for an error to apply to a whole block of atoms. It's also likely the atoms/objects we are made of are stored close to each other in memory - unlike other stuff. So if an error affected a range of memory, it would likely affect consecutive atoms/objects in the same body. Thus, a person could become a ghost (even if concurrent with other errors). One should not assume that by ghost I mean an ethereal being that defies the laws of gravity. By "ghost" I mean it could affect our brain so that we see shapes we associate with or interpret as ghosts, or it could change the refraction of air for a moment to make objects shift (like mirages).
Depending on architecture, errors could also be cancer by an overwrite of the DNA code in a cell.
Looking at the resolution of this universe, all these objects are specific amalgamations of groups of particles obeying the laws of the universe; an error would be in violation of these laws, so the resulting artefacts would not be as coherent. A burst of unexplained photons, maybe; an image of a person (a ghost), not a chance.
As we take what we see to be reality, it might as well be something we interpret - not something that's real. Zombies attacking someone doesn't mean the software created a zombie; it might be a commonly seen error in the procedure that does our brain. It would explain why some things stick around as reports (like ghosts) while others have been lost in time and are considered legends (werewolves? can't think of a better example).
I don't see larger artifacts as impossible. Though it's entirely dependent on platform and algorithm. Your view seems to favor atom/particle simulation, while I view it as wasteful since it requires far more processing power than the most complex of simulations - since it requires that everything should be processed at a lower level than needed. It is possible (never did an estimate) that emulating all atoms is as demanding as the sum of all possible simulations (optimized).
I vote for simulation. It allows you to model specific rules and correct errors. It also avoids the issue in which your planet never evolves or other random events you can't program in an atom simulation software.
0 -
Public transportation, as well as suburban lifestyle could be saved with building relocation. You can build an outer ring outside of the town that connects all living areas and those could have connections with a commercial, inner ring.
That way one could drive in circles (so to speak) to a large parking lot where they embark for the center. Once there, a small system of transportation could ferry citizens from point to point as needed, like an array of buses, mini-taxis and such. Those could be regulated to be clean (electric, bio, compressed air, etc).
While all the cars are still on the road, they drive on a huge circle outside the town instead of hitting lights and concentrating.
The only problem is that the city is no longer a city; it's an industrial center with houses around it. When entertainment/better housing starts to develop on - say - the east side, the center of gravity also shifts and the city relocates around a new center.
Our capital was built like that: an inner ring with all the hotels, administration, etc., and the outer housing area. The infrastructure was designed a long time ago, and it was for 200,000 cars. As the city grew (an estimated 1.2 million cars on the road, 2 million in a few years), even the "outer" ring is crowded and is now the "inner" ring. A new outer ring was established, but unfortunately, wherever you go, you need to cross the inner ring to get from one side to the other, and the traffic is murder.
But in theory and with control it could work.
Assuming city organizing is an acceptable answer to gas prices. Which it's not, unless the tax is used to tone down the traffic.
I know that the US is in an awkward position with oil, but IMO price jumps (taxes or not) are inevitable. I also believe nothing will change until prices actually rise. I've never been there, but I do have friends over there, and I understand the US has roughly the same problem as Australia and (partially) as we do: distance, mainly due to large-area cities. In the UK (last time I was there) the problems were more in the general area of infrastructure rather than car floods (minus London). I have no idea how you're gonna fix London, sorry. Correct me if I'm wrong.
0 -
Sorry to troll, but I'd switch sides.
You will be in a position to explain how you refuse to save ten lives in the name of belief/misconception/whatever of a single (dead) person.
Also, you might want to keep up with technology:
* 2005 - Researchers at Kingston University in England claim to have discovered a third category of stem cell, dubbed cord-blood-derived embryoniclike stem cells (CBEs), derived from umbilical cord blood. The group claims these cells are able to differentiate into more types of tissue than adult stem cells.[wiki]
* 07 January, 2007 - Scientists at Wake Forest University led by Dr. Anthony Atala and Harvard University report discovery of a new type of stem cell in amniotic fluid.[1] This may potentially provide an alternative to embryonic stem cells for use in research and therapy.[Wiki]
Wiki sources have links.
0 -
The mistake that you are making is thinking that by creating a simulation within a simulation that more data is added to the parent simulation.
This might sound personal, but what do you do for a living? I'm not trying to sound superior, but this is theoretical physics applied to an informatics problem.
You are thinking of a perfect emulation of every particle which is not really going to happen. If you think that the host computer emulates all known physics, then this reply is void, it's a different discussion, you can skip ahead.
Nothing is free. While in theory whatever resources were allocated get used for a simulation, in reality cycles need to be allocated to run the simulation. This runs in parallel with other stuff, but in order to work it has to run, no corners cut. It adds workload to the system, as new data to be processed. Limiting the virtual machine simply pushes the load further away, out of first glance. It will still translate into load.
Let me do a simplified example:
We create a VM for a human. We allocate all cells, all atoms, then we give him a workspace that runs various threads - breathing, pumping of blood, etc. We run all that in the brain space.
Now our human gets the number of molecules X to build a PC. We allocate those, and design physics. White, hard, etc. When that computer starts playing Q3, our host computer has to render those operations, because the computer is simulated. It's not real. It's a simulated screen that feeds a Q3 image to our subject. The host runs physics, the human, the computer and Q3.
Now emulation, that's different. True, in a fully emulated environment whatever happens in that universe is (almost) free.
The thing is, even if they did this, the top level simulation would not grind to a halt.
That's a technicality, they simply pre-allocated the resources. They reserved the space for that program, whether they used it or not.
Nothing is free, again I say. You get 3 billion operations a second and not a tad more. If you emulate a computer, you use up part of that. That computer runs anything; those instructions have to run SOMEWHERE. SOMETHING must tick or flip to turn a 1 into a 0.
I can write a loop that draws a line hopping up and down, one pixel a second. Then I make that line hop up and down twice as fast with no additional load. How does that work? Well, either
a) I was wasting processing power, waiting for each image to draw (so the line doesn't bounce 1,000,000 times a second), or
b) I'm skipping steps: the second line hops 2 pixels each time, but it's a simulation and I'm cutting corners based on human perception.
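A toy sketch of those two options (made-up numbers; time.sleep stands in for drawing a frame):

    import time

    def bounce(frames, step=1, delay=0.01):
        y = 0
        for _ in range(frames):
            y += step              # move the "line" by `step` pixels
            time.sleep(delay)      # stand-in for drawing / waiting for the frame
        return y

    bounce(100)                    # baseline: 100 frames, 1 pixel per frame
    bounce(100, delay=0.005)       # (a) we were mostly waiting; wait less and it moves faster
    bounce(50, step=2)             # (b) cut corners: 2 pixels per frame, half the frames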
If one could run something inside something else with no penalty then the first emulation is free? How can that be? How can something do something with zero cost?
It is running the simulation for all the "Cells" in the top level that are being used to make the child computer and the data storage too. This would be happening even if there was no simulated computer. It would take the same amount of storage space and the same amount of processing speed, regardless of the number of child simulations or child's child simulation that exist.
Wrong. You almost get the point, but steer away at the last minute.
Host 1 = 1 KB.
Life = 1 KB.
One host that can run Life = 2 KB (itself plus the Life game, regardless of whether Life is running).
Now add another host.
Host 2 = 1 KB
Host 1 = 1 KB
Life = 1 KB
A host that could run a host that could run Life is 3 KB, regardless of whether Host 1 is running and Life is running. The cost just moved out of view: from the cost of processing power to the cost of resources needed and the cost of programming. Remember, in order to reserve the space and instructions for Life, Host 2 needs to be ready to run either Host 1, or Life, or the combo. The absolute cost is still there.
If Host 2 doesn't already allocate the simulations, then it is just a host ready to run whatever. It's 1 KB and runs fast. When someone starts Host 1, it expands to 2 KB and slows down to accommodate the load of Host 1.
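The same bookkeeping as a toy calculation (the 1 KB sizes are made up):

    # A host must reserve room for itself plus everything it may have to run.
    def total_footprint(own_kb, child=None):
        return own_kb + (total_footprint(*child) if child else 0)

    life  = (1, None)           # Life = 1 KB
    host1 = (1, life)           # Host 1 + Life           -> 2 KB
    host2 = (1, host1)          # Host 2 + Host 1 + Life  -> 3 KB

    print(total_footprint(*host2))   # 3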
What you are getting confused about is when a computer runs a recursive algorithm. For this to occur, each instance of that algorithm needs to be created on the parent computer. This type of recursive processing will need more space on the parent computer, but the simulation within a simulation will not. The simulation within a simulation is not a recursive algorithm (you could do it with a recursive algorithm, but it is not necessary to do so).
With all due respect, I'm not confused at all; you are confusing costs. You actually claim with a straight face that you created a perpetual motion machine and don't even think it's in any way wrong.
All you do is shift the load. A recursive algorithm that runs on the host creates load and memory usage on the host. An algorithm that runs on the guest creates a single load on the guest, which translates into load on the host. If this weren't true, then I could use a 300 MHz computer to run a guest at 30 GHz. What's bothering me is that you see nothing wrong with that. If the 300 MHz computer can emulate the guest at 30 GHz, then it can itself run at 30 GHz. Point is, it can't. The 30 GHz is not really 30 GHz. If the guest loads it at under 250 MHz, no slowdown is visible. After that, it reaches the host's maximum (minus emulation cost) and the guest starts grinding down, because it's asking the host questions the host needs to answer.
It's really odd; at some point I couldn't follow you any more. You say we can build machines and run simulations at no cost, then you say each one is slower or needs ever larger approximations. Yet you say the host doesn't grind down.
If we create a subset, and the computer we run in has this fixed speed that emulates us down to the particle, then we can build very fast computers, but not faster than the host, because nothing in this universe goes faster than the host. We grind down. Our simulation would grind down even more, because its celestial limit is lower.
It's the same as the host grinding down when running simulations, except you fix the other point by moving your frame of reference to this universe. If you are in simulation nr. 7, then your children slow down, but the host doesn't. If you move to universe 5, we have slowed down. The ripple hasn't changed; you just jumped frames. When you say it doesn't grind down, you take host 0's CPU as your frame of reference. That never grinds; it ticks at whatever it ticks at. Your frame of reference should be the universe it's in, that is, the Elder Programmer's. From his point of view, each emulation grinds further.
IF such things existed, then they would be these "anomalies" that would allow us to detect that we are in a simulation. So a good systems designer would have accounted for these artefacts and implemented techniques to eliminate them (like making sure that you can't have i=1000).
Even if not, and we run on a buggy system, we wouldn't know, because the actual software is our reality. If 1+1=3 and space-time folds, then we get a black hole that's actually a bad memory slot you can't go to without getting lost. Your information is simply lost, and instead of shaking our fists and yelling "fix it, you lazy bastard" we look at it in awe and devise laws to cover it.
Our "correct" is based on observation, so if we had legs on our torso and hands for walking, we'd still just sit there wondering whether we are in a simulation. We wouldn't know, regardless of "quirks".
A Turing machine is capable of simulating any possible computer. This doesn't say "any possible computer that is not made of chocolate".
Hehe. You need to stop thinking physics and start thinking computers. Nothing is free. A Turing machine can compute anything another machine can compute as long as it has unlimited resources. When resources are limited, and they always are, it starts to fail as a model.
I can compute by hand anything you can compute, given infinity as a resource. If I have an infinity, I can get a degree in whatever you have and do whatever you do. Things don't look so bright in the real world.
I am not trying to make this personal, sorry to stress this, it's just that I'm used to using "you" in examples. Point is, just because something CAN be done, does not mean they are equivalent. Time is also a cost and that's precisely what I was going on about.
You can simulate a quantum computer on your desktop computer (it wouldn't run nearly as well as a real quantum computer, though), even though the physics of their operation is completely different.
Precisely. The quantum computer would run at one billionth of its capacity, and the next emulation would be even more crippled. Move your frame of reference again, to quantum computer nr. 2, and you run at full speed. But the host (the PC) ground to a halt.
If the universe is computable, then we can simulate it. If the universe is not computable, then we can not be in a computer simulation of one.
This is (excuse me) a narrow-minded approach. There are many shades of gray; you can run a universe on a beefy PC if you sacrifice something that's not needed - like the size of the universe. Or speed. Or whatever else.
From what we do know about atomic particles, all particles of the same type (electrons, etc) are identical.
We will most likely evolve beyond that. In theory, we could be fed random data at every measurement to keep us from evolving, thus limiting our simulation to something the host can run at a target speed.
A valid point (assuming we get stuck at this level). Theories we have; proof we don't. One could say that gravity was meant to be instantaneous, yet we observe CPU speed. We could also say that because gravity is at a more basic level, it's faster. Light gets simulated next, then the rest at sub-light speed, as we run in a dynamic, law-based loop, as opposed to light/gravity, which are embedded in the kernel so they always have the same speed.
If it can't create simulations, then it just cuts off the chain
If it can't emulate a universe of the same complexity as ours, the chain will be cut sooner or later. Which was my point. Force the frame of reference to be the host machine and it runs at full speed; it's just that the last universe grinds to a halt. Move to the last universe and the host machine spins like crazy - however, time in the first universe also spins like crazy, so that universe probably ends really, really fast.
So, either it is the top level universe, or it is not our universe. Either way, if we consider this as a potential variation on the universe, it just strengthens the chance that we are in a simulation.
How do you glue these things together? The only thing all of the above says is "if a universe chain exists as above then we are probably not at the top". That's all. The chance of this chain existing is non-determinable thus the chance of us being in universe 1, 4 or 3 billion is non-determinable. No higher, no lower.
This is not the same as saying "we are either at the top or in a simulation". We could be in the only real universe, as well. We could be spinning inside a giant's blood cell. We could be someone's thought. We could be someone's dream.
Also, there is no infinite recursion, unless the top level simulation has had an infinite time running the simulations. I am not talking about infinite recursions (I haven't even mentioned infinite recursions), just that you can get recursive simulations in a simulated universe. These would be a finite number of recursions.
It can't be infinite (in spite of the definition of recursive simulation) with the given data. It could be infinite if the host is running everything.
E.g. when we press play on our simulation, our universe gets paused and saved (or destroyed) and that simulation now runs on the host. This could happen infinitely.
What we would not see a virus as is "giant bugs" eating the solar system. What we would see is nonsensical results in the physics. Parts of the universe behaving randomly, matter and energy being scrambled. It would be the equivalent of "static" on a TV screen.
That is not necessarily true. You need to take vectorial and object-level(?) concepts into account. When a virus attacks a game, you don't see static. What you get is an error, because the system is protecting itself by watching over the game's shoulder, and when it sees the game has run amok, it kills it.
On a dedicated mainframe there are no such safety systems, just as there are none in dedicated hardware. That's why, in a game with a graphics driver bug (the GPU on the card has no system watching it), you can see the floor disappear from under your feet without falling. You could also see people running around with flower pots stuck to them, continuously exploding, or bodies floating around, looking dead but still shooting.
Giant bugs and sperm whales maybe no, I was cracking a joke but floating people? Sure. Non-accessible areas with no laws, like black holes? Yes. Ghosts? Sure.
Actually, if it's a simulation, it's likely we get temporary errors - like looking out of the corner of your eye and seeing a man with no head, yet when you look again it gets re-rendered by another part of the mainframe (it gets relocated due to the error) and you get the correct image. It is also possible that memory corruption could cause people to malfunction: developing obsessive behavior (stuck processes), forgetting obvious stuff, being unable to find objects in plain view (they aren't there), and so on.
Scary, though, that these things *do* happen. And if the simulation has good error recovery and sanity checks, they self-solve. Also, our brain has been pre-programmed to EXPECT things and FILTER things. There have been experiments that prove this. When you concentrate on some things you are oblivious to everything else (including a woman in a gorilla suit). You'd think you'd notice a woman in a gorilla suit passing by? Think again. We all think that if we saw a sperm whale in our bathroom one night we'd remember. I'm starting to doubt it.
0 -
2. Public transport is only an answer when the majority of people work in a localised area.
[...]
3. The idea of car pooling is a furphy.
Both valid points, but even though carpooling is smoke, public transportation does have a valid point: it can take the load off the road and the gas when loads of people move from point A to B, which DOES happen. While not a solution in itself, it does lighten the load.
IMO, gas taxes are idiotic as long as a valid alternative isn't here. Leave it be; people will adapt when things run out, as we adapted to other factors. Nobody drives hydrogen because there are decades of gas left. What's the point? I mean, if it's the last bread on Earth, why eat only half of it? Preserve what? We'll have to eat something else anyway, so why not finish it?
Why raise taxes artificially to make people move/draw money/whatever when you can just wait it out a few years more? Oil will run low, prices will rise, people will start adapting cars and/or abandon them for other means of transportation.
This tax, along with others, long ago left the realm of being a way to draw money to fund the police and the army. It's a control tool to try to level people out as much as possible. Why not switch to communism and get it over with?
Taxes I understand. Selective taxes for purposes other than building houses and providing services to the citizen I don't. Maybe politics works differently where you are, but around here I can't help feeling like I lent money to a friend to buy me a bike and he claims he no longer has it because he spent it on the lolly he got me for my birthday.
If it were up to me, all politicians would get minimum wage, with no possibility of other finances and strict checks. Politics is for those who aim to do good, not for getting rich. A generous pension should await them upon retirement as a thank-you. Live like us, understand, act. Then you get paid. How does a million-dollar-a-year salary help anyone understand what the gas price is and what "need to drive to work" means?
Sorry about ranting.
0 -
It is. They freeze-dry instant coffee as an example.
Freeze drying is used for things that don't boil well.
As for your beer, minus a few degrees is all it takes, as the water most likely freezes first. A normal fridge keeps around +4 degrees, a quick-freezer (ice cream, etc.) -4 to -8, and a deep freezer (the one that keeps meat for 6 months) about -20 (all figures in Celsius).
I believe your beer made it to under approximately -10 (it starts to freeze at about -6).
0 -
Back to filling the box with foam/hard substance?
I'd go for spreading the force completely over the surface rather than braking the egg. That is, I'd feel safer with an egg that fits perfectly in a metal mold (theoretically speaking) versus a suspension/braking system with a simple brace.
You can't really break an egg if it's fully contained. Whatever forces the contents apply via inertia will have to squeeze the wall molecules apart before breaking the walls, which probably takes *way* more than terminal velocity.
So I'd say get a 10x10x10 box, fill it halfway with something hard and non-adherent (oil the egg if need be) and put the egg in halfway. Once it hardens, you have half of a cast for the egg. Then you can either make the other half the same way, or lay a sheet of paper over the assembly and make the other half over it (the paper will rip when you open the halves and allow retrieval of the egg).
Now all you have to do is make sure the cast is actually touching the egg over most of its surface. Use something with high viscosity if the mold shrinks - honey (I know it's a food), solid hand lotion, motor grease, etc. - to keep the egg walls in contact with the mold. That should hold at the box's terminal velocity.
Now all you do is make sure it doesn't come apart in flight or at landing. Rope? Plenty of duct tape?
P.S. The egg will be OK on the outside but you might have an omelet inside at high velocities.
0 -
If all we had was 1 second, we could not simulate much at all. A recursive simulation can not run faster than the parent simulation.
Awww, a child simulation. A typo threw me off to recursive algorithms.
Well, it's all relative. You see, a simulation can be child-based (a simulation running a universe that itself has a civilization running a simulation). That does not mean it runs THE SAME simulation. It can't.
It's like a camera that films its own output. It sends it to the monitor, which gets filmed, which again gets sent, in a fractal manner, down to a single pixel and no further. Point being, you can't do this forever. At some point the simulation breaks, because nothing is infinite, especially in a computer.
What happens is, you write Universe.exe and run it.
Universe.exe runs 1000 years and hits the development point where the child universe spawns a Universe.exe. At that point, data is added to the system, since those cycles can't be skipped, adding to the load - be it linear, exponential, or of a different order.
Universe.exe runs another 1010 (2000? a million?) years until the child universe runs another Universe.exe. More data is added.
The simulation grinds to a halt. Each cycle is longer, to the point where it doesn't matter.
Let me simplify.
---
If the simulation runs very fast - say, 1 ms until it evolves and spawns a child -
then the next ms it has 2 sims to run. The third ms has that too, and so on, so after one second you have already added 1000 simulations. The next second you do the same. Problem is, that program never completes its cycle.
Fast-forward to real time and you see it freeze. Infinite recursion and locked loops are what we call "frozen" applications.
Of course, in real life what you get is a stack overflow/memory error reeeal quick.
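A toy illustration of that (nothing like a real simulator, just the recursion):

    import sys

    # A universe whose single update step includes running a child universe:
    # the top level never finishes one step, and the stack blows almost instantly.
    def run_universe(depth=0):
        # ...evolve this universe for one tick...
        run_universe(depth + 1)    # the civilization inside launches its own Universe.exe

    try:
        run_universe()
    except RecursionError:
        print("host gave up near the recursion limit of", sys.getrecursionlimit())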
It would be like using your PC to emulate a Macintosh which is emulating a PC. That final PC emulation, even though the original PC was fast, would not be anywhere near as fast as the original.
But that PC runs a Mac that runs a PC that runs a Mac. When you hit the key, it "freezes" because it's locked into a pointless and never-ending loop of creating children. It doesn't run slow; it freezes. Remember, this isn't fluid - you need to quantize to compute.
Well, nothing in a simulation can run faster than the original computer.
Oh yes it can, if SIMULATED: you can simulate only what's in use, simulate a different thing, etc. Or if the EMULATED universe is less complex than the original. Their universe may have billions of levels of subparticles / billions of known macro-universes. We are stuck in a simulation, and when i = 1000 then i := -1000. Then we stand there, wondering how traveling in a straight line gets us back to where we started.
So even if we did use quantum computers (and we are in a simulation), the host computer would still have to be faster, as they are simulating the quantum effects of the entire universe.
This is an endless argument. It only needs to simulate A universe, not THE universe. What we see as uber-complex quantum physics might be child's play in another - or a beta test in another. It might also be very complex because we are trying to explain what was created using Random(). We live in an overcomplex universe. Real matter is formed from basketballs. Who knows? Chocolate?
Also, we only access one billionth(ththth) of the universe at one time. We can't check whether atom nr. 2005 matches atom 1*10^42342452454. We can't. Also, we draw laws from what we see. If some idiot declared speed as a 16-bit unsigned int, we can't travel faster than the speed of light. We don't know it's a bug; we find convoluted explanations and create laws that help us understand how it works. And we succeed, because 2 points determine a line and 3 points determine a circle. What we do is draw the circle and look for an explanation as to why the points are there - and WHY A CIRCLE? We invent the circle, write an equation. 4 points? There's an equation for that, too. 4 billion points? That too. All you have to do is plot random numbers, seed an AI and watch it make sense of them. Maybe we're doing someone's homework.
There is no relation between the runners of the simulation and the universe inside. The watering-down of the simulation also takes care of the infinite recursion: each recursion is simpler, to the point where a certain level fails to run a successful simulation. Most likely chatting over a forum, saying it's impossible to do.
They might shut it down because of anything.
We wouldn't know we'd been swapped to disk.
As for virus protection
We would have giant bugs eating solar systems whole, die of spontaneous combustion, morph into sperm whales - and then we would be restored from a backup and not remember anything, not even realize it was interrupted, let alone that we came really close.
0 -
Out here it's affectionately known as "varnish". What you do is take a plastic tube with a small diameter, heat it slightly and stretch it into a larger tube. When heated again, the plastic reverts to its original size.
It's used to secure wires. Instead of duct-taping them (which leaves glue all over, and the tape ages and falls off with heat and water), such a tube is slipped over the wire, and after soldering it's pulled over the joint and heated. The plastic shrinks and looks just like the original insulation, protecting from heat and water, and it doesn't age.
0 -
You can modify the composition of the water for buoyancy.
0 -
Wait, the dimensions were just 10x10 cm? That sounds like the dimensions of the base, not the height.
If you can have an unlimited height, I'd say put a strong metal rod in the center of a metal base (you could probably find such a construction somewhere).
I assumed it's 10x10x10. If there's no height limit, you use a string and make a yo-yo.
The egg doesn't have to be vertical so that it takes up so much space.
I just thought that if no liquid is used, only half of the egg takes the pressure, and the egg is *much* more resilient on the pointy end. Position is irrelevant in water, obviously.
The benefit of packing it in water is even distribution of pressure across the whole egg. It's virtually impossible to break an egg with even pressure.
My phrasing was lacking. It should be: due to the air bubble in the egg (which is compressible), it is marginally better in theory to encase the egg in a solid rather than in a case with a liquid. I doubt it will make a difference in practice; this is only theoretical and only marginal.
0 -
Water would work, equalizing the pressure, just as solid encasing would too (same effect, more math with water). I just don't understand how papers would. Surely the weight distribution is not that even?
Maybe a tightly packed cardboard?
--
Now that I think about it, solid encasing is better, since the egg still has a little air bubble inside it for the infant to breathe before hatching (so I've understood) so the pressure from outside would crack it (this is in theory - doubt it will at 6 m fall).
0 -
Instead of absolutes, we need to look at probabilities. If we are in a simulation that is being run at 1 frame every millennium, then there are not likely to be many recursive simulation
You lost me. How did you jump to that conclusion? I fail to see any link between recursion and speed of execution. We could be anywhere.
It could also explain [...] paranormal.
I wasn't aware that there was something to explain. No such thing was ever documented, so how do you explain something that doesn't exist?
0
Smaller brain <> stupid (or != if you prefer). If that were true, we could ask an elephant about whatever we don't understand about time/space. There is a lower limit - you can't be REALLY smart if you have only 2 neurons - but the reverse is not true. It's like saying some people are stronger because they weigh more (weight lifter versus couch potato).
As I've said, if you average IQ (I picked IQ because it's numeric and can be averaged), then black IQ would be a number, white IQ would also be a number, and they will not be equal given enough decimals. You could then say that black is smarter than white or the other way around. Whether this is genetic and permanent is quite another story. If the figures are close, they could be invalidated by the time you compute the average.
I sincerely doubt enough research has been conducted to prove such findings, because of the sheer number of people who would need to be tested and the virtual impossibility of devising a test that fits the cultural differences and is still comparable. All IQ tests I've seen so far require basic maths, basic social knowledge of a certain culture and, at the inconsiderate and incompetent edge, knowledge of the US measuring system or English literature. None of these will ever work in South Africa. They'd need a native-language test with localized questions to be even close to comparable.
I'd go as far as to say (can't prove it, but will support it) that it's impossible to devise a test that can compare black people to white people and be precise enough to be taken into consideration.