Posts posted by TakenItSeriously

  1. They don't say if this process is a hypothesis, has been done in a lab, or if it is ready for the big show. I suspect implementation will require significant engineering. With that kind of economic benefit, it seems at least one power company would be ready to sign a contract, and it might save some of the oil market. Some financier of climate deniers should be ready to foot the bill to bring this technology on-line.

    It sounds like they ran a proof of concept but it still seems too good to be true.

    To make sure that the idea isn't too good to be true, in a new study the same researchers have performed a thermodynamic assessment of the proposed CC CNF plant. They found that the concept is economically feasible and even improves the power plants' energy efficiency.

    ...

    Currently, the researchers are working to build and implement the technology as quickly as possible.

    "We are quickly scaling up the process, which is the challenge to rapid deployment and substantial CO2 reduction," Licht said.

     

A 23,560% boost to revenues while completely eliminating the pollution: CO2 emission free.

     

That's the first time I've ever used five figures for a percentage increase.

     

It also burns methane, which seems like it could provide another benefit by reducing methane as a greenhouse gas. Perhaps a process could be created to collect the methane from sewage treatment plants or old garbage dumps that would otherwise allow it to escape into the atmosphere.

  2. My experience with running business data processing programs on parallel computers suggests about 10% additional code is necessary. Reducing that 10% by 1/10th is significant but not remarkable.

     

Pipes per core for Multiple Instruction Multiple Data (MIMD) systems. I think the term core was invented because processors became faster than memory, and one processor with two sets of registers can process two streams of data. If one processor works on two streams of data by time slicing, is that two pipes per core or one? A programmer doesn't need to know pipes per core; it is an issue that hardware designers manipulate to maximize processor performance. If a processor can run 10x faster than memory, then it can process ten threads of data per core (effectively ten pipelines per core). In addition, a processor may have one or more asynchronous processors to do floating point, which means there may be both integer and floating-point pipelines run by one core. Thus, one core may process several threads and multiple data types in parallel pipelines. Hardware can make parallelism decisions better than people, due to real-time data conflicts, and the added complexity of parallel programming can be too difficult for people.

It actually said it requires 1/10th the code. I could see it as being a typo, but the way it emphasized easier programming, I'm not so sure. I thought that it was finding efficiencies by using the same piece of code in ten pipelines instead of ten separate threads? But really, if it's doing the same process, then all ten threads could call one subroutine.

     

I'm not sure I understand how a slower memory could provide greater efficiencies for parallel pipelines. It seems like more of a limitation on effectiveness if the memory can't handle the extra bit rates the parallel pipes can process.

    The paper is here: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&arnumber=7436649

    And the news story: https://www.csail.mit.edu/parallel_programming_made_easy

     

So it doesn't sound like there is anything very novel here, in terms of concepts. Similar methods have been implemented on standard hardware before. But with modern technology you can put a lot of these processors (and a lot of memory) on one chip.

     

     

     

Because in parallel programming there is typically a large overhead for creating tasks, synchronizing and scheduling tasks, communication between tasks, etc. Also, explicitly parallel programs are often more complex because they have to handle splitting the work or data up, and then recombining the results.
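    To make that overhead concrete, here is a minimal Python sketch (my own illustration, not from the paper): even for trivial per-item work, the program must explicitly split the data, schedule it onto workers, and recombine the results, and that bookkeeping is exactly the overhead being described.

[code]
from multiprocessing import Pool

def square(x):
    # The per-task work; trivially small, so overhead dominates.
    return x * x

if __name__ == "__main__":
    data = list(range(100_000))
    # Splitting the data, scheduling it onto worker processes, and
    # recombining the results are all explicit software steps here;
    # this is the bookkeeping that hardware support can hide.
    with Pool(processes=4) as pool:
        results = pool.map(square, data, chunksize=10_000)
    print(sum(results))
[/code]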

     

    Where a processor has hardware support for these things, then you can write much simpler code.

Oh, I see, it's putting much of the code into the hardware.

     

    Thanks.

The trouble is... "the human species is on their list of animals set for extinction."

    Mike

Well, the trouble with overpopulation is that it creates many, many problems that individually don't seem that threatening, but put them all together and they can have a devastating compound effect.

    • We're slowly changing the atmosphere as oceans produce less oxygen and more methane.
    • Oxygen levels have already dropped quite a bit over the past century; this century?
    • Similar issues with the melting tundra, deforestation, dying coral reefs, increasingly severe forest fires, etc.
    • We're efficiently destroying the food chain as our population continues to grow at an exponential rate.
    • The previous century took us from 1.5 billion to where we are now. This century?
    • We can expect extinction rates to climb even more dramatically as climate change worsens over the next 50 years, with unpredictable extreme weather patterns creating natural disasters that get worse year after year.
    • Satellites under threat from space junk creating a chain reaction could have a devastating effect on our technology and economy.
    • Infrastructure in the worst shape ever, with no money to make the repairs?
    • Fracking permanently polluting the water table in many regions.
    • Governments too polarized and too corrupt to be effective.
    • We could lose much of the grain belt to climate change.
    • Overfishing, with polluted and dying oceans, can't be sustainable for long.
    • Extinction of the bees could take out even more crops.
    • So we should already be in pretty bad shape when the inevitable coastal flooding creates tens of millions of displaced refugees around the globe, raising social stress to unprecedented levels and possibly throwing many parts of the globe into anarchy.
    • Topple a few of the wrong governments where nuclear arsenals get pillaged...
    • Supervolcanoes are normally a low risk individually, but with three that are overdue, and maybe a 100-year window where we will be extra vulnerable against bouncing back, suddenly the odds are not looking so safe.

    I don't think such a graphic would tell you anything much. Especially as there is a lot of evidence that we have been having a significant effect on the environment.

     

    However, this is vaguely relevant and quite fun: https://what-if.xkcd.com/8/

     

     

Funny, I enjoyed it.
  4. I've simulated parallel processing using timers in AHK scripts when creating HUDs for online poker, which was kind of cool. It wasn't for efficiency but for continuously scanning for a number of trigger events over multiple tables. It just queued up the commands in a single pipe, though.
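    Something like this pattern, sketched in Python rather than AHK (the names and timing are my own placeholders): repeating timers scan each table and queue commands into one pipe, which a single consumer drains.

[code]
import queue
import threading

commands = queue.Queue()  # the single "pipe" every event feeds into

def scan_table(table_id):
    # Stand-in for scanning one poker table for a trigger event.
    commands.put(f"table {table_id}: trigger fired")
    # Re-arm the timer so this table keeps being scanned.
    t = threading.Timer(0.5, scan_table, args=[table_id])
    t.daemon = True  # let the program exit while timers are pending
    t.start()

# One repeating timer per table, all feeding the same pipe.
for table in range(4):
    scan_table(table)

# A single consumer drains the queue, so commands never overlap.
for _ in range(8):
    print(commands.get())
[/code]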

     

Just curious, is the number of pipes limited to the number of cores you have available?

     

    Also, I don't understand how it could require 1/10 the code.

  5. You've asked some good questions. First off, the most likely end for the universe is heat death, if the universe continues its accelerating expansion.

By heat death we mean extremely cold. How our universe expands is determined in large part by thermodynamic interactions. As the universe expands, its overall density and temperature decrease.

What we refer to as curvature directly relates to our overall density compared to a critical density, the critical density formula being:

    [latex]\rho_{crit} = \frac{3c^2H^2}{8\pi G}[/latex]

This, however, is prior to the cosmological constant. (Further detail on curvature, including distance measures, can be found here.)

    http://cosmology101.wikidot.com/universe-geometry

    Page 2

    http://cosmology101.wikidot.com/geometry-flrw-metric/

Those two pages will explain how curvature works with distance measures in the FLRW metric.

    Now each contributor to total density has its own equation of state.

    Further details here.

    https://en.m.wikipedia.org/wiki/Equation_of_state_(cosmology)

Note that radiation exerts a pressure influence but matter does not. For the Higgs field, you can use the scalar field modelling equation on that last link.

One of the key equations of the FLRW metric, the acceleration equation, is given as

[latex]\frac{\ddot{a}}{a}=-\frac{4\pi G}{3c^2}(\rho c^2+3p)[/latex]

    This leads to

[latex]H^2=\left(\frac{\dot{a}}{a}\right)^2=\frac{8\pi G\rho}{3c^2}-\frac{kc^2}{R_c^2a^2}[/latex]

With the above equations (including the links) you can calculate the rate of expansion with whatever combination you desire.

There is another detail. As the universe expands, the densities of radiation, matter, and the cosmological constant fall off at different rates. The terms under the square root of this equation:

    [latex]H_z=H_o\sqrt{\Omega_m(1+z)^3+\Omega_{rad}(1+z)^4+\Omega_{\Lambda}}[/latex]

show that the matter and radiation densities decrease while the cosmological constant does not. So even when neither matter nor radiation contributes to expansion, the cosmological constant will continue to do so.
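    As a quick illustration of that last equation, here is a Python sketch that evaluates H(z); the density parameters are illustrative, roughly Planck-survey-like values, not from this thread.

[code]
import math

H0 = 67.8                 # Hubble constant today, km/s/Mpc (illustrative)
omega_m = 0.31            # matter density parameter
omega_rad = 9e-5          # radiation density parameter
omega_lambda = 1.0 - omega_m - omega_rad  # assuming a flat universe

def hubble(z):
    # H(z) = H0 * sqrt(Om*(1+z)^3 + Orad*(1+z)^4 + OLambda)
    return H0 * math.sqrt(omega_m * (1 + z)**3
                          + omega_rad * (1 + z)**4
                          + omega_lambda)

for z in (0, 1, 10, 1000):
    print(f"z = {z:>4}: H = {hubble(z):,.0f} km/s/Mpc")
[/code]

    At high z the (1+z)⁴ radiation term dominates, at intermediate z the matter term, and at late times only the constant Lambda term is left, which is the point being made above.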

Ok, but my question about radiation is not so much about the rate of expansion but whether there is some other mechanism I am not aware of by which radiation can dissipate without any mass in the universe. Because, based on what we know, it seems like radiation is always going to be around, and I don't understand why the current end-of-the-universe models don't account for perpetual, ever-expanding radiation.

     

    Previously, I used to think of the universe ending when all matter was gone. Now I don't know what to think.

  6. One can use the FLRW metric to model any form of homogeneous and isotropic universe: matter only, Lambda only, matter removed, radiation only, no dark matter, or any other combination.

The usage is to isolate the influence each has on the expansion and contraction rates. The results are surprising, as they show that even a universe composed entirely of matter will expand, just as a universe of nothing but radiation would, albeit at a different rate.

A good detailed coverage is Barbara Ryden's "Introduction to Cosmology"; she details every combination.

Each combination can be done with any curvature: flat, positive, or negative.
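    For reference, the textbook single-component solutions in the flat case (as covered in Ryden) scale as:

    [latex]a(t)\propto t^{2/3}\;(\text{matter only}),\qquad a(t)\propto t^{1/2}\;(\text{radiation only}),\qquad a(t)\propto e^{Ht}\;(\Lambda\ \text{only})[/latex]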

Thanks Mordred, modeling the universe as permutations of its constituent parts makes a lot of sense.

     

...will expand, just as a universe of nothing but radiation would, albeit at a different rate.

     

    Radiation expansion brings up a question about the end of the universe.

     

In an ever-expanding end-of-the-universe scenario, after the last remnants of the last black hole disappear with a bang and we are left with only radiation, I've heard that it is supposed to eventually disperse into heat. But without matter there doesn't seem to be a mechanism to absorb the radiation and turn it into heat.

     

    So, is it correct to assume that the universe could never end and would expand forever as radiation in a steady state expansion?

     

     

The results are surprising, as they show that even a universe composed entirely of matter will expand.

    Could you expand a bit on how it was surprising?

     

    e.g.

    Did matter alone expand at the same rate as our universe?

Or did matter alone expand at a rate faster than what DE predicted?

    There are various reasons people might do it.

     

    The Milne Model was an early attempt to model the universe, but is contradicted by observations.

    https://en.wikipedia.org/wiki/Milne_model

     

     

     

    Other vacuum solutions are just ways of exploring different cases. There are a limited number of exact solutions to the Einstein Field Equations so I think something can be learned from any of them, even if they don't describe the universe we live in.

     

     

     

    The presence (and distribution) of mass and energy affects the behaviour of space-time but doesn't cause it to exist.

     

     

     

    No. 'Cos it was empty. :)

Ok, but to be clear, are we talking about a universe such as Mordred's example, where at least one thing was still a part of the universe that was studied, be it radiation, Lambda, matter, or DM?

     

Or are we talking about a universe that was truly empty of everything?

     

By the way, I mentioned the Higgs field as an afterthought because, in theory, it is supposed to be everywhere. So I wondered if the Higgs field could define spacetime. It would be a single contiguous field that would contain all of the particles and all of the connections as FLRW defines them.

     

Also, there are clearly problems with obtaining information about spacetime outside of our light cones, but it seems like there is a need to do just that. That's because there is a need for the universe to be homogeneous and isotropic for things like inertial frames to work.

     

    To explain:

I assume that homogeneity and isotropy are to ensure all physical laws remain consistent no matter where we apply them. But what if an observable edge were very near an actual edge of the universe, or located next to an improbably large and dense cluster of galaxies? IDK, whatever is large enough to break the local laws of physics. Then it seems like there is a potential for creating a paradox where light speed is violated by information within our domain.

     

I see potential solutions using something like brane theory, but that's getting into speculation. But I do see a definite need for a single field to be everywhere, and the Higgs field seems to fit the bill. Also, it imparts mass, which could make it related to a quantum loop theory...

     

Sorry, I just had an epiphanic moment and I didn't want to lose my train of thought.

  7. There are solutions to the Einstein Field Equations that describe an "empty" universe. The properties of such a universe can be studied (and compared to ours, for example). So the idea that you need content to define a universe appears to be false.

I'm having trouble understanding what an empty universe is, or why they would try to distinguish it from nothing. Was it done arbitrarily, or were they trying to predict what would be left of a failed universe that recollapsed back in on itself?

     

Could we assume that an empty universe was an appropriate application for Einstein's equations? Didn't they predict that space and time were created as a result of the first matter particles coming into existence?

     

I also can't imagine what the properties were that could be compared to ours. Was it flat, open, closed, expanding, contracting, homogeneous, isotropic, etc.? Does any property besides steady state even make sense without matter?

     

Can you tell us how they define an empty universe, or why they would want to define an empty universe, and what its properties are?

     

    Did it have a Higgs field?

  8. But if one considers that velocity has no limit, that becomes untrue. With infinite velocity, a particle could be at two places at the same moment.

Nice! This is a good explanation for why there must be a theoretical maximum on speed.

     

    Edit to add:

    Upon reflection, while it does sound like a paradoxical result, I'm not certain that it actually leads to a paradox.

     

The dual existence could only last for an instant in time, like teleportation. One could imagine teleportation without creating a paradox; for instance, it doesn't violate cause and effect.

  9. "Just a theory" is a red flag of a phrase in a science discussion. It suggests you don't understand what a theory is, in a scientific context. (it doesn't mean guess, it doesn't mean untested)

    I'm not sure how that comment applies to my understanding of what a theory is.

     

I did misunderstand that the derivations of the Planck numbers were what linked their values in the first place, which was never an issue for me as an engineer.

     

So when Imatfaal mentioned that time and distance were equal in Planck units, at first I didn't notice that it was due to how they were derived, and I initially, erroneously thought the matching numbers proved the link of space and time, which made me question why it wasn't considered a law or principle. Though I corrected that oversight a minute later.

     

Perhaps I should have deleted the erroneous statement, but I'm used to forums permanently reflecting errors in posts, so I added corrections to the comment instead of replacing it, which may have confused you somehow? IDK.

     

     

I can assure you that I have a pretty solid understanding of what qualifies as a theory now vs. what may have been required earlier, and why the shift in requirements is important.

     

    It essentially amounts to validating a new theory vs adding new portions of theory for completing a theory.

     

Solving problems that existing theories fail to solve, while remaining consistent with the complex set of theories already vetted over time, is self-vetting for the most part. It is the complexity and the broad range of vetted theories that it needs to conform to that make solving multiple problems, while conforming to known data in a consistent manner, a very difficult task to achieve with an invalid theory.

     

It's just fitting so many variables that makes it self-validating, like matching 15 points on a fingerprint.

  10. Is that accurate? (the bold part in the above)

    In the example of the cards in envelopes, I don't think it is literally true. Though to be fair, Delta did use the expression "taken at face value".

     

It would only be true if there were no way possible to determine the contents of the envelope through cheating, such as switching the envelopes with envelopes where the contents were known, but that would assume perfect security, which doesn't exist.

     

We often think of it as being hypothetically true because, if no one was cheating, there would be no way to confirm or deny the contents, which is equivalent to it being impossible to know. But you can never assume with absolute certainty that no one is cheating, either.

     

Which, BTW, is why you should never use a scientist to debunk a con artist. They take for granted too many hypothetical conditions.

     

I keep thinking of the gambler's fallacy: if you flipped a fair coin a thousand times and it came up heads every time, the next flip should still be 50:50. But that's only the mathematician believing it's a fair coin because he was told to assume it was fair, when the coin flipper could have been using sleight of hand to control the results. In any case, believing the result should be 50:50 after seeing heads a thousand times in a row is clearly foolish reasoning.
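    For scale, the probability of a fair coin producing that streak is

    [latex]\left(\tfrac{1}{2}\right)^{1000}\approx 9.3\times10^{-302}[/latex]

    so after a thousand heads, the rational move is to abandon the fairness assumption rather than trust the 50:50 prediction.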

  11. The ratio between the distance travelled by light and the time taken is 1 in Planck units, so to an extent the above is true by definition; the top of the equation (distance in a light year) and the bottom of the equation (time in a year) will be the same magnitude, so that the ratio (i.e. the speed of light) equals one.

    You're kidding! Wow, how is it that the linkage of space and time is still just a theory if that's the case?

     

Unless it's based on how Planck units are derived. Like if Planck distance were derived from Planck time and c, perhaps making it a circular result?

     

    Oh, wait, that's what you were saying at the beginning. Lol. I was thinking they were independently derived for some reason but that doesn't seem reasonable.
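    For anyone following along, the circularity is explicit in the definitions: Planck length and Planck time are built from the same constants, so their ratio is c by construction:

    [latex]\frac{\ell_P}{t_P}=\sqrt{\frac{\hbar G/c^3}{\hbar G/c^5}}=\sqrt{c^2}=c[/latex]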

  12. why is the speed of light 299,792,458 metres per second and not, for example, 4529553 m/s.

     

    If you're looking for significance in the number itself, such as finding a perfect number or a factor of pi or something, don't forget that the units are arbitrary so the literal number "299,792,458 m/s" is going to be arbitrary as well.

     

Who knows, if you based c on Planck units and analyzed the number, maybe something would reveal itself, unless it's already been done.

     

Imagine if a light year was the same number as a year in Planck units and no one ever bothered to check.

  13. But you need a pair of universes for every pair of entangled particles.

Not really; all entangled particles create a single indeterminate universe until the point of observation defines the event.

     

Edit to add: if you think this is intriguing, the model is more than just a differential universe that completes QM. For instance, all charged particles become dipoles across time; therefore gravity becomes a loop that stretches across world lines. So we now have a mechanistic explanation of a macro version of the quantum gravity loop that is required for linking QM to relativity.

  14. This sounds like the Many Worlds interpretation. Is that what you mean?

In a sense, only instead of infinite realities, just two realities that are equal and opposite are needed.

     

Think of it like yin and yang, or a universe that allows for free will or chance between outcomes. Without a differential universe, duality doesn't exist, and the only universe that could exist would be one that is completely deterministic, where cause and effect create a chain of events that can only end up with a single outcome until the end of time.

  15. You could show in the current formalism how a "differential" hidden variable mattered. Or explain what that even means. Because the form of the hidden variable is not an issue with the Bell explanations with which I am familiar.

I've made no secret that I am not a physicist or mathematician, so, formalism aside, I will instead try to explain it logically.

     

    In order to do this we must imagine a hypothetical differential universe that consists of dual realities that are equal and opposite and locked in entanglement.

     

Therefore, in the example with the EPR entanglement paradox, there actually exist two realities, with two versions of Alice and two versions of Bob.

     

    In the first reality:

Alice measures the X-spin with a 100% certain result of +, and Bob measures the Z-spin with a non-determinate result, 50:50 + or -.

     

    In the second reality:

Bob measures the X-spin with a 100% certain result of -, and Alice measures the Z-spin with a non-determinate result, 50:50 - or +.

     

Therefore the information does not actually travel any distance and only travels between alternate realities, which occupy the same space in alternate timelines.

  16. We have a speculations section.

Ok, but the issue is that I could only provide the logical model; the formalism would require too long a learning curve on my part, though mostly it would only require logical states. If you're still interested in seeing the model, I could spend some time cleaning up a more presentable synopsis for the QM portion.

     

BTW, there's a gravity loop portion in addition to this that I'm still trying to muddle my way through.

     

Edit to add: the model does resolve the entanglement paradox, but it also solves, or begins to explain, most of the duality issues as well.

  17. Actually, what they were wrong about was their conclusion that quantum theory must be incomplete because there is a correlation between non-local measurements. Bell's inequality shows that no theory with local hidden variables can produce the same results as quantum theory (and, therefore, reality).

    Local hidden variables don't necessarily require inequality as Bell views it. What if the hidden variables were differential instead?

     

For example, consider a differential signal pair. Differential signals are equal and opposite or, put another way, identical signals that are phase-shifted by 180°.

  18. Okay, I think I found the source of your equation, it is this site: http://clinfield.com/2012/07/how-to-convert-centrifuge-rpm-to-rcf-or-g-force/

    Now here's where you went wrong: first off, in this equation r is measured in millimeters, not meters. Secondly, as you said, it uses RPM for the angular velocity.

    So to use a radius measured in meters, you have to multiply the right-hand side of the equation by 1000 (1000 millimeters to the meter), which gives:

    [math]\text{g-force} = 0.001118\,r\,(\text{RPM})^2[/math]

    Now if you want to use RPS rather than RPM, note that R/min = R/60 sec, so to convert from measuring in RPM to RPS you have to do this:

    [math]\text{g-force} = 0.001118\,r\,(\text{RPS}\times 60)^2[/math]

    60² = 3600, so this converts to:

    [math]\text{g-force} = 4.0248\,r\,(\text{RPS})^2[/math]
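    A quick Python sketch (my own check, not from that site) confirms where the 0.001118 constant comes from: it is just (2π/60)²/9.81, i.e. converting RPM to rad/s and expressing the centripetal acceleration in g's.

[code]
import math

def g_force(radius_m, rpm):
    # Centripetal acceleration a = omega^2 * r, omega in rad/s,
    # expressed as a multiple of standard gravity.
    omega = 2 * math.pi * rpm / 60.0
    return omega**2 * radius_m / 9.81

# The constant in the formula above:
print((2 * math.pi / 60)**2 / 9.81)  # ~0.001118
# Example: a 0.1 m rotor at 3000 RPM pulls about 1006 g.
print(g_force(0.1, 3000))
[/code]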

    Now that makes sense, thanks for the help!

  19. emails are not instantaneous.

    Encryption Key example:

The useful, instantaneous information at a distance I was speaking of in that example was the encryption key they shared, which no one else could have any knowledge of, including the man in the middle. Pun intended.

     

However, an even better example is the one given in the EPR paradox, named for the last-name initials of its authors: Albert Einstein, Boris Podolsky, and Nathan Rosen.

     

    EPR Paradox summary:

    It was based upon a mental model of an electron/positron entangled pair that involved the Heisenberg Uncertainty Principle.

     

    The gist of it was:

Given two properties, X-spin and Z-spin, uncertainty guarantees we cannot know the state of both properties at the same time, even if they are separated between entangled particles.

     

Therefore, if Alice received an electron and spontaneously decided to measure either the X-spin or the Z-spin of the electron, then at any point after that:

     

If Bob, by chance, attempted to measure the opposite property from Alice (X/Z or Z/X) on the positron, his result would always be inconclusive, with a 50:50 probability of either + or -.

     

    Or...

     

If Bob, by chance, chose to measure the same property as Alice (X/X or Z/Z), then he would detect its state of + or - with 100% probability.

     

Therefore Bob could always deduce which measurement (X or Z) Alice chose to make, instantly after she made it, despite the distance that separated them.

     

The authors of the paper then acknowledged that one of two things must be true:

     

    *Alice violated the speed of light by instantly communicating which measurement she just made to Bob over a distance.

     

    or...

     

*Quantum mechanics was not a complete theory, and some missing portion of the theory must involve properties within each particle that are hidden from our domain.

     

*Note: I paraphrased those conclusions for clarity's sake. You can find the original quotes on Wikipedia under "EPR paradox".

     

As to the question of the usefulness of a single bit of information, a couple of famous examples come to mind.

     

Paul Revere's signal of the approach of the British at the beginning of the US Revolution, lanterns hung in a church tower: "one if by land, two if by sea," as the poem goes.

     

If you're not a US citizen, then another famous single-bit communication example would be the white or black smoke that signals the cardinals' [un]successful voting in of a new pope.

     

    Note:

While this was only a thought experiment, according to the Wikipedia article, later experiments verified the predicted correlations.
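    For what it's worth, the statistics in that summary are easy to simulate. Here is a toy Python Monte Carlo (my own sketch, encoding the standard singlet rules stated above, not a derivation): same-axis measurements are perfectly anticorrelated, while different-axis measurements come out as independent 50:50 coin flips.

[code]
import random

def singlet_measure(axis_a, axis_b):
    # Toy rule set for an entangled singlet pair:
    # same axis -> perfectly anticorrelated results,
    # different axes -> independent 50:50 results.
    a = random.choice([+1, -1])
    b = -a if axis_a == axis_b else random.choice([+1, -1])
    return a, b

trials = 100_000
same = sum(a == -b for a, b in (singlet_measure('X', 'X') for _ in range(trials)))
diff = sum(a == -b for a, b in (singlet_measure('X', 'Z') for _ in range(trials)))
print(f"anticorrelated fraction, same axis:      {same / trials:.3f}")  # ~1.000
print(f"anticorrelated fraction, different axes: {diff / trials:.3f}")  # ~0.500
[/code]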

  20. Even if light had not been constant would space and time not have still been linked together? Does not any measurement presuppose a time taken to make it?

    I'm not sure this question makes sense.

     

If a theory is in fact true, then speculating about what would happen if it weren't true means speculating about a universe that doesn't exist.

     

However, we may have doubts about a theory's correctness and speculate about what would happen if such and such were not true when looking for an alternative theory that may make more sense. If that is the way you intended the question, then I don't know, but what Mordred said is probably a pretty good bet before I've had a chance to fully digest his response.

     

    If light did not exist as a method of communication then would we not rely upon some other method?

(Aside from sound) we mostly communicate using signal waves in electromagnetic fields through substrate materials, in which light plays only one role, i.e. fiber optics; but there are plenty of other examples of communication that do not involve light, such as any of the Ethernet protocols or wireless communications. Though they all still propagate at the speed of light and are fundamentally based on Maxwell's equations.
  21. The way that I understand it, which I admit is still a bit fuzzy in spots, is that special relativity linked space and time together by virtue of the speed of light being constant and the fact that nothing can go faster than the speed of light.

     

    The most obvious example of this is that when we look at any celestial body that is n light years away, we are looking at it as it existed n years in the past because that's how long the light took to arrive here.

     

Other examples that they are linked have to do with the relativistic effects when traveling near the speed of light; e.g., time dilation and length contraction seem to go hand in hand.
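    Concretely, both effects run through the same Lorentz factor, which is one way of seeing that space and time are linked:

    [math]\gamma=\frac{1}{\sqrt{1-v^2/c^2}},\qquad \Delta t'=\gamma\,\Delta t,\qquad L'=\frac{L}{\gamma}[/math]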

  22. Using angular velocity the formula is

    [math]A_c = \omega^2 r[/math]

    where [math]\omega[/math] is the angular velocity measured in radians/sec

    solving for [math]\omega[/math] gives:

    [math]\omega = \sqrt{\frac{A_c}{r}}[/math]

    v is then just

    [math]v = \omega r[/math]

Actually, the formula used (revolutions/min)², which I changed to (revolutions/sec)² to fit the units for g of m/s².

     

I figured the distance traveled per revolution is just the circumference, or 2πr.
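    Which checks out: at f revolutions per second, the speed is

    [math]v = f \cdot 2\pi r = \frac{\omega}{2\pi}\cdot 2\pi r = \omega r[/math]

    since ω = 2πf, matching the last equation above.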
