Everything posted by T. McGrath

  1. This is another case where we discovered the asteroid, then lost it for several years, only to find it again just a week ago. Which would seem to indicate that just because NASA may have discovered 98% of the NEOs larger than 1 km, it does not mean that they know where those NEOs are now. Of the 17,785 NEOs (as of March 1, 2018) that NASA has discovered, I would be very interested in knowing the percentage of NEOs that NASA has lost since their discovery.
  2. A very interesting read. Thanks for posting it. I come up with three different mass estimates for Sagittarius A* (Sgr A*): (4.1 ± 0.6) × 10⁶ M☉ from Measuring Distance and Properties of the Milky Way's Central Supermassive Black Hole with Stellar Orbits - The Astrophysical Journal, Volume 689, Number 2, 2008; the estimate from Monitoring Stellar Orbits Around the Massive Black Hole in the Galactic Center - The Astrophysical Journal, Volume 692, Number 2, 2009; and (4.02 ± 0.16 ± 0.04) × 10⁶ M⊙ from An Improved Distance and Mass Estimate for Sgr A* from a Multistar Orbit Analysis - The Astrophysical Journal, Volume 830, Number 1, 2016. I am curious as to how much of those mass estimates include those "nearby" orbiting stellar-mass black holes.
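As a rough cross-check (my own back-of-the-envelope, not from the papers), the two estimates quoted with explicit values can be combined with an inverse-variance weighted mean; adding the 2016 statistical and systematic errors in quadrature is an assumption made purely for illustration.

```python
# A minimal sketch of combining the two quoted Sgr A* mass estimates with an
# inverse-variance weighted mean. Units are 10^6 solar masses; the quadrature
# sum of the 2016 statistical and systematic errors is an illustrative choice.
import math

estimates = [
    (4.10, 0.60),                       # the 2008 estimate
    (4.02, math.hypot(0.16, 0.04)),     # the 2016 estimate (stat and sys combined)
]

weights = [1.0 / sigma**2 for _, sigma in estimates]
mean = sum(w * m for (m, _), w in zip(estimates, weights)) / sum(weights)
sigma = 1.0 / math.sqrt(sum(weights))
print(f"{mean:.2f} ± {sigma:.2f} × 10^6 M_sun")   # ≈ 4.03 ± 0.16
```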
  3. It has been suggested that most planets larger than 1.6 Earth radii (> 10,193.6 km) are not rocky. Which would really complicate things if you were a species attempting to leave your planet. Additionally, for exoplanets smaller than 4 Earth radii the authors found a close relationship between mass and radius. Most 1.6 Earth-Radius Planets are not Rocky - The Astrophysical Journal, Volume 801, Number 1, March 2, 2015
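For the record, the kilometre figure above is just the Earth-radius conversion; a quick sketch of the arithmetic, assuming a mean Earth radius of 6,371 km:

```python
# Quick check (my own arithmetic, not from the paper) of the 1.6 Earth-radius
# threshold quoted above, using a mean Earth radius of 6,371 km.
R_EARTH_KM = 6371.0

threshold_km = 1.6 * R_EARTH_KM
print(threshold_km)   # 10193.6 km, matching the figure in the post
```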
  4. That is good news for the current generation. At least we will be spared being further irradiated by a weakening magnetic field (assuming a Carrington Event is not in our immediate future). Magnetic north is continuing to migrate across northern Canada towards Siberia at a rate of between 55 and 60 km per year, but nobody uses compasses any more. Compasses are so last century technology. With GPS who cares where magnetic north is located or where it goes?
  5. The number of trans-Neptunian carbonaceous asteroids is considerably smaller. The vast majority of trans-Neptunian objects formed in the Kuiper Belt, or beyond, and tend to be icy bodies. It is a rare thing to find an asteroid that formed inside the frost line orbiting beyond Neptune.
  6. I got the size of the asteroid from the paper published in Astrophysical Journal Letters, and its estimated diameter is 291.1 km, with a range of 265.2 to 311.4 km. 2004 EW95: A Phyllosilicate-bearing Carbonaceous Asteroid in the Kuiper Belt - Astrophysical Journal Letters, Volume 855, Number 2, March 15, 2018
  7. This is not a small asteroid either. Although the article does not mention the size of the asteroid, the paper published in Astrophysical Journal Letters estimates its diameter to be 291.1 km, with a range of 265.2 to 311.4 km. Free Preprint: https://arxiv.org/abs/1801.10163
  8. As I stated in the OP, with sufficient data we can distinguish between sub- and super-Chandrasekhar Type Ia SN. For example, the ejecta velocity for all sub-Chandrasekhar Type Ia SN (a.k.a. Type Iax SN since 2013) is less than 10,000 km/s, and the super-Chandrasekhar Type Ia SN all have high concentrations of nickel in their spectra. In the case of SN 2007if, more than a solar mass of nickel was observed in its spectrum, which is what prompted the high estimate of the progenitor's mass. There are additional indicators that allow us to separate the sub- and super-Chandrasekhar Type Ia SN from each other, but it does require that additional data. We can no longer just assume it is a normal Type Ia SN merely because of its light curve. More data is required before we can have any certainty about its absolute magnitude. If between 18% and 48% of the Type Ia SN prior to 2013 were misclassified and should really be Type Iax SN, or sub-Chandrasekhar Type Ia SN with absolute magnitudes between MB = -14.2 and -18.9, then we do need to look at the SN data that was collected to determine whether the lambda-CDM model needs correcting. A great many assumptions about absolute magnitude have been made in the past regarding SN (particularly at z > 1) based solely upon their light curves. We need the data that demonstrates a Type Ia SN is not a sub- or super-Chandrasekhar Type Ia SN, otherwise it calls into question the calculated distances. Source: Type Iax Supernovae: A New Class of Stellar Explosion - The Astrophysical Journal, Volume 767, Number 1, March 2013 (free issue)
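To make that last point concrete, here is a minimal sketch of how a misclassified event skews the inferred distance. The apparent magnitude (24.0) and the fainter absolute magnitude (-17.5, taken from inside the Iax range quoted above) are hypothetical illustration values, not measurements from the thread.

```python
# Minimal sketch: the same observed brightness implies very different distances
# depending on the assumed absolute magnitude (distance modulus relation).
def distance_pc(m_apparent, M_absolute):
    """Invert the distance modulus m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((m_apparent - M_absolute + 5) / 5)

m_obs = 24.0                                # hypothetical apparent peak magnitude
d_assumed_ia = distance_pc(m_obs, -19.46)   # assuming the classic Type Ia peak
d_actual_iax = distance_pc(m_obs, -17.50)   # if the event was really a fainter Iax
print(d_assumed_ia / d_actual_iax)          # ~2.5: distance overestimated by ~2.5x
```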
  9. It can only be a standard candle when the absolute magnitude is known and does not change with each event. During the 1990s, and earlier, we erroneously thought all Type Ia SN had an absolute magnitude of MB = -19.46 because of Chandrasekhar's work in 1930. We know today that Type Ia SN have an absolute magnitude range between MB = -14.2 and -20. Which means that they are not the standard candle everyone assumed they were 20 years ago. Edwin Hubble used Cepheid variable stars as his standard candle in 1929 when he calculated the age of the universe to be only two billion years. Cepheid variables only work as a standard candle if you have the right classical Cepheid variable. If you happen upon a Type II Cepheid variable, or an anomalous Cepheid variable, or a double-mode Cepheid variable, or an RR Lyrae variable, then they are not the standard candle you desire them to be. Expanding the Chandrasekhar Limit has other implications as well. A 2.8 M⊙ white dwarf would be more massive than the most massive neutron star yet observed. Objects with masses greater than 1.44 M⊙ that were once thought to have to be neutron stars may actually be rapidly rotating and/or highly magnetic white dwarfs instead. The only accurate means of measuring cosmological distances is parallax. There is no standard candle in astronomy.
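For completeness, the parallax relation behind that last sentence is the standard one, with the distance in parsecs and the parallax angle in arcseconds:

$$ d\,[\mathrm{pc}] \;=\; \frac{1}{p\,[\mathrm{arcsec}]}, \qquad \text{e.g. } p = 0.1'' \;\Rightarrow\; d = 10\ \mathrm{pc} \approx 32.6\ \text{light-years}. $$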
  10. For the last 88 years we have used Subrahmanyan Chandrasekhar's calculations to determine the maximum mass of a white dwarf. As a result of that mass limit a peak brightness was derived and the Standard Candle was born. However, those calculations were made based upon certain assumptions, namely that the white dwarf was not rotating and had no magnetic field. More recent discoveries (specifically SN 2003fg, SN 2006gz, SN 2007if, and SN 2009dc) have demonstrated that those assumptions made in 1930 may not hold true in some cases. The white dwarf mass prior to SN 2007if's deflagration, for example, was estimated to be 2.4 M☉. In 2013 a paper published by Upasana Das and Banibrata Mukhopadhyay, from the Indian Institute of Science, proposed a new white dwarf limit under the assumption that the white dwarf is highly magnetized. They argue that the outward pressure of the white dwarf's magnetic field would partially counteract the gravitational pull inward, allowing the white dwarf to accumulate additional mass, beyond Chandrasekhar's Limit, before deflagration. As a result of the high nickel content in the spectra of these superluminous Type Ia SN, they suggest that the progenitors were generating a very strong magnetic field, which allowed them to accumulate the additional mass. Furthermore, they also suggest that magnetars, which are currently presumed to be neutron stars with strong magnetic fields, be re-examined as potentially highly magnetic white dwarfs. They calculate the new limit for highly magnetic white dwarfs to be 2.58 M☉. Banibrata Mukhopadhyay more recently published a more comprehensive paper, taking into account various rotational speeds as well as varying levels of magnetic fields, and now estimates the white dwarf maximum limit to be between 2.3 and 2.8 M☉. This would seem to imply that our Standard Candle for measuring distance is not so "standard" after all. In many cases we can use other data about the SN to help determine its absolute magnitude, such as the rate and composition of its ejecta, but absent that data we can no longer assume that a Type Ia SN will always reach an absolute magnitude of MB = -19.46 at peak brightness simply based upon its light curve. Sources:
      New Mass Limit of White Dwarfs - International Journal of Modern Physics D, Volume 22, Issue 12, October 2013 (free preprint)
      Nearby Supernova Factory Observation of SN 2007if: First Total Mass Measurement of a Super-Chandrasekhar-Mass Progenitor - The Astrophysical Journal, Volume 713, Number 2, March 2010 (free issue)
      Significantly Super-Chandrasekhar Limiting Mass White Dwarfs as Progenitors for Peculiar Over-Luminous Type Ia Supernovae - arXiv:1509.09008, September 2015
      The Evolution and Fate of Super-Chandrasekhar Mass White Dwarf Merger Remnants - Monthly Notices of the Royal Astronomical Society, Volume 463, Issue 4, December 2016 (free preprint)
      The Type Ia Supernova SNLS-03D3bb from a Super-Chandrasekhar-Mass White Dwarf Star - Lawrence Berkeley National Laboratory, April 2008 (open access)
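For reference, the non-rotating, non-magnetic limit from the 1930 calculation is conventionally written in the standard textbook form below (not taken from the cited papers), where ω₃⁰ ≈ 2.018 is a constant from the n = 3 Lane-Emden solution, μₑ is the mean molecular weight per electron, and m_H is the hydrogen mass:

$$ M_{\mathrm{Ch}} \;=\; \frac{\omega_3^0 \sqrt{3\pi}}{2}\left(\frac{\hbar c}{G}\right)^{3/2}\frac{1}{\left(\mu_e m_{\mathrm{H}}\right)^{2}} \;\approx\; 1.44\, M_\odot \quad\text{for } \mu_e = 2. $$

It is this value that the rotating and/or strongly magnetized models cited above push up toward 2.3-2.8 M☉.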
  11. Actually, there was reason to think otherwise, and it was Einstein who provided that reason. When Hubble demonstrated that the universe was expanding Lemaître used Einstein's equations to demonstrate that the universe had a beginning. Einstein's own equations show that the universe must either be expanding or contracting, it cannot be static. Einstein did not want to accept what his own equations were telling him, which is why he began work on his cosmological constant. When Lemaître and Einstein met for the first time in 1927 Einstein said to Lemaître, "Your calculations are correct, but your physics are abominable." However, by 1931 Einstein gave up on his cosmological constant and by 1933 commented that Lemaître's theory was “the most beautiful and satisfactory explanation of creation to which I have ever listened.”
  12. Photons existed before electrons combined with nuclei to form neutral atoms. Neutral atoms started to form when the universe cooled to about 4,000 K. When an electron drops from a higher energy level to a lower one, a photon is created and released. When an electron in a lower energy level is excited into a higher one, a photon is absorbed and destroyed. The universe was not "observable" in the sense that we can see anything until neutral atoms were created and the universe became transparent for the first time during the Recombination Epoch. At z > 1,100 all matter is in excess of 4,000 K, ionized, optically thick, and strongly coupled to photons. In other words, the photons are being scattered all the time. The universe is completely opaque.
      Photons are created and released when an electron drops from a higher to a lower energy level. Even before neutral atoms existed, when the universe was still an extremely hot plasma, there were still free electrons, and this is where the first photons came from. As the universe cooled, neutral atoms began to form and electrons were bound to nuclei for the first time, making the universe transparent. While the universe was cool enough for photons to be created from free electrons ~10 seconds after the Big Bang, the universe did not cool enough to become transparent until ~380,000 years after the Big Bang, when neutral atoms were able to form.
      I'm familiar with Hoyle's work on the creation of matter. It was the primary flaw in his Steady State theory. Since we already knew the universe was expanding, thanks to Edwin Hubble, the only way the universe could be eternal (as Hoyle believed) was by continuously creating matter from nothing somewhere in the universe. The paper you referenced was that attempt to explain how matter can be created from nothing. You will also note that the paper was published the year before the cosmic microwave background radiation was discovered, which vindicated Georges Lemaître and proved Fred Hoyle wrong. Fred Hoyle died in 2001 never buying into the Big Bang theory. He simply could not grasp that the universe had a beginning.
      I agree. Hoyle was a brilliant astronomer. He just got this one theory wrong. It happens to the best of them. Even Einstein blew it with his cosmological constant, because he too desired an eternal universe. It is ironic that the term "Big Bang," which is used by everyone now, was coined by someone who did not believe in the theory.
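As a minimal illustration of the emission process described above (standard hydrogen Bohr-model numbers, not values taken from the thread):

```python
# Minimal sketch: energy and wavelength of the photon released when a hydrogen
# electron drops from level n=3 to n=2, using E_n = -13.6 eV / n^2.
def level_eV(n):
    """Hydrogen energy levels in electron-volts."""
    return -13.6 / n**2

photon_eV = level_eV(3) - level_eV(2)   # ~1.89 eV released as a photon
wavelength_nm = 1239.84 / photon_eV     # hc ≈ 1239.84 eV·nm  ->  ~656 nm (Balmer-alpha)
print(photon_eV, wavelength_nm)
```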
  13. Photons did not exist until ~10 seconds after the Big Bang. Over the Photon Epoch that followed, the universe cooled from around a billion kelvin down to ~4,000 K, yet the temperature remained too high for electrons to bind to nuclei. The Photon Epoch begins just after the leptons and anti-leptons annihilate each other during the Lepton Epoch, and lasts until ~380,000 years after the Big Bang, or redshift z = 1,100. It is during the Recombination Epoch, which lasts ~100,000 years and occurs toward the end of the Photon Epoch, that the universe starts becoming transparent to photons. By 380,000 years after the Big Bang the temperature of the universe had cooled enough to allow nuclei and electrons to combine and create neutral atoms, which meant that photons no longer frequently interacted with matter; the universe became transparent. This Recombination Epoch is something we have been able to image, because it is what we observe as the cosmic microwave background radiation. Therefore, if you were to take a ride back in time on the very first photon created, your journey would end ~10 seconds after the universe began. On an interesting side note, it was the 1964 discovery of the cosmic microwave background radiation by Robert Wilson and Arno Penzias that convinced the overwhelming majority of astronomers that the Big Bang really did happen. For 14 years the astronomer Fred Hoyle had argued, convincingly, that the universe was in a Steady State. It was Fred Hoyle who coined the term "Big Bang." He meant it as a derisive term, but the name stuck. Fred Hoyle died never believing that the universe had a beginning.
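A small sketch of the cooling that sets this timeline, using the standard scaling T(z) = T₀(1 + z) with today's CMB temperature T₀ ≈ 2.725 K. At the last-scattering redshift quoted above this lands near 3,000 K, with recombination getting under way somewhat earlier, around 4,000 K:

```python
# Minimal sketch: photon temperature as a function of redshift, T(z) = T0*(1+z).
T0_K = 2.725                 # present-day CMB temperature in kelvin
z_last_scattering = 1100     # the redshift quoted in the post
print(T0_K * (1 + z_last_scattering))   # ≈ 3,000 K at the surface of last scattering
```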
  14. Deep Learning is a programming methodology. It isn't even a program itself. Wikipedia defines Deep Learning as "part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms." Hence, Deep Learning can do none of the things I listed, and it certainly can't pass a Turing test. There are certainly a lot of people out there claiming they have AI. The reality is that AI does not yet exist. Everyone who claims they have created AI, hasn't. "Artificial Intelligence" is one of the most misused terms in all of computer science. Teaching a computer to learn is not intelligence. Programming a computer to do one thing, no matter how well it does it, is not artificial intelligence. I'm afraid that when artificial intelligence does eventually get invented, we won't recognize it because we will have called everything else AI instead.
  15. The stupid part is when NASA says they have a list of confirmed exoplanets, when in reality they have a list of confirmed exoplanets that have also been published in a peer-reviewed journal. But they just tell people that the exoplanets have been confirmed, and nothing else. That is also the deceptive part. Deliberately misrepresenting their subset of exoplanets by excluding the additional criteria that they impose. SCR 1845 is not in the Paris database either. Apparently NASA didn't get the message that we were interested in tracking exoplanets in our exoplanet database, and not brown dwarfs. Any planetary body that is massive enough to begin deuterium fusion has ceased to be a planet and has become a brown dwarf. This occurs somewhere between 13 and 14 Jupiter masses, not 30. I have far less confidence in NASA's data now than I did when I started this thread. Apparently NASA is off in never-never-land doing their own thing that doesn't match what anyone else on the planet is doing. I wish I could say I was surprised.
  16. The layer of oversight is the confirmation. There are still more than 3,000 potential exoplanets waiting to be confirmed. However, once they are confirmed then they should go into a database for confirmed exoplanets. Now if NASA wants to impose an additional criterion, like requiring that the confirmation also be published, then they should create a database and specify it as such. Instead they come up with a list of exoplanets and say they have been confirmed and nothing else. Why are they being deceptive with their data? Are they afraid that if it hasn't been published the exoplanet might not exist, even though it has already been confirmed? Like I said, government stupidity. It also makes me wonder if NASA has a political motivation for keeping certain exoplanets off their list.
  17. Then NASA is the one who is in error. The Paris Observatory listing is the actual number of exoplanets that have been confirmed. NASA's listing is the number of exoplanets that have been confirmed AND published in a scientific journal. Even if the exoplanet has been confirmed, if the discoverer never publishes their findings in a scientific journal NASA will not count it as a confirmed exoplanet. Which is just stupid, but not unexpected from a government agency. Government agencies excel at stupidity.
  18. Except that isn't true. They are listing confirmed exoplanets, just as NASA (and to be fair, it is really JPL at CalTech who is maintaining the database on behalf of NASA) does. Which means that the exoplanets have already been discovered, announced/published, and confirmed. There are still several thousand exoplanets waiting to be confirmed, but the Paris Observatory says these 3,726 exoplanets have already been confirmed. Which is 154 more than NASA says have been confirmed.
  19. The number of confirmed exoplanets appears to depend on who you reference:
      According to NASA's Exoplanet Archive (which is where I suspect the graph the OP presented originates) there were 3,572 confirmed exoplanets as of December 21, 2017, with 592 multi-planetary systems.
      The Extrasolar Planets Encyclopaedia, maintained by the Observatoire de Paris, shows 3,726 confirmed exoplanets in 2,792 planetary systems, with 622 of those systems having multiple confirmed exoplanets. They include the year of the discovery, but not by whom.
      Then there is the Open Exoplanet Catalogue, maintained by MIT, which shows 3,504 confirmed exoplanets as of November 28, 2017. They also do not list who made the discovery.
      Finally we have the Exoplanets.org website, which is supposedly being maintained by Berkeley, Penn State, the National Science Foundation, and NASA. They show 2,950 confirmed exoplanets. I could not find a date when this website was last updated, but it would appear to have been a while.
      In this particular case, I would probably lean toward NASA's figures, but I would want to know why there is a discrepancy of 154 confirmed exoplanets and 30 multi-planetary systems. NASA's database includes more than just Kepler's discoveries. What exoplanets has the Paris Observatory confirmed that we don't know about yet? Wait until the James Webb Space Telescope gets launched in 2019. We may be able to directly image an exoplanet. That will certainly be a milestone in the annals of astronomy.
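If anyone wants to check the NASA count directly rather than trust a graph, something like the sketch below should work. The TAP endpoint and the pscomppars table name (one row per confirmed planet) are my assumptions about the archive's query service, not something documented in this thread, so they may need adjusting.

```python
# A hedged sketch of querying NASA's Exoplanet Archive for its own count of
# confirmed planets. Endpoint and table name are assumptions and may differ.
import requests

URL = "https://exoplanetarchive.ipac.caltech.edu/TAP/sync"
resp = requests.get(URL, params={
    "query": "select count(*) from pscomppars",
    "format": "csv",
})
print(resp.text)   # number of rows the archive currently treats as confirmed planets
```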
  20. You can't answer the OP's question without first defining AI. So far this thread has demonstrated a wide variety of definitions.
  21. With the advent of quantum computers, it may yet be achievable within the next generation or two. I certainly have not ruled out the possibility of developing artificial intelligence. I just don't think we are anywhere close ... yet. We are still developing software that is capable of only doing one thing, and that is not artificial intelligence. I don't care if you want to call it a "neural network," or "heuristic algorithms", it is still only an Expert System until it can do more than one thing without being reprogrammed.
  22. The ability to learn is not an indication of intelligence, just clever programming. Intelligence begins when you apply what you have learned, and to more than just one thing. When you can show me a program that can play Chess/Go, drive me to work in congested traffic, and diagnose any medical problems I might have - without having to be reprogrammed - then you will have achieved artificial intelligence, but doing just one thing (no matter how well) doesn't cut it. I'm saying that developing an application that does just one thing, no matter how well it does it, is not artificial intelligence. It is an Expert System. MIT has been trying to beat the Turing test since the 1960s, and failing. So I'm not surprised to see in their desperation that they made up their own test, which they could pass, and then misassociated it with Alan Turing. I agree with you that some refinement of the Turing test could be in order, but the rules/conditions of the test would have to be established first. Not after-the-fact, as is prone to happen with the media. The goal is not to fool the observer, but rather to make it so the observer cannot distinguish between human intelligence and artificial intelligence. The problem is that there is a subjective component to this test.
  23. We have people claiming that we have artificial intelligence even though they had to write the program that told their computer all the rules of the game. Isn't it amazing how this program was able to play a game - once we programmed all the rules? I mean seriously? Like I said at the very beginning, we need a definition for artificial intelligence, because this sure isn't it. Personally, I'm going to stick with the man who invented the field of artificial intelligence in 1956. If the program cannot pass the Turing test, then it is not artificially intelligent. No matter how many games it is able to play. Thus far nothing we have developed has come close to passing the Turing test.
  24. What you call AI I call an Expert System. A program that can only play the game Go, no matter how good, is not demonstrating any intelligence. It is simply following the instructions that were provided by its human programmer. It is the programmer who is demonstrating the intelligence here, not the program. There is nothing we have developed today that comes even remotely close to artificial intelligence. The "Sophia" bot is nothing more than an upgraded version of ELIZA, which was an early natural language processing program created between 1964 and 1966 at MIT. It was good at mimicking conversation, but it could never pass the Turing test.
  25. I think the first thing that is needed is a clear definition of Artificial Intelligence. There is a difference between AI and Expert Systems. If it can only do one thing, then it is an Expert System and not AI. AI implies that it is capable of doing many things, solving a wide variety of problems from traffic congestion to medical diagnoses - to do more than merely what it was heuristically programmed to do. Too often the term "Artificial Intelligence" is misused, particularly by the media. 99% of the time they are actually referring to an Expert System. When we have end-users who spend an hour searching for the "Any" key, I would have to say that we have arrived. Programs are already smarter than some people, and there is nothing artificial about it.