
Sin

Members
  • Posts

    17
  • Joined

  • Last visited

Posts posted by Sin

1. Not only do lipid bubbles form spontaneously in the proper environment, but the primitive genetic material proposed in theories of abiogenesis could theoretically self-replicate without the aid of enzymes through nucleic acid base pairing (adenine with thymine, guanine with cytosine; a toy sketch of this pairing appears at the end of this post).

     

Alright. I had to look up a few of those words, :doh: but I believe I got the point of your statement.

     

If I find another evolution topic similar to this, I will post it in here instead of making a new topic.

     

EDIT: iNow, thank you also. Dang, this is probably the sixth or eighth topic on which you have taught me something new. I'll definitely be checking out those videos soon!
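
As an aside on the base-pairing point quoted at the top of this post, here is a minimal, purely illustrative Python sketch (my own addition, not from the original discussion) of why complementary pairing makes copying information mechanically simple. The A-T/G-C pairing table is the only biology it assumes, and it says nothing about the chemistry needed to do this without enzymes.

# Toy illustration of template-directed copying via base pairing.
# Assumes only the standard complement table: A pairs with T, G pairs with C.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def paired_strand(template: str) -> str:
    """Return the strand whose bases pair with the template, base by base."""
    return "".join(COMPLEMENT[base] for base in template)

template = "ATGCGT"
print(paired_strand(template))                 # TACGCA
print(paired_strand(paired_strand(template)))  # ATGCGT: two rounds of pairing reproduce the original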

2. Alright, I've been pondering this text I noticed on a different website.

What would be your response to this statement? I really do not understand what they are trying to get at.

This just doesn't sound right. I can't put my finger on it, but I find something strange about it, so I decided to ask about it here.

As a side note: I believe he is talking about the Big Bang theory.

     

Just listen to this.

I won't go through a detailed description of the likelihood of each of the necessary parts of a cell forming; I will just tell you the chances of the cell as a whole forming spontaneously. In mathematics (statistics), something that has 10^50 chances of succeeding before one chance of failure is known as a fact, and the opposite, when something has 10^50 chances of failing before one chance of succeeding, is known as a statistical impossibility. For a single cell to form spontaneously in the most ideal conditions is a little over 10^(1 billion) chances of failure before one chance of succeeding. It is statistically trillions and trillions and trillions and trillions of times more likely for you to drop a metal ball and, instead of gravity making it fall, have it float up in the air and fly away. Considering the chances of the ball floating away: if you dropped the metal ball once a second, it would take a time much longer than the age of the universe just to drop the ball half the number of times you would need to in order to see it float away. So please tell me how life can spontaneously form. Don't you think that there is an outside intelligence trillions and trillions of times greater than ours that governed the creation of, or created, our universe?
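
As a quick sanity check of the ball-drop analogy in the quote (my own arithmetic, not part of the quoted text): taking the 10^50 figure from the quote's definition of "statistical impossibility" as the number of drops needed, and assuming one drop per second and an age of the universe of roughly 13.8 billion years, 10^50 drops would take on the order of 10^32 times the current age of the universe, which is consistent with the quote's "much longer than the age of the universe" wording. This checks only the scale of the analogy; it says nothing about how the quoted probability for cell formation was obtained.

# Back-of-the-envelope check (assumed numbers: 13.8-billion-year-old universe,
# one drop per second); not part of the quoted text.
AGE_OF_UNIVERSE_S = 13.8e9 * 365.25 * 24 * 3600   # about 4.35e17 seconds

drops = 1e50                                      # 1e50 drops at one per second = 1e50 seconds
print(f"{drops / AGE_OF_UNIVERSE_S:.1e} ages of the universe")   # roughly 2.3e32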

  3. I need to know some decent books/authors for the following topics:

     

1. Astronomy and cosmology

    2. Modern and Theoretical Physics

    3. Relativity

4. And... why not, maybe one about genetics.

     

I would really appreciate some books, or some well-known science authors to look for. I prefer reading books to piecing together various information across the internet.

     

Make sure they are not advanced books; I'm new to physics.

4. It's pretty much certain that they have extrapolated something HUGE, which will have the intellectual community going nuts for weeks!

     

    I loved it.

     

    One other favorite quote:

He starts telling me about how he just put some Hot Pockets in the microwave before he went to the bathroom. When he got back, the microwave was empty and THE HOT POCKETS WERE ALREADY ON THE COUNTER! Derek says he can't even chew because of the fear inside.
  5. Oxford, England—The Global Catastrophic Risks conference sponsored by the Future of Humanity Institute concluded on Sunday. Participants were treated to a series of presentations describing how billions of people could potentially bite the dust over the next century. The possible megadeath tolls of both natural and biotech pandemics were considered. The chances that asteroids, comets, or gamma ray bursts from a nearby supernova could wipe out humanity were calculated. The old neo-Malthusian threats of overpopulation, resource depletion, and famine were trotted out. But these risks to future human well-being paled in comparison to one main menace—malicious human ingenuity.

     

Human ingenuity forged the still massive arsenals of nuclear weapons held by the United States and Russia. And, as conference participants argued, human ingenuity is on track to craft nanotech fabricators that can make essentially any product, including weapons of mass destruction, at essentially no cost, not to mention a self-improving artificial intelligence possessing god-like powers to pursue its own goals.

     

First, let's consider the nuclear threat. Joseph Cirincione of the Ploughshares Fund pointed out the good news that the world's nuclear arsenals have been cut in half, down from 65,000 to 26,000 since the height of the Cold War. However, the U.S. retains 10,685 nuclear bombs and Russia is estimated to have around 14,000. Of those, 4,275 in the U.S. and 5,192 in Russia are active. Both countries maintain 2,000 weapons on hair-trigger alert, ready for launching in 15 minutes or so. Cirincione offered a couple of scary scenarios, including one in which there is an unauthorized launch of all 12 missiles from a Russian submarine containing 48 warheads with about 5 megatons total destructive power. Such an attack would kill 7 million Americans immediately. A retaliatory American attack aimed at several hundred Russian military assets would kill between 8 and 12 million Russians.

     

With regard to the possibility of an accidental nuclear war, Cirincione pointed to the near miss that occurred in 1995, when Norway launched a weather satellite and Russian military officials mistook it for a submarine-launched ballistic missile aimed at producing an electromagnetic pulse to disable a Russian military response. Russian nuclear defense officials opened the Russian "football" in front of President Boris Yeltsin, urging him to order an immediate strike against the West. Fortunately, Yeltsin held off, arguing that it must be a mistake.

     

    A global nuclear war scenario in which most of both Russian and American arsenals were fired off would result in 105 to 230 million immediate American deaths and 28 to 56 million immediate Russian deaths. One of the effects of such an attack would be a rapid cooling of global temperatures as sunlight was blocked by dust and smoke. Cirincione argued that even a nuclear war limited just to bitter enemies India and Pakistan could produce enough dust and smoke to lower global temperatures by one half to two degrees Celsius, plunging the world back to the Little Ice Age.

     

The good news is that Cirincione sees an opening for negotiations to totally eliminate nuclear weapons. He pointed to an initiative aimed at eliminating nuclear weapons completely, put forward by the "Four Horsemen of the Un-Apocalypse": former Secretaries of State Henry Kissinger and George Shultz, former Sen. Sam Nunn (D-Ga.), and former Secretary of Defense William Perry. In fact, both of the presumptive major party presidential candidates, Sen. John McCain (R-Ariz.) and Sen. Barack Obama (D-Ill.), have explicitly endorsed the idea of global nuclear disarmament. Cirincione argued that a commitment by the declared nuclear powers would have the effect of persuading countries like Iran that they did not need to become nuclear powers themselves.

     

Cirincione danced around the question of what to do about Iran's pursuit of nuclear weapons, pointing out that its nuclear facilities are hardened, dispersed, and defended. Cirincione asserted that the U.S. has 5-day and 10-day plans for taking out Iran's nuclear facilities, but he noted that such plans don't end the matter. Iran has moves too, including trying to block oil shipments through the Strait of Hormuz, revving up terrorist attacks in Iraq, and even aiding terrorist attacks in the U.S. Cirincione claimed that the Iranians are still five to ten years away from making a nuclear bomb. On a side note, Cirincione admitted that he initially did not believe that the Syrians had constructed a nuclear weapons facility, but is now convinced that they did. The Syrians hid it away in a desert gully, disguising it as an ancient Byzantine building.

     

Terrorism expert Gary Ackerman from the University of Maryland and William Potter from the Monterey Institute of International Studies evaluated the risks from two types of nuclear terrorism: the theft of nuclear material followed by the construction of a crude bomb, and the theft of an intact nuclear weapon. They set aside two lower-consequence attacks: the dispersal of radiological material by means of a conventional explosion, and the sabotage of nuclear facilities. Could non-state actors, a.k.a. a terrorist group, actually build a nuclear bomb? Potter cited an article by Peter Zimmerman in which he estimated that a team of 19 terrorists (the same number that pulled off the September 11 atrocities) could build such a bomb for around $6 million. Their most challenging task would be to acquire 40 kilograms of highly enriched uranium (HEU). There are 1,700 tons of HEU in the world, including 50 tons stored at civilian sites. Potter acknowledged that intact weapons are probably more secure than fissile material.

     

Ackerman noted that only a small subset of terrorists has the motivation to use nuclear terrorism. "So far as we know, only Jihadists want these weapons," said Ackerman. Specifically, Al Qaeda has made ten different efforts to get hold of fissile material. Ackerman told me that Al Qaeda had been defrauded several times by would-be vendors of nuclear materials. Just before the September 11 atrocities, two Pakistani nuclear experts visited Osama bin Laden in Afghanistan, apparently to advise Al Qaeda on nuclear matters. One possibility is that, if Pakistan becomes more unstable, intact weapons could fall into terrorist hands. Still, the good news is that intercepted fissile material smugglers have actually been carrying very small amounts. Less reassuringly, Potter did note that prison sentences for smugglers dealing in weapons-grade nuclear material have been less than those meted out for drunk driving.

     

    One cautionary case: Two groups invaded and seized the control room of the Pelindaba nuclear facility in South Africa in November, 2007. They were briefly arrested and then released without further consequence. Both Ackerman and Potter agreed that it is in no state's interest to supply terrorists with nuclear bombs or fissile material. It could be easily traced back to them and they would suffer the consequences. Ackerman cited one expert estimate that there is a 50 percent chance of a nuclear terrorist attack in the next ten years.

     

While nuclear war and nuclear terrorism would be catastrophic, the presenters acknowledged that neither constituted an existential risk; that is, a risk that could cause the extinction of humanity. But the next two risks, self-improving artificial intelligence and nanotechnology, would.

     

    The artificial intelligence explosion?

     

    Singularity Institute for Artificial Intelligence research fellow Eliezer Yudkowsky began his presentation with a diagram of the space of possible minds. Among the vast space of possible minds, a small dot represented human minds. His point is that two artificial intelligences (AIs) could be far more different from one another than we are from chimpanzees. Yudkowsky then described the relatively slow processing speeds of human brains, the difficulty in reprogramming ourselves, and other limitations. An AI could run 1 million times faster, meaning that it could get a year's worth of thinking done in 31 seconds. An "intelligence explosion" would result because an AI would have access to its source code and could rapidly modify and optimize itself. It would be hard to make an AI that didn't want to improve itself in order to better achieve its goals.
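
A quick check of the "year of thinking in 31 seconds" figure (my arithmetic, not part of the quoted article): a year is about 3.16 x 10^7 seconds, so dividing by the stated factor-of-one-million speedup gives roughly 31.6 seconds.

# Assumed speedup of 1,000,000x, as stated in the paragraph above.
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # about 3.16e7 seconds
print(SECONDS_PER_YEAR / 1_000_000)     # about 31.6 seconds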

     

Can an intelligence explosion be avoided? No. A unique feature of AI is that it can be a "global catastrophic opportunity." Success in creating a friendly AI would give humanity access to vast intelligence that could be used to mitigate other risks. But picking a friendly AI out of the space of all possible minds is a hard and unsolved problem. According to Yudkowsky, the unique features of a superintelligent AI as a global catastrophic risk are these: there is no final battle; an unfriendly AI simply kills off humanity. There is nowhere to hide, because the AI can find you wherever you are. And there is no learning curve, since we get only one chance to produce a friendly AI. But will it happen? Yudkowsky pointed out that there is no way to control the proliferation of "raw materials," e.g., computers, so the creation of an AI is essentially inevitable. In fact, Yudkowsky believes that current computers are sufficient to instantiate an AI, but researchers just don't know how to do it yet.

     

    What can we do? "You cannot throw money or regulations at this problem for an easy solution," insisted Yudkowsky. His chief (and somewhat self-serving) recommendation is to support a lot of mathematical research on how to create a friendly AI. Of course, former Sun Microsystems chief scientist Bill Joy proposed another solution: relinquishment. That is, humanity has to agree to some kind of compact to never try to build an AI. "Success mitigates lots of risks," said Yudkowsky. "Failure kills you immediately." As a side note, bioethicist James Hughes, head of the Institute for Ethics and Emerging Technologies, mused about how much longer it would be before we would see Sarah Connor Brigades gunning down AI researchers to prevent the Terminator future. (Note to self: perhaps reconsider covering future Singularity Institute conferences.)

     

    The menace of molecular manufacturing?

     

Next up were Michael Treder and Chris Phoenix from the Center for Responsible Nanotechnology. They cannily opened with a series of quotations claiming that science will never be able to solve this or that problem. Two of my favorites were: "Pasteur's theory of germs is a ridiculous fiction," by Pierre Pachet in 1872, and "Space travel is utter bilge," by Astronomer Royal Richard Woolley in 1956. Of course, the point is that arguments that molecular manufacturing is impossible are likely to suffer the same predictive failures. Their vision of molecular manufacturing involves using trillions of very small machines to make something larger. They envision desktop nanofactories into which people feed simple raw inputs and get out nearly any product they desire. The proliferation of such nanofactories would end scarcity forever. "We can't expect to have only positive outcomes without mitigating negative outcomes," cautioned Treder.

     

    What kind of negative outcomes? Nanofactories could produce not only hugely beneficial products such as water filters, solar cells, and houses, but also weapons of any sort. Such nanofabricated weapons would be vastly more powerful than today's. Since these weapons are so powerful, there is a strong incentive for a first strike. In addition, an age of nanotech abundance would eliminate the majority of jobs, possibly leading to massive social disruptions. Social disruption creates the opportunity for a charismatic personality to take hold. "Nanotechnology could lead to some form of world dictatorship," said Treder. "There is a global catastrophic risk that we could all be enslaved."

     

On the other hand, individuals with access to nanofactories could wield great destructive power. Phoenix and Treder's chief advice is more research into how to handle nanotech when it becomes a reality in the next couple of decades. In particular, Phoenix thinks that it's urgent to study whether offense or defense would be the best response. To Phoenix, offense looks a lot easier—there are a lot more ways to destroy things than to defend them. If that's true, that narrows our future policy options.

     

This conclusion left me musing on British historian Arnold Toynbee's observation: "The human race's prospects of survival were considerably better when we were defenseless against tigers than they are today when we have become defenseless against ourselves." I don't think that's right, but it's worth thinking about.

     

Source: http://www.reason.com/news/show/127676.html

     

Are all of these threats probable?

Which of these would be the biggest threat?

And of course, are there any other threats that could cause an end to humanity that should be added to this list?

  6. http://en.wikipedia.org/wiki/Science

    http://en.wikipedia.org/wiki/Fields_of_science

     

For a general overview of something you know little to nothing about, Wikipedia is good.

     

Astrology (you mean astrophysics; trust me, never call it astrology to an astrophysicist...) ;)

     

    http://en.wikipedia.org/wiki/Astrophysics

    http://hyperphysics.phy-astr.gsu.edu/hbase/astro/astcon.html#astcon

     

    Biology:

    http://en.wikipedia.org/wiki/Biology

     

Nuclear physics (and particle physics, which you possibly mean):

    http://en.wikipedia.org/wiki/Nuclear_physics

    http://en.wikipedia.org/wiki/Particle_physics

    http://hyperphysics.phy-astr.gsu.edu/hbase/nuccon.html#c1

     

These will give you a broad overview (HyperPhysics varies from highly mathematical and in-depth to washy and hand-wavy). If you have any questions, post them here; read the threads here, and if you have questions, ask them. We WON'T think any the worse of you for asking a question. :)

     

Edit:

     

I also cannot emphasize enough just how broad, varied, and scarily large an area the concept of science covers. I feel this is something that is often just glossed over in schools; science is quite literally everything!

     

    Thank you.

    +Rep

7. As far as I'm concerned (being an engineer), science is about the knowledge itself, and engineering (which is also science, in a way) is about what you can actually do with the knowledge.

     

This is perhaps the first thing to decide: do you want to discover new things, or do you prefer to apply the discoveries of others in a new way? (Like in my field, chemistry: the chemists work in the laboratory doing research on new materials. Once they find something that makes a good product, I step in and start working on designing the factory.)

     

    Most fields of science also have an applied version (the engineering side of the story).

     

     

I would like to discover new things. Where should I go? So far, skimming through the forum, I have found an interest in astrology, biology, and a tad bit of those nuclear studies (unsure of the name).

    Any good resources to point out for these subjects?

8. Would it be possible to revert it back to an acceptable level, or would the process of 'fixation' just stop any more destruction to the environment?

     

If it could be reverted, would humans need to revert it, or would it naturally fix itself over the course of many years?

9. Ever since I got interested in online forums, I have somehow ended up with Sin.

Maybe because of my various acts, or my aggressive nature, or my anti-God symptoms (yes, it's a real sickness people can get nowadays!), I ended up typing Sin into the "Username" box, and ever since then I've been declared Sin on every website I go to, no matter what.

Even if the name is taken, I'm still Sin!

10. Hmm, I may be late, but it still seems fun to try the test.

    Are you a student or a parent (think of this as child or adult)?

     

    Age: 17

     

How long have you been driving?: About 2 years. (I started driving with my parents at a younger age than normal.)

     

    Gender: Male

     

    Race: Caucasian

     

    TYPE OF CAR YOU DRIVE: S10 LS Truck

     

Year: 1997

Model: Chevy

     

    Do you text while driving (or waiting at the traffic light)?: No

     

    Do you talk on the phone while driving (or waiting at the traffic light)?: Yes

    If so, how often? (all the time, most of the time, sometimes, rarely) Sometimes

     

    Do you listen to music while driving (or waiting at the traffic light)?: Yes

    If so, how often? (all the time, most of the time, sometimes, rarely) All the time/Most of the time

     

    Do you eat while driving (or waiting at the traffic light)?: Yes

    If so, how often? (all the time, most of the time, sometimes, rarely) Sometimes

     

    Have you been involved in any accidents while you were driving? No

     

    While you are driving, how often do you talk to the other passenger in the car? (all the time, most of the time, sometimes, rarely) All the time/Most of the time

  11. I think the tree already addressed your question, but just to add to what he said...

     

The question is really baseless. You have cousins, right? People who were born as children to your parents' siblings? Well, how come there are no people in between you and your cousins (like "half-cousin/half-you")? Because it's a stupid question. :)

     

    For the second question (why are there still apes), it's another somewhat baseless question. It's like asking "If Americans and Australians all mostly came from Europe, why are there still Europeans?" It's just completely irrelevant.

     

     

    Also, here's a good site with some FAQs that I think will help you to make the idea even more clear in your mind:

    http://www.pbs.org/wgbh/evolution/library/faq/cat03.html

     

     

    I really hope this helps, and I'm glad you've taken the time to ask. That shows that you have a genuine desire to understand, and I commend that. :)

     

I want to thank you for those statements; I had been wondering how to respond to relatives who say things like this, too. I had to read over your analogies a couple of times, but when I got them I fully understood. :doh:

12. I would have to agree with Mokele on this issue; I can't see how any project such as this would be easier whether the animal is alive or dead.

     

And animal rights radicals... err, 'activists' usually lie about everything possible without backing it up, much like PETA does with a lot of issues. Although their original agenda is very pure and humane, their way of achieving it is usually corrupt.

     

    As the "Does it hurt as much as it seems" question was proposed, I would say yes.

13. SFN is where I first started learning all sorts of new and interesting things. Just hang around, read discussions, and ask questions (we like questions) when you're confused about something.

     

I would like to stay, and probably will, but I'll mainly just view everything, considering that for about half of the things I see here I have absolutely no idea what they are talking about, lol.

14. I've been trying to find something decent that would help me get a better grip on science.

     

I need extremely beginner-level lessons/information. Some of the words in this forum I have never seen in my life. I'm in a public high school, though... so I have an excuse! :)

     

I'd really prefer books or webpages. Forums are alright; I will probably attempt to come back here in the future, but most likely if I stayed I would just embarrass myself, heh.

     

And one last heads-up: I don't know any of the "sections" of science either, so I do not know which I would be most interested in, which is why I want broader information that covers all of science.

     

Thank you in advance if anyone can help.
