Posts posted by RyanJ

  1. If your father has a good understanding of chemistry, as you say, then why can he not provide you with some experiments that you can do? Anyone with a knowledge of chemistry beyond GCSE would know interesting (and, more importantly, safe) experiments that can be done at home.

  2. What about creating something less intelligent than us, and then making it more intelligent? Wouldn't that negate your reasoning for why you think it can't be done?

     

    If such a thing could be done then you are correct in that my reasoning there would fall apart.

     

    Ha ha, you naive optimist. Not everyone would agree with that, and even if they did they could be quite mistaken.

     

    I wouldn't call that being optimistic but realistic. People have a tendency not to trust anything new, and although we would not understand something with greater intelligence than our own, I tend to believe people would err on the side of caution and take things one step at a time to ensure nothing bad happens. Then again, as a species we are rarely careful, so it may not turn out that way.

     

    The big problem is that "wanting to help humans/a specific human" is very abstract. It would be very difficult to build that concept into a nascent AI right from the start. On the other hand, if an AI is made without that moral imperative, by the time it is intelligent enough to understand it, it would probably be too late to add. An example of this is the aforementioned Blue Brain Project, where we don't necessarily even have a clue what concepts we are inputting.

     

    Again, I agree with what you are saying. There is also no guarantee that said AI would care, or that it would not interpret the "programming" in ways other than those intended, as happened in the movie "I, Robot", for example. That would be well within the realms of possibility (and a frightening one at that).

  3. Well, we can already make a real intelligence be more intelligent (stimulation, better nutrition, fancy new drugs).

     

    True but that's a different matter. We aren't talking about building a new intelligence from scratch with that one - we're just expanding upon what we already have. Creating is harder than expanding.

     

    What we can do is ensure that the AI "wants" to help us, right at the start, rather than "forcing" it to help us. If it "wants" to escape its position as subservient to humans, it probably will. And don't say it can be turned off or can't move; it could easily become the CEO of a robotics company, owning its own computers and developing its own body.

     

    Very true. Although I'm pretty sure that no such AI would ever be made unless someone was sure that they could keep it under control.

  4. You are very correct, bascule, and that was down to a mistake in my writing; sorry about that.

     

    The issue I meant to point out was this: We are intelligent but where do you draw the line? Are there multiple forms of intelligence or even multiple levels of it? We don't really know.

     

    There is also no conclusive evidence that copying the brain into, say, a neural network will reproduce the effects we see in it, such as intelligence. The brain is still very mysterious, though I do admit that the idea is very interesting. It would have wide-ranging applications if it were to succeed. Thanks for the link.

  5. Bombs and mustard gas. Those who frequent the IRC channel already know.

     

    Well somehow I very much doubt that anyone is going to give out that kind of information anyway - at least I hope not. Thanks for the tip though.

     

    I should rephrase then - any safe kinds of experiments that you would wish to do.

  6. If you search through this forum you will find lots of interesting experiments that are safe. If you think that there are any that are outside your understanding then I wouldn't recommend them without supervision.

     

    Are there any specific types of experiment that you would like to try?

  7. Personally I think that there are two main issues here.

     

    Creation - Can we actually make an AI that's "more intelligent" than ourselves?

     

    I would say that with our current understanding the answer is, quite simply, no. In order to make something with equal or greater intelligence, it seems sensible that we must first be able to define what intelligence is and then replicate it. As it currently stands, there is no conclusive test for intelligence, let alone a description of what it is or how it works.

     

    Imagine sending a modern jet back two hundred years: would they understand the technology and be able to copy it? Doubtful. We are at a stage where we don't understand the concept and so have no hope of copying it.

     

    Control - Can we control an AI once it has been created?

     

    The simple answer, as far as I can see, is yes. Without going into too much detail about programming, it is possible to program a system in such a way that certain aspects can't be removed even if the code can self-modify (through modularization, for example). And let us not forget that any machine has one easy weakness: it can be turned off.
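
    To make the modularization idea a little more concrete, here is a tiny JavaScript sketch (the names and the "rule" are entirely made up for illustration). The point is only that a constraint can be sealed inside a module so that the rest of the program, self-modifying or not, can call it but never remove it:

    // The core constraint lives inside a closure, so outside code can only call it.
    const safetyCore = (function () {
        function violatesRules(action) {
            return action === "harm a human"; // stand-in for whatever rule is built in
        }
        return Object.freeze({
            approve: function (action) {
                return !violatesRules(action);
            }
        });
    })();

    console.log(safetyCore.approve("harm a human"));   // false
    console.log(safetyCore.approve("fetch some data")); // true
    // Trying safetyCore.approve = ... is ignored (or throws in strict mode), because the object is frozen.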

     

    On a side note, if something were intelligent then it is reasonable to believe it would be able to understand right from wrong. If it were to "learn" from people and the information fed to it was morally good, wouldn't it be safe to assume that it would base its moral compass on that rather than just picking one at random? People can be evil by choice or by nurture, and so could an AI.

  8. As I understand it, string theory can't be proven. It could be mathematically described in whole, however, and all simulations based on it would match the observations of our universe. It would become something that works and is freaking beautiful, man :D

     

    Technically speaking, few theories can be proven, as Gödel's incompleteness theorems show. We more or less always have approximations anyway, so nothing is ever exact enough to test with 100% assurance.

     

    That is a good thing, really, as ever more accurate answers give better and better tests for existing theories.

  9. I wonder if they can actually handle all that server stress. They've had problems in the past where the upgrade systems have failed under the load but hopefully they've corrected that... otherwise the servers could go down and that would sort of spoil the record attempt eh? *shrugs*

  10. I think the point is just to download it and get a +1 for the record; a lot of people will probably update automatically and download the actual file just to add some points.

     

    As this has never been done before it's a guaranteed record anyway (apparently) :|

  11. Let me first stress that rollovers are a REALLY bad idea if you wish your content to be accessible, so I would advise that you not use them.

     

    The problem is with your JavaScript event handling; it could be more simply done as follows:

     

    <img src="rollover0.png" id="Phenylethylamine" onmouseover="this.src='rollover1.png'" />

     

    ... because the code in the mouseover event handler will be interpreted in the context of its parent element (in this case, the image tag), it will work just fine to reference the element as "this".
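
    If you also want the image to swap back when the pointer leaves, the same pattern extends naturally (the alt text below is just my guess at a sensible description):

    <img src="rollover0.png" id="Phenylethylamine"
         alt="Phenylethylamine structure"
         onmouseover="this.src='rollover1.png'"
         onmouseout="this.src='rollover0.png'" />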

     

    Hope that helps.

  12. It's to do with things like the rate of rotation of stars about the galaxy's center. From the rate at which they are orbiting, there must be some large mass there pulling them around, but it is too massive and too compact to be anything other than a black hole.
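
    Roughly speaking, for a star on a near-circular orbit the enclosed mass works out as M ≈ v²r/G, so measuring a star's orbital speed v at radius r tells you how much mass must sit inside its orbit. When that comes out to millions of solar masses packed into a tiny volume, a black hole is about the only candidate left.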

     

    You can't actually see a black hole itself, but you can observe its effects, as described in this post and in my first one too :)
