Posts posted by EdEarl

  1. Some think a sentient AI needs emotions, which may be true. But AI can be improved significantly beyond current capabilities without emotions. Sam can't taste anything and can't process sandwiches; it just needs to be plugged in. But some day I expect an AI being will be capable of eating herring sandwiches and using the energy as we do.

  2.  

    Wikipedia

    Vacuum energy is an underlying background energy that exists in space throughout the entire Universe.

    Clearly the vacuum energy has not been measured everywhere. Can I assume the vacuum energy is more or less constant everywhere, or is it possible some places have a different vacuum energy than other places?
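     

    For scale, a standard figure (assuming vacuum energy is identified with the cosmological constant Λ, which is an assumption on my part, not something measured point by point):

        ρ_Λ = Λc⁴ / (8πG) ≈ 5 × 10⁻¹⁰ J/m³

    In the standard cosmological model this density is taken to be uniform throughout space, but that uniformity is a modeling assumption rather than a direct measurement.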

  3. If all else is the same, the two eggs will harden equally, since boiling water stays at about 100 °C whether it simmers or boils hard. However, the eggs sit on the bottom of the pan, and the egg in a fast-boiling pan may pick up additional heat from the pan bottom and harden a bit quicker. Even so, the gain is small for the amount of energy lost by boiling rapidly.

  4. Fear isn't the only motivator. Actually, hunger should work all on its own. You train the system to decrease its feeling of hunger. Eating decreases hunger. Once it tries eating, that should reinforce itself well enough.

     

    Even most people eat more because they don't want to feel hungry than because they are afraid of dying if they don't get food.
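     

    A minimal sketch of the quoted idea in Python (every name and number here is hypothetical, invented for illustration): let the drop in a "hunger" variable be the reward signal, so eating reinforces itself without any fear of death.

        import random

        # Hedged sketch: tabular action values; reward = decrease in hunger.
        actions = ["eat", "wander", "sleep"]
        value = {a: 0.0 for a in actions}  # learned value of each action
        hunger = 1.0                       # 0 = sated, 1 = very hungry

        for _ in range(200):
            action = random.choice(actions)        # explore at random
            if action == "eat":
                new_hunger = max(0.0, hunger - 0.5)
            else:
                new_hunger = min(1.0, hunger + 0.1)
            reward = hunger - new_hunger           # less hunger feels "good"
            value[action] += 0.1 * (reward - value[action])
            hunger = new_hunger

        # "eat" typically ends with the highest value: it alone reduces hunger
        print(max(value, key=value.get))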

    Currently implemented AI systems don't have any feelings, and as those systems are improved, they will not suddenly experience feelings. For example, Google Translate can be improved so it does a better job of translating. For an AI system to experience any emotion (fear, hunger, frustration, love, etc.), someone must design subsystems that emulate it.

     

    Suppose Google merges their search engine with translation, mapping, and scholar, improves it so that you can interact with it verbally, and calls it Google Chat. You can talk to Chat like you can talk to a person. Chat has no emotions; it just does net searches and interacts with you more or less like a research librarian. Does it need emotions? Would a research librarian with attitude be a benefit?

  5. If we want to look at this at least slightly more rigorously, it might be worth considering what emotions actually are from a results-oriented perspective, rather than just how they feel and how people stereotypically act as a result of them.

     

    In short, you respond to input differently when you are angry (or sad, happy, etc) than when you are not.

     

    There's no reason an AI couldn't be programmed with situationally dependent adjustments to the weights in a decision tree (which is effectively what an emotional state would look like in an AI from a practical perspective), but there's no reason that an AI's emotions would have to look anything like a human's, or that its responses to those emotions would have to resemble a human's.

     

    Anger, for example, is a response to a situation with no apparent acceptable solution. You could program an AI to adjust its decision making when it encounters such a problem. It may even be a good idea to allow a change in how decisions are made when no acceptable decision seems available under the normal way of processing them, but there's no reason the new set of responses would need to be violent just because that's how humans tend to react. You would need to hardwire or teach the AI a violent set of responses, when you could instead program or teach it a set of responses entirely unlike what a human would have.
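     

    For concreteness, a minimal Python sketch of the quoted weight-adjustment idea (all names and numbers are invented for illustration, not taken from any real system). Note that the "frustrated" state here shifts behavior toward retreat, not violence:

        # Baseline preference score for each candidate action.
        BASE_WEIGHTS = {"negotiate": 0.6, "retreat": 0.3, "escalate": 0.1}

        # Per-emotion multipliers: an emotional state is just a situational
        # reweighting, and nothing forces it to favor violent responses.
        MODIFIERS = {
            "calm":       {"negotiate": 1.0, "retreat": 1.0, "escalate": 1.0},
            "frustrated": {"negotiate": 0.5, "retreat": 1.5, "escalate": 1.2},
        }

        def choose_action(emotion):
            """Pick the highest-scoring action under the current emotion."""
            scores = {a: w * MODIFIERS[emotion][a]
                      for a, w in BASE_WEIGHTS.items()}
            return max(scores, key=scores.get)

        print(choose_action("calm"))        # negotiate
        print(choose_action("frustrated"))  # retreat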

    AI is not programmed like an accounting system, which is why AI cannot simply be programmed to adjust its decision making. Microsoft recently put an AI teen girl online and deleted her within 24 hours.

     

    Microsoft deletes 'teen girl' AI after it became a Hitler-loving sex robot within 24 hours

    One programs an AI learning system and lets it learn, much as people learn. Even without emotions there will be unpredictable results. If you add emotions, you must decide what effect each emotion will have before learning begins; once learning begins, the emotional system runs on automatic. That will produce even more unpredictable results, which are beyond my ability to SWAG. Even without emotions, my SWAGs are iffy.

     

    I'm not saying AI should not be built with emotions. However, one typically engineers novel things by beginning simple and mastering that, then adding a little complexity (one emotion) and mastering that, then adding another. I'd recommend not even putting in a hunger circuit to begin with. Without fear of "death" it would have no particular reason to "eat," and it would simply expire if a person didn't plug it in to recharge its batteries.

     

    A pathological killer may not empathize, but they must feel some emotion when killing. Otherwise, why would they kill?

  6. Let's consider how sam's neurons might be made. I can think of two technologies (plus a third, added as a postscript).

    1. Synthetic nanotechnology neurons that cannot be programmed, and
    2. Tiny microprocessors simulating neurons that can be programmed.
    3. PS: a third, using the WWW.

    Sam is a research and development project, and researchers will want to be able to improve on the neurons as their research discovers additional things about biological neurons. They will prefer technology 2, microprocessors, but it is conceivable that the microprocessor solution will be too large, too power hungry, or limited in some other way, forcing researchers to use technology 1.
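     

    As a purely illustrative sketch (the class and its parameters are hypothetical), a tech 2 neuron might be a small program whose parameters can be rewritten at run time, which is exactly what fixed tech 1 hardware would not allow:

        # Sketch of a reprogrammable simulated neuron (leaky integrate-and-fire).
        # A tech 1 nanotech neuron would have these constants frozen in hardware.
        class SimulatedNeuron:
            def __init__(self, threshold=1.0, leak=0.9):
                self.threshold = threshold  # firing threshold
                self.leak = leak            # per-step decay of the potential
                self.potential = 0.0

            def step(self, weighted_input):
                """Integrate input; fire (return True) when threshold is crossed."""
                self.potential = self.potential * self.leak + weighted_input
                if self.potential >= self.threshold:
                    self.potential = 0.0    # reset after firing
                    return True
                return False

            def reprogram(self, threshold=None, leak=None):
                """The hook tech 2 offers: rewrite parameters while running."""
                if threshold is not None:
                    self.threshold = threshold
                if leak is not None:
                    self.leak = leak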

     

    If tech 2, then it seems reasonable that sam would learn to reprogram its neurons and thereby increase its intelligence, possibly achieving superintelligence. If tech 1, then the only option for increasing intelligence would be to add neurons to the brain (also possible with tech 2), which is more difficult than reprogramming. Adding neurons might also be obvious to everyone, because the container for sam's brain (its head) would have to get larger.

     

    Since these are future technologies, it is necessary to add a caveat: sam might be able to reprogram tech 1 after all.

     

    There may be no advantage to reprogramming the neurons; in which case, adding neurons would be the only possibility of sam making itself super intelligent.

     

    PS. Sam would almost certainly use cloud resources to increase its capability. It might take over the WWW entirely and leave man without a network, just to make itself smarter.

     

    If sam is built without emotion, it wouldn't want increased intelligence. It might, however, decide it needed more intelligence to solve some problem, although I do not know of such a problem.

  7. To what extent can he identify his emotional states and the emotional states of others? Is this also on par with humans, or is it "quicker," as are his reasoning and problem-solving skills? If he only has the ability to recognize his own emotional states then he won't consider others, but he could also learn to recognize others' emotional states and act on them.

    Including emotions introduces not-so-scary scenarios, too. In both cases, scary and not scary, emotions complicate our thought experiments and complicate the job of designing and building sam.

     

    Some research AI systems today simulate emotions, but AFAIK current production-level AI used for playing games (Go, chess, and Jeopardy), used by businesses (financial transactions), and used by the public (voice recognition) does not include emotions. These systems will improve over time, and new ones with ever greater capabilities will be developed. Thus, it seems reasonable that the first sam will not have emotions; of course, this may be an incorrect assumption.

  8. Why would the law be of Sam's concern? I'd be afraid of clever ways of hiding evidence or preventing his implication in any crime.

    Again, what if Sam is clever enough to avoid being a suspect? Or what if Sam is clever enough to persuade legal figures to aid him?

     

    I think part of the problem in understanding what he would possibly do is how vaguely defined Sam is.

    "Why," yes, agree.

    "Again," yes agree.

    "Vaguely defined," true, we can only imagine by assuming sam's brain is made of artificial neurons; thus, its thinking processes are similar to ours. Consider sam has been built either with or without emotions, and estimate sam's thought processes. We can assume sam's brain was modeled after ours; thus, will think in a similar manner. It's the best we can do ATM. It may be incorrect, because intent to simulate us will not necessarily achieve a reasonable facsimile. In other words, sam may be mentally deranged. In this case, we must hope the developers can turn sam off.

  9. I think men take advantage of women far too often when they have children. Moreover, children are a cultural resource, not just family of the parents and other relatives. A culture that mistreats children will not be healthy and may die. Thus, I believe a parent or parents should receive aid for dependent children that permits a parent to care for them. I think women would take advantage of this aid more than men; that's OK.

  10. Suppose there is a sentient artificial man (sam) who is unemotional and logical, more or less like humans except quicker. Sam might decide to be theist, agnostic, or atheist. Except for religious reasons, sam would have no reason AFAIK to kill people. If his religion were bigoted toward a group, sam might kill them and get away with it, but most people would live. If sam were an agnostic Buddhist, not even worms would be killed. If sam were rational, I can think of no reason anyone would be killed. And other scenarios don't lead to human extinction, AFAIK.

     

    If the inventors gave sam emotions, then who knows what would happen.

  11. If Artificial Intelligence were in existence at this exact moment, there are several possibilities for what would happen.

    1. Classic Terminator problem

    2. Bio-Mechanical equality

    3. Advanced rate of development

    The common use of the term AI is different from yours; IMO you refer to Artificial General Intelligence, so it's just a semantic difference. Although, some people see AI and deny it, which is called the AI effect. I've heard people refer to current AI systems as toys; although, some AI systems do financial transactions for people with big money, so those people probably don't consider AI a toy.

     

    Some have compared current AI to a cockroach and say the cockroach is more intelligent, but there is little to compare when you look at the details of each. Nonetheless, I have to agree; although, the best AI may outperform a cockroach.

  12. Does the absorption and emission spectrum for some element are the same.

    I think I read this question differently from swansont, since he seemed not to answer it the way I read it. I read the question as, "Do different elements sometimes have the same emission spectrum?" IDK.

  13. Is the salt table salt, that is, NaCl?

    Did you use distilled water?

    The red is probably copper oxide.

    The blue might be copper hydroxide.

    Black may be another copper oxide.

    If your water has chlorine in it, you probably have some copper chloride (two kinds, like copper oxide).

    The bubbles are probably hydrogen, which is very flammable, even explosive.

    The water may contain NaOH (lye) if you use chlorinated water.
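     

    Putting those guesses together (and assuming copper electrodes in NaCl solution, which is my assumption), the rough half-reactions would be:

        Cathode (reduction): 2 H2O + 2 e− → H2 (the bubbles) + 2 OH−
        Anode (oxidation):   Cu → Cu2+ + 2 e−
        In solution:         Cu2+ + 2 OH− → Cu(OH)2 (blue), which can decompose to black CuO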

     

    There are some good chemists here who will point out any mistakes or omissions I've made.

  14. Did anyone at least watch the two videos of 4 kilobyte demos I posted? I like going off-topic too... :)

    I watched the second and third. IDK what 4 KB has to do with those demos; one frame of 640x480 monochrome is 307,200 pixels, and with 256 gray scales (1 byte per pixel) that would be 307,200 bytes. Those videos were 1080p color, and a 1920x1080 frame at 3 bytes per pixel is over 6 MB; thus, they needed far more data per frame.

  15.  

    I believe the brain is quite simple in its own very complex way. It only seems complex to us because of our perspective. We attribute characteristics of language to the individual because we must understand it through the only medium we have to think: language. People have even gone so far as to say "I think, therefore I am" when the reality is we must first exist and learn language in order to think at all. Thought doesn't prove one's existence, merely that language exists. Even the lowest life forms know they exist and need no founding principles to avoid predators.

     

    I believe there is a natural integration of senses, mind, and body in animals and this is the operating system of the animal brain. This operating system is also the animal language which they use to communicate within and across species (to a more limited extent). Humans must unlearn this in order to acquire our language.

     

    The problem here is very, very simple; everything we are thinking, all our knowledge, and the very means we use to think are obscured by the perspective generated by language. When we think about sentience, AI, and intelligence we are actually pondering language and its effects rather than consciousness or cleverness. When we try to program or design a computer to be "intelligent" we are actually programming it to manipulate language. It seems very improbable that sentience or consciousness will simply emerge through such an ability.

     

     

    I'm in general agreement, but I still don't like the term "intelligence". I'd prefer to say that most cleverness is specific to both the individual and his experience, as well as to the species. An artist elephant is far less likely to invent new techniques or processes to paint than a human artist, yet more likely than a human cosmologist. But almost any beaver is far more likely to come up with a new means of building a wooden dam with no lumber than any human.

     

    I seriously doubt modeling the human brain is even possible until we can distill the structure from the operating system we call language. I'm not so sure the human brain is even best suited to machine intelligence. It's quite possible that far simpler brains, like insects', would be more suitable. If an insect can navigate a car, then why can't it serve as the formatting for machine intelligence?

    Some good points and some weak ones.

  16. Any camera picks up both emitted and reflected light/infrared (EMR). Better cameras will pick up lower levels of EMR. Whether a camera detects what you want really comes down to how much you want to spend on it. The inexpensive security cameras need infrared light to illuminate the scene.

     

    IDK if this camera will do what you need; its specifications include the following:

     

    Exceptional thermal sensitivity of < 0.02°C at +30°C

    It is necessary to find out how much the temperature of air changes as an acoustic wave passes.
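     

    A rough order-of-magnitude check (assuming an adiabatic sound wave in air, with γ ≈ 1.4, p ≈ 101 kPa, T ≈ 300 K):

        ΔT ≈ T · ((γ − 1)/γ) · (Δp/p) ≈ 300 K × 0.29 × (1 Pa / 101,325 Pa) ≈ 0.0008 K

    So even a loud 94 dB tone (Δp ≈ 1 Pa) would produce a temperature swing well below the camera's 0.02 °C sensitivity.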

  17. My first experience with a computer was an IBM 1620. It was a decimal machine with 60,000 digits of magnetic-core (torus) memory and a 20 microsecond cycle time; its circuits were made with germanium transistors. It had an IBM typewriter and a big card reader for input, and the typewriter and a card punch for output. It also had a disk that sprang a leak once and flooded the computer room floor with hydraulic fluid. It compiled Fortran and was a really good machine to learn on, but odd by today's standards.

  18. HarvardProstateKnowledge.org

    As part of Harvard’s Health Professionals Follow-up Study, 29,342 men between the ages of 46 and 81 reported their average number of ejaculations per month in young adulthood (ages 20–29), in mid-life (ages 40–49), and in the most recent year. Ejaculations included sexual intercourse, nocturnal emissions, and masturbation. Study participants also provided comprehensive health and lifestyle data every two years from 1992 to 2000. The scientists found that men who ejaculated 21 or more times a month enjoyed a 33% lower risk of prostate cancer compared with men who reported four to seven ejaculations a month throughout their lifetimes.

    IDK if this is reliable, but it sounds like having fun is good for you.

  19. IDK; the people who once sold butane, and switched me to propane, said it was for safety, as I said before. I didn't investigate. It was many years ago.

     

    I just looked, and butane is still sold in bulk in Texas. Maybe it was a city ordinance, or maybe the company wanted to sell propane instead of butane.
