Posts posted by NLN

  1. Hi folks. A bit of shameless self-promotion here, but I hope you'll find that it's worth it.

     

     I just finished an interview with Daniel Everett, a linguist at Illinois State University who has attracted a storm of controversy with his theory of language, which contradicts Chomsky's Universal Grammar. The implications are profound for cognitive science, and for defining what makes us human.

     

    Machines Like Us interviews Daniel Everett

     

    I'd also like to take this opportunity to wish everyone a prosperous new year.

     

    All the best,

     

    Norm Nason

  2. In the recent election, California State ballot Proposition 8--eliminating same-sex marriage--posed complex ethical, legal, religious, and scientific questions.

     

     Proposition 8 is the California State ballot proposition that would amend the state Constitution to limit marriage to unions between a man and a woman--overturning a recent California Supreme Court decision that had recognized same-sex marriage in California as a fundamental right.

     

     The official ballot title language for Proposition 8 was, "Eliminates Right of Same-Sex Couples to Marry." On the day after the election, the results remained uncertified. With 100% of precincts reporting, the vote was 52.5% in favor of Proposition 8 and 47.5% against, with a difference of about 504,000 votes; as many as 3 million absentee and provisional ballots remained to be counted. The organizers of the "No on Prop 8" campaign conceded defeat on November 6, issuing a statement saying, "Tuesday’s vote was deeply disappointing to all who believe in equal treatment under the law."

     

     The passage of Proposition 8 means that the legality of same-sex marriage in California has been determined by popular vote--rather than by the state judiciary. This sets an unsettling legal precedent. Setting aside the question of why one group of people should care about the private lives of others--should all matters concerning individual rights be left to popular vote?

     

    Opponents of Proposition 8 argue that it is unethical to deny one minority group a fundamental right held by the majority (marrying a partner of one's choice). Proponents counter that no such discrimination is occurring--that Proposition 8 eliminates the right of all individuals to marry others of the same sex, and does so equally. In their view, the fact that certain individuals wish to marry partners of the same sex--while others do not--is of little consequence.

     

     While many proponents of Proposition 8 cite the Bible as justification for their belief that same-sex marriage is morally 'wrong,' many opponents believe that antiquated Biblical passages have no relevance in today's world, and that all humans deserve equal rights under the law.

     

    How are we to sort this out?

     

    Some forms of religious upbringing and cultural norms have been shown to limit rates of homosexuality (for instance, by teaching that homosexuality is 'sinful'). But are they truly limiting homosexuality, or simply suppressing genetically pre-determined characteristics? If sexual orientation is learned behavior or merely a matter of choice, society arguably has a right to intervene; only voluntary actions and choices can be considered right or wrong.

     

     But if sexual orientation is biologically pre-determined, then it is difficult to make the case for limiting the marriage rights of one subset of humans over any other. There is great diversity among human beings, and preserving and protecting that diversity--the right of individuals to remain unique--is essential for any progressive society.

     

     Homosexual behavior occurs in the animal kingdom, especially in social species--particularly in marine birds and mammals, monkeys, and the great apes. Homosexual behavior has been observed in some 1,500 species, and in 500 of those it is well documented. This constitutes a major argument against those who call into question the biological legitimacy or naturalness of homosexuality, or who regard it as a deliberate social decision. For example, male penguin couples have been documented to mate for life, build nests together, and use a stone as a surrogate egg in nesting and brooding. In a well-publicized story from 2004, the Central Park Zoo in New York replaced one male couple's stone with a fertile egg, which the couple then raised as their own offspring.

     

     The genetic basis of animal homosexuality has been studied in the fly Drosophila melanogaster, where multiple genes have been identified that can cause homosexual courtship and mating. These genes are thought to control behavior through pheromones as well as by altering the structure of the animals' brains. These studies have also investigated the influence of environment on the likelihood of flies displaying homosexual behavior.

     

    Georgetown University professor Janet Mann has specifically theorized that homosexual behavior, at least in dolphins, is an evolutionary advantage that minimizes intraspecies aggression, especially among males. Studies indicating prenatal homosexuality in certain animal species have had social and political implications surrounding the gay rights debate.

     

     Is sexual orientation a matter of choice? Mounting evidence seems to point against it. What becomes clear is that petitions, ballot measures, and preaching from the pulpit will not resolve this complex issue. Only science can determine the outcome, and until the science is in, we would be wise to move slowly and gently, with tolerance and compassion.

     

    Machines Like Us

  3. Was HAL, the computer featured in Stanley Kubrick's film 2001: A Space Odyssey, a sentient being, or merely the product of "brute force" computation?

     

     Since his debut in 1968, HAL has served as the guidepost for artificial intelligence research. More than any other character in fiction, he has represented the enormous potential of the field, and has helped to launch the careers of many an AI researcher. Calm, rational, and eerily human, HAL would certainly pass the Turing test. But was he actually a conscious being -- an emergent byproduct of some robust future algorithm -- awake and aware of his surroundings? Or was he instead a masterpiece of human simulation, produced by the interplay between cleverly designed software and extremely fast -- but conventional -- hardware?

     

     Of course, HAL is imaginary, but his legacy reminds us that achieving the appearance of human-like machine intelligence need not require true sentience. In the film, the scientists are clearly uncertain whether or not HAL is conscious:

     

     Reporter: The sixth member of the Discovery crew was not concerned with the problems of hibernation, for he was the latest result of machine intelligence: the H.A.L. 9000 computer, which can reproduce -- though some experts still prefer to use the word mimic -- most of the activities of the human brain, and with incalculably greater speed and reliability.

     

    HAL, on the other hand, makes a case for his own self-awareness:

     

     HAL: I enjoy working with people. I have a stimulating relationship with Dr. Poole and Dr. Bowman. My mission responsibilities range over the entire operation of the ship, so I am constantly occupied. I am putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do.

     

    Much air play has been given to the glamorous future of artificial intelligence, the dawn of sentient machines. But little attention has gone toward imagining a less glamorous -- but arguably more realistic -- future in which machines might be constructed to appear conscious, without actually being so.

     

     In his book The Age of Spiritual Machines, Ray Kurzweil predicted that by the year 2019 a $1,000 computing device would have the computational ability of the human brain. He further predicted that just ten years later, a $1,000 machine would have the computing capacity of approximately one thousand human brains. Regardless of whether or not you agree with Kurzweil's timetable, one thing is certain: computing "horsepower" has increased dramatically since the first machines were built, and seems likely to increase just as dramatically in the near future.

     

     Let us imagine that it is 2019 and software development has advanced no further than today, while hardware has progressed to the point where it matches the computational ability of the human brain (estimated by Kurzweil to be 20 million billion calculations per second). Even with present-day software, the sheer horsepower of such hardware would make these systems capable of amazing things. Is it possible that problems like computer vision, knowledge representation, machine learning, and natural language processing will be solved by brute-force computation, even if no new software efficiencies are implemented?
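
     As a rough back-of-the-envelope illustration (my own sketch, not a figure from the article), here is the doubling arithmetic behind that scenario; the throughput of a present-day $1,000 machine and the doubling period are both assumed, illustrative inputs:

     ```python
     import math

     BRAIN_CPS = 2e16        # Kurzweil's estimate quoted above: 20 million billion calc/s
     machine_cps = 1e11      # assumed throughput of a ~$1,000 machine today (illustrative)
     doubling_years = 1.5    # assumed Moore's-law-style doubling period (illustrative)

     doublings = math.log2(BRAIN_CPS / machine_cps)
     print(f"{doublings:.1f} doublings, roughly {doublings * doubling_years:.0f} years")
     ```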

     

     Consider the progress made in chess-playing computers. For a long time in the 1970s and 1980s it remained an open question whether any chess program would ever be able to defeat top human players. In 1968, International Master David Levy made a famous bet that no chess computer would be able to beat him within ten years. He won his bet in 1978 by beating Chess 4.7 (the strongest computer at the time), but acknowledged then that it would not be long before he would be surpassed. In 1989, Levy was defeated by the computer Deep Thought in an exhibition match.

     

     Chess algorithms work not by reproducing human cognitive processes, but by examining future moves. They have attained tournament-level playing ability almost exclusively due to dramatic speed increases in their number-crunching hardware. In their book How Computers Play Chess, researchers David Levy and Monty Newborn estimated that doubling the computer's speed gains approximately fifty to seventy Elo* points in playing strength.
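
     To make "examining future moves" concrete, here is a minimal look-ahead search in the negamax style, applied to a toy take-away game rather than chess (my own illustrative sketch -- real engines add alpha-beta pruning, move ordering, and fast position evaluation). It is exactly this kind of exhaustive search that faster hardware amplifies:

     ```python
     def negamax(tokens):
         """Toy game: players alternately take 1-3 tokens; taking the last one wins.
         Return +1 if the side to move can force a win, -1 otherwise."""
         if tokens == 0:
             return -1                       # previous player took the last token; we lose
         best = -1
         for take in (1, 2, 3):              # try every legal move...
             if take <= tokens:
                 best = max(best, -negamax(tokens - take))   # ...assuming best replies
         return best

     print(negamax(10))   # +1: the side to move can force a win from 10 tokens
     ```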

     

    As Nigel Shadbolt of the University of Southampton said: "I believe that massive computing power directed with a light touch toward interesting parts of a problem space can yield remarkable results."

     

    I asked a few AI researchers what they thought about the possibility of brute force computation eventually simulating human intelligence, and here is what they told me:

     

    Steve Grand:

     

    Take the simplest possible method of brute force AI: a straightforward list of the answers to all anticipated questions. You can draw a network of all the possible relationships between objects and verbs, representing the number of possible questions and answers. If the knowledge domain only has one object then there are very few questions that could be asked about it. If there are two objects then you can ask questions about each object, but also about relationships between the two (is lead denser than water?). As you add more objects the number of questions rises as the factorial. Clearly there are more questions that could be asked about a world containing a few dozen objects than there are subatomic particles in the entire universe. So quite quickly you reach the point at which the universe simply isn't big enough to hold a computer that could store the necessary answers. So you obviously have to take a more sophisticated approach.
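
     As a rough numeric companion to Grand's point (my own illustration, using factorial growth as the stand-in he describes and a commonly cited order-of-magnitude estimate of 1e80 particles in the observable universe):

     ```python
     import math

     PARTICLES_IN_UNIVERSE = 1e80   # commonly cited order-of-magnitude estimate

     for n_objects in (10, 20, 40, 60):
         questions = math.factorial(n_objects)   # crude stand-in for the growth in possible questions
         print(n_objects, f"{questions:.2e}", questions > PARTICLES_IN_UNIVERSE)
     ```

     By around sixty objects -- a few dozen -- the table of canned answers already outgrows the particle count, which is Grand's point about why a pure lookup approach cannot scale.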

     

     The most sophisticated approach would be an accurate model of a human brain, configured by real personal experiences of the world. This is clearly capable of passing the Turing test and it scales efficiently, but it's Strong AI. So where is the point between these two extremes at which the results produced by a cheat are sufficiently convincing -- and does this method of representation scale well enough not to require more bits than would fit in a manageable chunk of the universe? My feeling is, it doesn't scale well at all -- there is no substitute for the structure of the brain itself -- the brain is its own best description, and any other valid description contains vastly more bits than a brain (or even a thousand brains).

     

    Ben Goertzel:

     

    I think that faking intelligence in a Turing-test context is almost surely possible, but only using many orders of magnitude more computing power than exists in the human brain. Mathematically, one can prove that it IS possible if one has sufficiently much computing power -- but this theorem doesn't tell you much of any practical use, because the proof doesn't tell you whether the amount of computing power is, say, bigger or smaller than the amount of computing power in the entire universe.

     

    Hugo De Garis:

     

     There's a huge difference between high bit rate computers and high intelligence. A high bit rate is a necessary condition for an intelligent machine, but not sufficient. To be sufficient, the bits in the circuitry have to be connected in brain-like ways, but we don't know how to do that yet. We will probably have to wait until nanotech gives us powerful new tools to investigate the principles of how the brain performs its magic. Then we can put those principles into machines and get the same level of intelligence performing a million times faster, i.e., at light speed compared to chemical speed. My view of the timing is that we won't have real nanotech until the 2020s, then in the 2030s there will be an explosion of knowledge in neuroscience, which we will be putting into brain-like computers in the 2040s.

     

    Steve Lehar:

     

     The Turing test is a pretty strange benchmark from the outset. The idea is to restrict the 'user interface' to something that computers can handle, like a text I/O interface. But the REAL mark of intelligence, human or otherwise, is the ability to walk into the room and sit down in front of the user interface, with the kind of grace and elegance that a cat or lizard or snake can demonstrate, even if they can't figure out how to read the screen or type an input. If we could replicate even THAT degree of intelligence and biomechanical grace, we would be much farther advanced in replicating human intelligence.

     

     I think the Turing test is a very biased and restricted benchmark, designed more to demonstrate the "abilities" of our stupid digital computers than to release the potential of true human or animal intelligence. How about an anti-Turing test, where the creature or machine has to walk into a room, identify where the user interface is located, and sit down in front of it? How long would Kurzweil suppose it would take before we can accomplish THAT in artificial intelligence?

     

    One of the big surprises of the search for artificial intelligence has been the fact that the "white collar" type tasks, such as calculating Boolean logic, solving differential equations, navigating a spaceship to the moon and back, are apparently the "easy" problems of computation, while the more basic "blue collar" tasks of getting dressed in the morning, identifying the wife and kids to communicate the appropriate kisses and nods, and driving the body to work, are actually the REAL frontiers of human intelligence; we have NO IDEA how they are done.

     

    Paul Almond:

     

    What do you mean by a Turing test pass? Do you mean fooling the average person into thinking that they are talking to a human for 5 minutes? 5 days? 5 years? As an example, would you require the machine to reproduce anything like the detailed e-mail exchanges we have had for a long time now? Would you expect the e-mail messages you have sent me to be answerable, to some degree?

     

    I think this is where we can run into problems. Given a prolonged enough exchange, passing the Turing test would probably be as hard as having full consciousness anyway -- because of the scope a person has for catching the computer out -- so I don’t really see a proper Turing test pass as a particularly easy problem.

     

    I think that mimicry of consciousness would imply consciousness, but I don’t think it could be done by brute force. I think it would require cleverness in software design of some kind. This means I do not expect huge processing power, in itself, to deliver a Turing test pass. However, when we get such [super-fast] hardware, a lot of AI research will become easier. Furthermore -- and this is a big point -- lots of AI algorithms that might have been impractical before now become practical, so a lot of speculation can now be tested experimentally. I think it would speed up AI research a great deal, and the start of true AI might emerge not many years after.

     

    There is one exception where brute force could clearly deliver AI. That is, if you had the ability to somehow record the structure of a human brain with sufficient accuracy. You could then get a computer to “run” your image of a human brain and you would have an AI system: you would not know how it worked, of course, and your AI may not thank you for it; it would have the memories and personality of whatever brain you used. You would not know how this AI system worked without some research. It would not even know itself.

     

    In 2001: A Space Odyssey, HAL acted as if he was conscious -- but was he? We'll never know for sure, but if one day brute force computation conquers many of the problems associated with artificial intelligence, the question of machine sentience may be a whole lot easier to answer.

     

     Reporter: Do you believe HAL has genuine emotions?

     

     Dave Bowman: Well, he acts like he has genuine emotions. Of course, he was programmed that way to make it easier for us to talk with him. But as to whether or not he has real feelings is something I don't think anyone can truthfully answer.

     

    __________

     *The Elo rating system is a method for calculating the relative skill levels of players in two-player games such as chess and Go.
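
     For the curious, here is a minimal sketch of the standard Elo formulas (the usual textbook equations, not something taken from Levy and Newborn's book): the expected score is a logistic function of the rating difference, and a rating is nudged toward the observed result. It also shows what a fifty-to-seventy point edge is worth -- roughly a 60% expected score:

     ```python
     def expected_score(rating_a, rating_b):
         """Expected score for player A against player B (1 = certain win)."""
         return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

     def update(rating, expected, actual, k=20):
         """Move the rating toward the actual result (1 win, 0.5 draw, 0 loss)."""
         return rating + k * (actual - expected)

     print(round(expected_score(2470, 2400), 2))   # a 70-point edge: about 0.60
     ```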

     

    Machines Like Us

     I just finished reading a remarkable biography of Albert Einstein, and want to recommend it. It's called Einstein: His Life and Universe. It's the first biography to tackle Einstein's enormous volume of personal correspondence, which until recently had been sealed from the public. It's hard to imagine another book that could do equal justice to Einstein's richly textured and complicated life. It's really a wonderful read, and tells us not only about Einstein's science, but about his personal life as well.

     

    Machines Like Us

  5. Here's why I don't think it will be possible for a human to ever travel either forward or backward in time: both time and motion would prevent it.

     

    Why time itself is the problem:

     

     Let's say we have built a time machine. A human steps into it, expecting to be transported to another time. But because he has a physical body composed of trillions of spatially separate and distinct molecules, each molecule would have to be simultaneously transported and reassembled at the other end. It would all have to occur at exactly the same instant, because if it did not, one part of the body would dematerialize / materialize while another part had not yet done so -- essentially tearing the human to pieces. Since it has been shown that there is no such thing as an exact, static instant in time, the very premise is impossible.

     

    Why motion is the problem:

     

    Think of the Earth spinning on its axis; the Earth revolving around the sun; our entire solar system contained in one of the arms of the Milky Way, revolving around the galactic center.

     

    If we were to transport a human back in time, where would we send him? The present day Earth is in an entirely different location than it was in the past, or will be in the future. In fact, it is now in an entirely different location than when I began this sentence! In order to "land" a body in the past, for instance, one would have to know the exact coordinates of every molecule of that body -- both in the present and in the designated "landing site" of the past.

     

     If you began to transport your body to the entrance of Central Park as it was exactly 50 years ago, a second later the Earth would have rotated on its axis by hundreds of meters at the surface -- not to mention the movement of the entire solar system, etc.
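
     Some rough numbers behind that point (my own back-of-the-envelope figures, not from the original post), showing how far a point on Earth travels in one second, from rotation alone up through the Sun's orbit around the galactic center:

     ```python
     import math

     EARTH_RADIUS_M = 6.371e6          # mean radius of the Earth, meters
     SIDEREAL_DAY_S = 86164            # one rotation of the Earth, seconds

     rotation = 2 * math.pi * EARTH_RADIUS_M / SIDEREAL_DAY_S   # ~465 m/s at the equator
     orbit = 29_800                    # Earth around the Sun, m/s (approximate)
     galaxy = 230_000                  # Solar System around the galactic center, m/s (approximate)

     for label, speed in [("rotation (equator)", rotation),
                          ("orbit around the Sun", orbit),
                          ("orbit around the galaxy", galaxy)]:
         print(f"{label}: about {speed:,.0f} meters in one second")
     ```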

     

    You see my point? There are no precise instants of time or space from which (and to which) we may instantaneously move. Hence, the entire process is impossible.

     

    Machines Like Us

  6. When I was in the sixth grade (longer ago than I care to admit), the elementary school I attended administered a program that has benefited me ever since. Not long before graduating, a single week was set aside to prepare departing students for their move up the ladder of higher education. For five days we no longer attended a single classroom, but rather six, as would later be the case in Junior High, High School and College. Separate teachers instructed us on a variety of topics: Music, History, Art, Science, Literature, and, my favorite—Critical Thinking. This class was taught by Mr. Anderson, a teacher unfamiliar to me at the time.

     

    Although the specifics of Mr. Anderson's instruction escape me, the essence of it was this: Think about what you are saying and doing. When observing the world, make critical, logical deductions. Try to figure things out for yourself, rather than believing everything you hear out of hand. Insist upon getting the facts, and learn to recognize them; find proof. Say precisely what you mean; do exactly what you resolve. Demonstrate conviction in your thoughts and actions. Be decisive. Use your noodle!

     

    Amazing stuff, coming from an elementary school teacher; don't you agree?

     

     Throughout my life I have often thought about what Mr. Anderson said, and have tried my best to apply it. I've observed that many of man's false beliefs could have been avoided if only the majority practiced critical thinking. Take, for example, the once widely held belief that the Earth was flat. A critical thinker might look up into the sky, see the sun and moon, and from this deduce that the Earth was not flat at all, but round. Why? The moon and sun are round; perhaps it is more logical to conclude that the Earth is similar, rather than different. True, you might say, but how do we know they are not simply flat disks, like coins, rather than spheres? The answer: when we observe the phases of the moon, a shadow moves nightly across its face, having specific visual characteristics. When we try to duplicate these characteristics experimentally, we find that the only way they may be replicated is on the surface of a sphere. All right, you might admit, the Earth could be round, but how shall we prove it? Perhaps in the manner that the young Columbus is reported to have done: by observing the masts of departing sailing vessels sinking ever lower on the horizon. Or, we might do as a Greek philosopher did around 250 B.C.:

     

     Eratosthenes was told that on a certain day during the summer (June 21) in a town called Syene, which was 4900 stadia (1 stadion ≈ 0.16 kilometers) to the south of Alexandria, the sunlight shone directly down the well shafts so that you could see all the way to the bottom. Eratosthenes knew that the sun was never quite high enough in the sky to see the bottom of wells in Alexandria, and he was able to calculate that in fact it was about 7 degrees too low. Knowing that the sun was 7 degrees lower at its highest point in Alexandria than in Syene, and assuming that the sun's rays were parallel when they hit the Earth, Eratosthenes was able to calculate the circumference of the Earth using a simple proportion: C / 4900 stadia = 360 degrees / 7 degrees. This gives an answer of 252,000 stadia, or 40,320 km, which is very close to the modern value of roughly 40,000 km.
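
     The same proportion, spelled out numerically with the figures from the paragraph above (a trivial sketch, just to show the arithmetic):

     ```python
     STADION_KM = 0.16          # approximate length of one stadion, from the passage above
     distance_stadia = 4900     # Alexandria to Syene
     angle_degrees = 7          # how far the noon sun fell short of the zenith at Alexandria

     circumference_stadia = distance_stadia * 360 / angle_degrees
     print(circumference_stadia)                  # 252000.0 stadia
     print(circumference_stadia * STADION_KM)     # 40320.0 km
     ```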

     

    You get the idea. Critical thinking has solved many of the world's mysteries—perhaps most—and is the spearhead of human progress. Continental drift, implied by a map of the world; a case for mass extinctions (and even human evolution), deduced from observing the craters of the moon; clues coming to light about brain function by noting the time lag between reaction to stimulus and conscious awareness...all are examples of critical thinking at work.

     

    Mr. Anderson, wherever you are, I thank you.

     

    Machines Like Us

  7. In this interview, molecular biologist Johnjoe McFadden discusses human cognition, synthetic life, and artificial intelligence. An excerpt:

     

    "The basic problem is that our subjective experience of consciousness does not correspond to the neurophysiology of our brain. When we see an object, such as a tree, the image that is received by our eyes is processed, in parallel, in millions of widely separated brain neurons. Some neurons process the colour information, some process aspects of movement, some process texture elements of the image. But there is nowhere in the brain where all these disparate elements are brought together. That doesn’t correspond to the subjective experience of seeing a whole tree where all the leaves and swaying branches are seen as an integrated whole. The problem is understanding how all the physically distinct information in our brain is somehow bound together to the subjective image: the binding problem."

     The universe’s clock has neither a start nor a finish, yet time is finite -- according to a New Zealand theorist. The theory, which tackles the age-old mystery of the origin of the universe, along with several other problems and paradoxes in cosmology, calls for a new take on our concept of time -- one that has more in common with the “cyclic” views of time held by ancient thinkers such as Plato, Aristotle and Leonardo da Vinci than with the Christian calendar and Bible-influenced belief in “linear” time now so deeply embedded in modern western thinking.

  9. A new interview with evolutionist/atheist Mano Singham can be found here. To quote him:

     

    "Once you concede the idea of a god, you have ceased to think rationally in that area of your life, and are prey to those who preach extreme forms of religion. Of course, most people do not go so far, but that is because most people are not really that religious, though they say and act like they are. In the TV show House, someone asks the title character whether he is an atheist and he replies "Only on Christmas and Easter. The rest of the time it doesn't seem to matter." I think he is right. Most people are just nominally religious and unlikely to go off the deep end. It is the deeply religious who can be persuaded to do appalling things in the name of god because it is only they who will let their humane and ethical and common senses be overridden by the idea that god wants them to commit specific acts."

     Ben Goertzel is CEO of Novamente, a software company racing to develop the first artificially intelligent agent for Second Life, the internet-based virtual environment. His creation will learn by interacting with Second Life participants, and Ben is confident that it will meet -- and eventually exceed -- human-level intelligence.

     

    In this new Machines Like Us interview, Ben discusses his projects in detail.

     In this interview, cognitive scientist Steven Lehar argues that the world we see is actually a sort of simulation, represented in our brains. It's weird stuff, but his arguments are quite compelling. Highly recommended reading. I went on to read several of his papers as well. They can be quite technical, but I must say that he may be onto something.

  12. British artificial intelligence researcher Paul Almond has a new on-line interview available, in which he says:

     

    "We had optimistic expectations about when true intelligence or sentience would be achieved in artificial devices, but I think that it is possible. Intelligent machines already exist -- ourselves. The fact that matter can naturally come together to make things like humans that think shows that the process can be replicated. Of course, people argue against this. Some people say we have some kind of “immaterial” or “supernatural” soul. I think that is an incoherent concept. Roger Penrose and John Searle both argue against artificial intelligence using computers in different ways -- and I think they are both wrong."

     

    He also discusses Asimov's "3 laws" at length, among other interesting things. The full interview may be viewed here.

     I'm a Mac user and have used Safari since its inception; it has always been my browser of choice. If it functions the same on a PC, I think PC users are really going to like it. Safari has a nice look, is quite speedy, and on a Mac shares bookmarks with the Apple address book application (handy!).

  14. People create beings that are more intelligent than they are all the time. Many mentally retarded couples, for instance, can foster extremely intelligent, superior offspring. And humans also have the advantage of their numbers: no one person is capable of building a modern computer or automobile from scratch, but with many people working on the problem, it can be achieved easily.

     

     So it will be with artificial intelligence. We don't have to know how to "program" an AI; only how to build an "infant" AI -- then help it to learn and develop as we do with human children. This is the approach Steve Grand and many other AI researchers are taking.

  15. Today I read an article saying that scientists have created the world's first human-sheep chimera—which has the body of a sheep and half-human organs. They're working on being able to grow most or all human organs in animals—so they can later be transplanted into humans who need them.

     

     Eventually, they hope to precisely match a sheep to a transplant patient, using the patient's own stem cells to create his or her own flock of sheep. The process would involve extracting stem cells from the donor's bone marrow and injecting them into the peritoneum of a sheep's fetus. When the lamb is born two months later, it would have a liver, heart, lungs and brain that are partly human and available for transplant.

     

    Although this would be wonderful for humans needing new organs, it does raise some ethical questions. We do grow sheep (and other livestock) for food—but for organ use?

     There are two types of AI, and the media does not generally distinguish between them: soft AI and hard AI. Soft AI includes most of the AI you see out there: systems that are designed to mimic certain kinds of human behavior. Robots like Asimo fall into this category, as do robot vacuum cleaners, internet search engines, speech synthesizers and voice recognition systems, computer vision systems, neural nets and data miners.

     

    Hard AI research, on the other hand, strives to actually create systems that can think—first, as an insect or an animal might—and later as a human does. Some of these researchers are trying to engineer intelligence from scratch, while others are attempting to understand and model the human brain, and reproduce it artificially. For these individuals, building a sentient machine means producing a system that experiences the world as an infant human would—sensing and interacting with its environment—learning from trial and error and "growing up" over time. Since this goal is far more complex and will take longer to achieve, it is not as well funded as soft AI, and only the smartest and truly dedicated individuals are working on it.

     

     I have made it my goal to seek out the people who are working on hard AI, and learn as much as I can about what they are doing. For those of you who are truly serious about the subject, I recommend the following articles:

     

    Machines Like Us

     

    Embodied Cognition

     

    Saving Machines From Themselves: The Ethics of Deep Self-Modification
