Everything posted by Kyrisch

  1. They are actually circular. If the earth were not in the way, the rainbow would appear as a giant ring around your conical zone of vision, at the constant angle that allows the light to break up into its constituent colours.
  2. Probably a typo on his part, so just for clarification.
  3. "This hardly seems like a well thought-out strategy." That's the first clue that it's precisely what she had in mind.
  4. It seems to me that iGod works the same way Smarterchild does. It has standard responses for certain phrases, sentence types, et cetera, but only answers line-by-line and forgets context. I wasn't very impressed. Akinator is interesting, though. I have one of those 20-questions balls at home that's pretty good if you pick something normal, but fails for the really esoteric stuff. I always used to think that if I could input my answer at the end, it would work really well. That's a bit more related, even though it isn't a chatbot, because it has a user-manipulated, growing database.
  5. As many here already know, the Turing Test is a test for the strength of an artificial intelligence, proposed by the computer scientist Alan Turing. It consists of a human and a computer interacting through text (much like modern-day instant messaging). If the human cannot tell that the computer is a computer and mistakes it for a real human being, the computer has passed the Turing Test.

     Now, many have tried (and failed) to come up with so-called "chatbots". I remember back when AOL first came out, "Smarterchild" was very popular. The problem with Smarterchild, however, was that it had simple, stored responses for a given category of questions and statements. As such, it would never add anything to the conversation, its responses were often only loosely relevant, and it did not 'remember' any more than the last line of conversation, responding to that directly and then terminating. It was a very dull program.

     Cleverbot is different. Cleverbot is a chatbot that learns appropriate context from the users it interacts with. When presented with a statement or a question, it searches its database to see whether it has ever posed something similar to a user, and then spits out what that user responded with. This simple process has created a surprisingly dynamic (though often wacky) conversationalist. After a few days of interaction, I can see why an earlier prototype of Cleverbot (called George, I think) won the Bronze Medal for Most Convincing Human Interaction (no one has won the silver or gold). However, there are some very interesting "patterns of behaviour" I've noticed that have made me wonder whether or not the Turing Test is a good test for artificial intelligence after all.

     Among other things, Cleverbot appears extremely clever. I have had experiences where it has responded appropriately to sarcasm, to emoticons, and to other nuances that one would not ordinarily expect something of that sort to pick up. However, it is just an illusion. The bot is not actually comprehending the words or understanding the tone; it is merely cross-checking its vast database for proper context. It's matching, at the very most. I liken it to Linnaeus, the father of the "tree of life", who first put together a nested hierarchy merely through morphological comparison, with zero regard to genetic similarity or evolutionary history, and yet was superbly successful in producing a tree very similar to the one held now by scientific consensus. But did Linnaeus himself actually understand any biology, i.e. processes, mechanisms, evolutionary advantages? Patently not. And it is just as obvious that Cleverbot doesn't actually "understand" English. Or think at all.

     This odd illusion of intelligence has some strange quirks which expose what is really going on, which I will show you with real examples:

     [conversation screenshot not preserved]

     This is particularly funny because it becomes very obvious why it 'thinks' it's human after a little bit of pondering. Imagine you're a user. If the bot asks you whether you're human, you will probably respond with a statement of the obviousness of the answer, and the ludicrousness of the question. Further, every single user it encounters tells it that it is a robot. Since it learns contextual appropriateness from the users, it follows that its conversational "stance" will always be "Of course I am human, and you are the bot." However, it will never learn that it's not human and that the users are not bots. In fact, it will never learn anything, because it does not think.

     And yet the illusion that it believes it is human is so strong, because one can argue with it for hours, and it comes up with myriad, splendid arguments that sound just like they came from a real person's mouth (because they did). However, in terms of learning context, it is easily 'teachable':

     [conversation screenshot not preserved]

     The reason I started with such a random statement was to prompt a random statement from the bot. I needed it to say something arcane, because the more random a statement is, the less likely it is to already have contextually appropriate responses associated with it. I then made a random statement myself, waited an exchange, and then repeated the arcane statement it had made earlier. Because it 'learned' that flax is contextually associated with 1 million whatsits, and nothing else was already associated, that's what it spit out. Even more amusingly, you can perpetuate the loop, because all you have to say is its own response to your previous 'taught' statement, and the three statements become circularly, contextually linked. However, this has only worked a few times (and did not work just now, which is why I don't have an example), because once you have three statements involved, there is a little more randomness in the processes which dictate the bot's responses. (A toy sketch of this learn-from-users lookup follows this post.)

     So, comments? Is Cleverbot an accomplishment? Will it ever pass the Turing Test? Does it show that the Turing Test does not actually test "intelligence"?
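A minimal sketch of the lookup-and-replay mechanism the post describes (purely illustrative; Cleverbot's actual implementation is proprietary, and the class name, exact-match rule, and fallback line here are all invented):

```python
import random
from collections import defaultdict

class NaiveContextBot:
    """Toy chatbot in the spirit described above: it never generates
    language, it only replays what past users said in reply to a
    similar prompt. Exact-string matching stands in for whatever
    fuzzier similarity measure the real bot uses."""

    def __init__(self):
        # prompt -> every reply any user has ever given to that prompt
        self.responses = defaultdict(list)
        self.last_utterance = None  # the bot's previous line

    def reply(self, user_says: str) -> str:
        # Learn: treat the user's line as an appropriate response
        # to whatever the bot said last.
        if self.last_utterance is not None:
            self.responses[self.last_utterance].append(user_says)
        # Respond: replay a past user's reply to this exact prompt,
        # or fall back to a canned line if the prompt is new.
        candidates = self.responses.get(user_says)
        self.last_utterance = (random.choice(candidates) if candidates
                               else "Tell me more.")
        return self.last_utterance

bot = NaiveContextBot()
# Every user insists they are the human, so the bot "learns" to insist too:
bot.last_utterance = "Are you a robot?"
bot.reply("Of course not, I'm human. You're the robot!")
print(bot.reply("Are you a robot?"))
# -> "Of course not, I'm human. You're the robot!"
```

The same structure accounts for the 'teachable' loop in the post: once the arcane prompt has exactly one stored reply, the bot has no choice but to replay it.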
  6. Thanks, of course, for the praise, but also for the criticism. This has been the most thorough and by far the best assessment of this 'pet project' of mine that I have received. It seems like internal consistency is the main problem. But if I didn't 'sound American', that should be within the realm of the fixable, since it's obvious I can make the correct sounds; I just need to refine them a bit. So, at this point, I'm wondering if anyone would be cool enough to supply a file of their own accent, with a description of where they are from, reading some sample text, so I can hear the differences across dialects and attempt to compartmentalize the different variations? I googled "sample reading texts" and this is what came up. It should work well enough. Thank you very much.
  7. This is what I found with a few seconds of google searching. Seems pretty solid, geology-wise, and rather in-depth without going too far (it's a three-page paper).
  8. Found it... http://www.news14.com/content/local_news/triangle/611427/raleigh--sewer-creature--surprises-city-officials/Default.aspx
  9. I think I get it -- each row across comes to, consecutively, 24, ?, 24, 21. But that's not really a puzzle; it's an easily solvable system of equations in four variables. In fact, since there are only four variables, it could be solved using just the values of the columns, ignoring the rows altogether. (A sketch of solving such a system follows below.)
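The original grid isn't reproduced above, so the coefficient matrix and totals below are invented, but this is the kind of four-unknown linear system the post describes, solved directly:

```python
import numpy as np

# Hypothetical setup: four symbols with unknown values w, x, y, z.
# A[i][j] counts how many times symbol j appears in column i of the grid.
A = np.array([
    [2, 1, 1, 0],
    [1, 2, 0, 1],
    [0, 1, 2, 1],
    [1, 0, 1, 2],
])
column_totals = np.array([24, 22, 24, 21])  # invented totals

# Four independent equations, four unknowns: one call solves it.
w, x, y, z = np.linalg.solve(A, column_totals)
print(w, x, y, z)
```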
  10. It depends on what you're doing with it and how many significant figures you need on the weight. For instance, if you're doing something like dissolving the powdered chemical in some solvent, but you only have the volume of the solvent to one significant figure, it's not necessary to measure to the hundredth of a gram (3 sig-figs), because any calculations will only be significant to one digit. (A worked example follows below.)
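A made-up worked example of that point (all numbers invented):

```python
# The balance reads to four figures, but the solvent volume is only
# known to one significant figure, so the result is only good to one.
mass_g = 1.2345
volume_ml = 50.0  # ~1 sig fig

concentration = mass_g / volume_ml   # calculator says 0.02469 g/mL
print(f"{concentration:.1g} g/mL")   # report 0.02 g/mL: 1 sig fig
```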
  11. Have I set up the question so there is no answer? We've answered it well enough regarding so many other things that I didn't think it would be such a problem. Something I realized recently is that being "kind" to people may be rationally permitted by the fact that much of society behaves like the prisoner's dilemma. If we all act selfishly, everyone loses; if we all act selflessly, everyone wins. And this covers not just cases where you get direct benefits from being altruistic: simply upholding the principle of the golden rule sets a standard under which everyone gains in the end. So, in a way, upholding the principle in general yields the greatest benefit, rather than taking things case by case (as rationality often demands). This position, in game-theory lingo, is called superrationality (sketched below). This, however, does not extend into the animal realm, and I still see no rational defence of, say, helping a beached whale.
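To make the prisoner's-dilemma point concrete (the payoff numbers are invented, but the structure is the standard one):

```python
# (row player's payoff, column player's payoff); higher is better.
payoff = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

# Classical rationality: defecting dominates (5 > 3 and 1 > 0), so both
# players defect and each walks away with 1. A superrational player
# assumes the other reasons identically, so only the symmetric outcomes
# are on the table; mutual cooperation (3) beats mutual defection (1).
for move in ("cooperate", "defect"):
    print(f"both {move}: {payoff[(move, move)]}")
```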
  12. And religious belief is the same thing, but we've quelled that with our intelligence. So, my question is, where is the intelligent motivation to do such things?
  13. You still must defend the process of the cost/benefit analysis. In a recent thread, called "Science-based Morality" or something to that effect, you were all for the idea, and that concept is what I'm trying to apply here. Our scientific knowledge should inform our decisions on how to treat others, not our knee-jerk emotional responses. So, for instance, valid situations:

     - Saving a wild animal whose extinction would disrupt the ecosystem and wreak untold horrors on our agriculture.
     - Being kind to your family members so your family life is not a living hell.

     See how, for each of them, the reasoning behind it is delineated? Where's the reasoning behind giving a hobo spare change? Behind keeping a beached whale alive for hours on the sand? I know it's not directly related, but it is a good analogy. The answer to "Why be superstitious?" is actually quite easy: forge only the causal connections between events that we can show are valid, i.e. follow the scientific method. So, in short, we should be superstitious where it ultimately helps us, and should not waste resources where it doesn't. Now, why is it so hard to transfer that simple concept across to morality? Everyone who says "there's no reason not to save the dying whale" is committing the same fallacy as religious people do when they say things like "there is no evidence against God." That's not how science works, and that's not how intelligence works. That reasoning is absolutely irrational, and I know you know it is. The video in the other thread (which is a lot more related to this than I realized; maybe I'll dredge up the link) pointed out how some systems of ethics can be flat-out wrong (wearing burqas, for example). I think avoiding swatting flies (which is on the same level as crying over a squashed woodlouse, or the principles of our odd resident a while back, "Green Xenon", whose skin was basically one giant bacterial culture) is patently irrational.
  14. I agree. What we're discussing is the nitty-gritty. What qualities determine whether or not something is 'worthy' of empathy? And this decision must be made rationally, so we fall back on the original question: Why be kind? Or, more relevantly, to what or whom should we be kind (if anything/anyone)?

     I understand, but you of all people would understand how it is our job to go against our better feelings. In all the threads treading on the heels of superstition, you speak strongly against it. This is the same thing. Even though being superstitious was evolutionarily beneficial, even without provisions for excess, our better knowledge provides those provisions for us. Asking "why are people superstitious" and "why should we be superstitious" are two different things. You answered the former, but I am trying to apply rational thought to why we ought to. For instance, saying "I don't see any reason not to avoid causing harm to animals" is comparable to "I don't see any reason not to avoid stepping on the cracks in the pavement". Neither of these has any answer beyond "because there's no reason to do so."
  15. Thank you all for the input. I apologize if I was mumbling or quiet; it was late at night and I didn't want to wake anyone up.
  16. Short response: intelligence trumps emotion (or should). For instance, recently on Facebook someone created a poll on abortion, asking "do you think abortion should be legal?" and including a gruesome picture of a bloody fetus. Pro-choice is not pro-abortion, and even though everyone's stomach turns when they see that picture (just as everyone has the tendency to empathize with animals and fellow people), modern science has shown (many of) our intelligences that abortion is a necessary evil. In the same way, my argument is not that we should go out of our way to slaughter animals and people alike, but rather that we should not go out of our way to treat them well, and likewise not go out of our way to avoid treating them poorly.

     But then you start sliding down the slippery slope. If it's irrational to care about the "feelings" of a fly, when does it become rational to care about the feelings of anything?
  17. Why should it matter? Asking "why not be kind?" when you haven't shown a reason to be kind is like asking "why not avoid all the cracks on the sidewalk?" The answer is that it's dreadfully inefficient. While a lot of the KFC scandals are unnecessary cruelty perpetrated by sadists, vegans often argue that the contraptions to which cows are hooked to collect milk most efficiently are horribly uncomfortable and unethical. How about the veal that is raised in a box so as to keep the meat tender? I understand why we feel the empathy we do. It's similar to our sweet tooth, which evolved without provisions for excess. But overapplication leads to such ludicrousness as people being angry at someone for swatting a fly. And if we deem that ludicrous, where can we draw the line? The original reason for kindness is that altruistic behaviour ultimately helped ourselves (or the spreading of our genes). But, like fake sugar, our task as intelligent beings is to reason through our primitive emotions and attempt to act rationally.
  18. Bump because I thought this question would garner more discussion, and the above did not really address the issue, but rather sort of conceded to the same slippery slope argument I presented.
  19. Oh... I'm sorry. You didn't have a viewable location and I just assumed. That's not good, then... Any idea as to what makes it sound Australian?
  20. ...Doesn't count if you're American because, unfortunately, a lot of Americans would say the same; it seems nearly every dialect except the one the Queen uses sounds 'Australian' to American ears. This is why I'm interested to hear from an actual Brit.
  21. So here's a zipped MP3. I decided to read a short news story from the BBC, and because I was reading and not speaking freely, I think it ended up sounding a lot more "posh", more like "received pronunciation", than I usually sound. But other than that, I think it's a decent presentation of my best attempt at imitation. It also sounds distinctly Australian to me at one point, which is bad. Oh well. british accent.mp3.zip
  22. House does an amazing job with his accent. I was very, very surprised when I first learned that he was British. As for the state, again, I think there is much less dialect density in the States than there is in the UK, so it would be hard to say. He does sound pretty "generic", which means Obama-ish or "East Coast", I guess. Also, I'm going to make that audio file when I can. insane alien's idea to zip it and attach it was what I had originally planned, but I wasn't sure if people were comfortable downloading random files. And again, any suggestions for what I should read?
  23. Actually, some medical professionals have publicly stated their belief that the singer had body dysmorphic disorder, a psychological condition in which the sufferer has a severely distorted perception of their own appearance, which could explain his near-dozen completely transforming (and increasingly unsettling) plastic surgeries.
  24. This is a question I've never really resolved in my own mind, and I was hoping to get some external commentary. I was reminded of the issue yesterday when a friend and I were talking about PETA's reaction to the Obama fly-swatting video. Apparently they were upset at the actions of the president for unnecessarily ending the life of the poor fly. This is the stupidest thing I have ever heard; there is no such thing as cruelty to animals which cannot feel. Further, our conception of what is "pain" is only comparable to what other conscious animals feel. This is because the pain we feel is an experience, and experiences are only meaningful to things with consciousnesses.

     However, even animals with consciousnesses -- why should we care whether or not they are unhappy? In principle, there is something that compels people to say "we should", but that's just misapplied, knee-jerk, programmed empathy whose purpose was to enable intimate social interaction, not to 'protect the rights of animals'. And, as evidenced by the number of non-vegetarians in the country, people (even people who are fully aware of the horror of slaughterhouses) simply don't feel the compulsion not to eat meat. (Or said compulsion is not strong enough to make them not want to eat meat.) So there is some cognitive dissonance between what people say is right and what people actually do (or care enough to think about).

     But I have actually thought about it. And while I feel that same empathy towards inflicting pain on conscious creatures, I simply cannot put a finger on any truly objective, rational argument for why we should care. At this point, I was somewhat disheartened, but not completely. I am not, of course, a vegetarian, so it rather validated a position I was holding by default in the first place. But then I realized: if there is no reason to care about animal happiness, why should we care about fellow human happiness (where, of course, it does not affect our own)? I struggled with this for quite some time, and I could not come up with a cohesive answer. It's a horrible prospect to conceive, but I see no solid line of reasoning against it.