Posts posted by Bernd.Brincken

  1. No disagreement, or 'no dichotomy', as my friend Petr would say.

    You can describe intelligence in this way, but you will find other, much tighter definitions, for example as the basis of intelligence tests.

    But this is a question completely separate from the title of this thread; and IMHO fruitless to follow.

  2. Why would you expect any more insight if you "get into its head"?
    Is current medicine not able to get into people's heads? And what have they learned?

    And IMHO, an exact definition of 'intelligence' is not necessary. Look at the AGI description:
    "... understand or learn any intellectual task that a human being can."
    This can be verified or falsified, with a good chance for consensus, without a discourse about this term.

  3. It is also not in the scope of this thread 😉 - which is about Strong AI, or Artificial General Intelligence (AGI).
    Wikipedia describes it, following other sources, as "the hypothetical intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can."
    This is much more than "computational problems", because humans solve many problems that are not computational, especially those involving interactions between humans.
    But if AGI might (only) be achieved by "coming alive", that's a separate discussion.
    Just "evolving out of resources" is not neccessarily life, it just means that a system was not created intentionally.

    BTW, I added the "hypothetical" to the WP article, plus two sources, in the course of this discussion.

  4. The scenario of AI evolution was a concession that the 'creators' need not understand and design an AI in order to .. let it happen.

    If we humans 'jump over that stage' in order to reach the aim in a less 'slow and resource-intensive' way - we'd have to understand the conditions, techniques and paths.
    And then we encounter precisely the obstacles that were discussed before in this thread.

  5. taeto, AFAIK the idea is that AI (-life?) evolves amid a soup of resources, materials, energy and patterns, which may have been supplied by humans.
    Creation has an aspect of intention - which need not be the case here.

    Given this scenario, I can not understand why a 'lower' AI lifeform would not manifest before a 'higher' one.
    So one of these 'lower' - not-yet-strong - AIs would be the first evolved AI species that we humans see.
    Where is it?

  6. 21 hours ago, taeto said:

     The term "species" .. there is a common use of the term in biology, is that the intended one?

     Would it be important in some context to understand the statement "If a human researcher seeks to understand the behaviour of ants, they have to interact with ants in their interaction patterns"?

    Yes, I meant 'species' in biological terms, like Blattella germanica, see one of my former posts.

    About AI species vs. human researcher - there are some slight differences:

    • The AI species still has to learn everything about its environment, all natural phenomena, its own survival 
      - and finally about human beings (to make the step from AI to AGI).
    • The human researcher species went through these processes already, over thousands of years, or millions if you count its ancestor species.

    So, no, the human researcher need not interact with ants to understand them enough to be able to, for example, program (important parts of) their behaviour in software.
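
    For illustration, here is a minimal, entirely hypothetical sketch of what "programming (important parts of) their behaviour in software" could look like - a toy foraging rule on a grid; all names, thresholds and numbers are arbitrary illustrative assumptions, not a validated model of real ants:

        import random

        def ant_step(position, pheromone_map, home=(0, 0), carrying_food=False):
            # One grid step of a toy "ant": head back to the nest when carrying food,
            # otherwise follow the strongest pheromone trail with some random exploration.
            x, y = position
            neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
            if carrying_food:
                return min(neighbours, key=lambda p: abs(p[0] - home[0]) + abs(p[1] - home[1]))
            if random.random() < 0.2:
                return random.choice(neighbours)
            return max(neighbours, key=lambda p: pheromone_map.get(p, 0.0))

    For example, ant_step((3, 2), {(3, 3): 0.7}) returns (3, 3) most of the time. Whether such rules capture "enough" of real ant behaviour is, of course, exactly the question of this thread.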

  7. We were discussing the probability of strong AI, and my main argument was:

    If an AI species seeks to understand human behaviour, it has to interact with humans in their interaction patterns.

    This is important for anybody interested in the topic of this thread. Note it.
    If the topic does not interest you, you will surely find better entertainment elsewhere.

     

    On 2/29/2020 at 8:47 PM, Strange said:

    You could design a system that learns its instinctive behaviour in the same way

    Yes, but then you would have to build the whole environment around it that made humans learn their instincts.
    Reminds me of the '42' chapter in Douglas Adams' Hitchhiker's Guide to the Galaxy.

  8. Strange, if an AI species seeks to understand human behaviour, it can (likely) not read the signals of human dendrites directly or drink their brain fluid to gather this understanding. It (likely) has to interact with humans in their interaction patterns. How long do you typically know a person before you trust them and talk about your feelings, fears, dreams etc. - days? Or more like months or years?
    So, IMHO, this process can not be significantly accelerated by technology.

    Instincts not transmitted genetically - hermeneutics tells me that it is impossible to prove that something does not exist.
    But this looks like the dominant view among biologists. Like:
    "Accordingly [to Hailman], instincts are not preprogrammed, hardwired, or genetically determined; rather, they emerge each generation through a complex cascade of physical and biological influences"

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5182125/

  9. 22 hours ago, Strange said:

    Do you have anything to say other than "humans are special therefore AI is impossible"?

    I did not say that, neither in these nor in other words.
    And by the way, I also do not support it - see my proposal to choose an animal species as a benchmark for AI progress.
    In principle, an AI species could learn everything that humans did, but it would need a similar amount of time; so purely in economic terms it is not probable.
    It seems you want to hear this simplification because it is easy to argue against.

    About instincts, the theory that they are genetically based is not supported by genetics - AFAIK no instinct has been identified in any genome to date. Always willing to learn.

    Basically, let us come back to the title of this thread - Evidence for Strong AI
    No evidence has been presented so far - right?

  10. Sorry for the delay, I was ordered to do empirical social research at the Cologne carnival.
     

    On 2/20/2020 at 4:43 PM, dimreepr said:

    Are you sure you understand... humans???

    Just a tiny bit of them, and I would expect AGI only to understand that tiny bit as well.
    Understanding not necessarily in the sense of logic or scientific analysis, but also the aforementioned (probably) intuitive things like interpersonal communication.

     

    On 2/20/2020 at 5:46 PM, Strange said:

    If you wanted to build certain behaviours in to an AI (by suitable programming) I'm not sure why that would be "pseudoscience".

    If you want to build 'certain behaviour' into an AI, you would have to understand this - instinct.
    So (how) do we know the logic or content or message of instinct?
    To me, 'instinct' sounds like a term for 'some behaviour the sources of which we can not (yet) explain'.
    But I'm willing to learn.

    On 2/20/2020 at 5:46 PM, Strange said:

    But we can also short-circuit the need for evolution by doing "intelligent design". 

    Oh, I have to look that up.
    "Intelligent design (ID) is a pseudoscientific argument for the existence of God" (WP)
    And this is a science forum, right?

    On 2/20/2020 at 5:46 PM, Strange said:

    who says that they need to [understand humans in the way humans do], anyway?

    Me. If they can not understand humans in the way we do, they can not understand human behaviour, interactions, wishes, markets, politics, culture.
    In this case, why would we want to attribute any 'intelligence' to these systems?

  11. On 2/18/2020 at 3:43 AM, Sensei said:

    If AI has to understand human words, it must be able to see and hear. .. A human without data from other senses is unable to imagine. During the teaching of a human, words are correlated with images and sounds and touch and smell etc. Together they form the full information about the subject.

    Exactly.
    This wisdom has been formulated in the 'semiotic triangle' concept:
    https://en.wikipedia.org/wiki/The_Meaning_of_Meaning

    2 minutes ago, dimreepr said:

    Evolution, at a guess...

    Ok, so the AI species would also have to undergo its own evolution in order to gain instincts.
    Then again, it is hardly probable that they would acquire the same instincts as humans - so they could still not understand humans in the way that humans understand humans.

  12. On 2/17/2020 at 9:08 PM, Strange said:

    Some of these may be instinctive in humans. (Although I don't think that is certain.) In which case, you build the same knowledge into the AI.
    ...
    I think you should go and study some examples of machine learning (for communication, interaction, mechanical skills, etc) before dismissing it as implausible. An argument from ignorance or incredulity is never convincing. As I said in my first post.

    Instinct, ok. How do creatures learn instinctive behaviour? Or how is it transferred to them?
    As long as the specific path of transfer is not understood, the idea of instincts in an AI is pure speculation, pseudoscience.

    BTW, I did study machine learning, and I gave lectures on it, the first in 1989, and I talked to Marvin Minsky about it in Linz in 1990 (Ars Electronica).
    Little has changed since then. - Neural networks do work, no doubt, and this alone is still astonishing to most 'classical' programmers.
    But no system in the field of AI has reached even the level of a cockroach, yet.
    Not in 1990, not in 2000, not in 2010, etc.
    So, regarding good advice, maybe you should read my book "Künstliche Dummheit" (Artificial Stupidity).
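
    As a side note to "neural networks do work": here is a minimal sketch of a tiny two-layer network learning XOR with plain gradient descent. Layer size, learning rate and iteration count are arbitrary illustrative choices, not taken from any real system:

        import numpy as np

        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)

        W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer parameters
        W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer parameters
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        for _ in range(5000):
            h = np.tanh(X @ W1 + b1)                     # hidden activations
            out = sigmoid(h @ W2 + b2)                   # network output
            grad_out = out - y                           # gradient of cross-entropy loss at the output
            grad_h = (grad_out @ W2.T) * (1 - h ** 2)    # backpropagate through tanh
            W2 -= 0.1 * (h.T @ grad_out); b2 -= 0.1 * grad_out.sum(axis=0)
            W1 -= 0.1 * (X.T @ grad_h);   b1 -= 0.1 * grad_h.sum(axis=0)

        print(out.round(2))   # typically approaches [[0], [1], [1], [0]]

    That such a network reliably finds a non-linear mapping on its own is what still astonishes 'classical' programmers - and it is still very far from a cockroach.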

    On 2/17/2020 at 9:08 PM, Strange said:

    Walking is not a binary thing, either. The AI robot's first attempt, after falling over once, might be to proceed by falling, dragging itself forward ...

    If it is not binary - i.e. one cannot clearly say "I can walk" vs. "I can not yet walk" - how does the AI know whether it should continue dragging?

  13. 2 hours ago, Strange said:

    Training can use exactly the same techniques as "experience".

    Training and experience are the same thing.

    It does not make the case for AI easier if you prefer to merge the terms in this way 😉

    There are non-language interpersonal signals in human relations which rely on similar experiences:
    Joy, grief, fear, success, embarrassment, bullying, humor, desperation, optimism etc.
    How do you imagine an AI to gather (or be trained on) these experiences, in order to be able to understand humans?

    2 hours ago, Prometheus said:

    Machine learning employs .. unsupervised techniques .. supervised learning .. and reinforcement learning ..

    A robot may then learn to walk by the experience of continually falling down via reinforcement learning. No words are needed, only a sense of balance.

    Yes, this works.
    But what is the relation to human-human ( /-AI) interpersonal communication?
    Walking either works or not. As long as I fall, I have to try again, ideally with modified movements.
    In interpersonal communication - and further into society - there is no direct, binary success-feedback of this kind.
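
    To make the contrast concrete, here is a minimal sketch of such a reinforcement-learning loop. The environment (env, with reset(), step() and actions) is purely hypothetical; the point is only that "staying upright" provides a clear numeric reward, whereas interpersonal communication provides no comparable signal:

        import random
        from collections import defaultdict

        def train_walker(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
            # Tabular Q-learning: estimate, for each (state, action) pair, how long
            # the walker stays upright from there, purely from trial and error.
            q = defaultdict(float)
            for _ in range(episodes):
                state = env.reset()
                done = False
                while not done:
                    if random.random() < epsilon:
                        action = random.choice(env.actions)                     # explore
                    else:
                        action = max(env.actions, key=lambda a: q[(state, a)])  # exploit
                    next_state, reward, done = env.step(action)                 # reward: 1 while upright, 0 when fallen
                    best_next = max(q[(next_state, a)] for a in env.actions)
                    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
                    state = next_state
            return q

    Nothing comparable to env.step() exists for "did this remark embarrass my counterpart" - that feedback is delayed, ambiguous and itself needs interpretation.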

  14. For training you need some kind of medium or language to transfer knowledge from trainer to trainee.
    As described in 'interpersonal communication' (WP), many signals are not expressed in words, and may not even be expressible in language.
    Experience does not need this.
    A (human / AI) 'intelligent' system can interpret such a signal from its own experience of the same situation.

  15. On 2/12/2020 at 5:28 PM, Strange said:

    Computers can recognise human expressions and gestures and modify their responses appropriately. They can also have (physical or virtual) avatars that create appropriate expressions and gestures.

    And how did they learn these facial expressions and gestures?
    How do humans learn them?

  16. On 2/11/2020 at 3:07 PM, Prometheus said:

    What's the relevance of human conversation not being restricted to logical statements? Do you imagine that computers are limited to receiving logical statements as inputs?

    You expected AGI to manifest in chat bots, right?
    Then they are restricted to language, however logical it may be.
    So let me broaden the argument:
    Communication among humans is not restricted to language.
    For the difference between (online) chat and (real-life) communication, see WP: Interpersonal communication

  17. 6 hours ago, dimreepr said:

    1. Strong AI (AGI) isn't trying to-be-human.

    2. Well, we know they aren't already among us. 😉

    3. Why not the greater development?

     

    Ad 1: I was answering the statement 'Babies are not born understanding, but learn to. Why, in principle, could this not be the same of AGI?'
    I did not say that AGI tries to be something.

    Ad 2: I don't get the joke, sorry. Whom do you mean?

    Ad 3: Greater development is fine. It is just that, from experience, simpler solutions are reached first.
    Before human beings get cloned, we are presented with cloned sheep.
    Before people set foot on Mars, they do it on the Moon, and robots land on Mars.
    Before we all change over to electric cars, some models are already available and we can learn from them.
    Why should AGI appear in one step, out of nowhere?

  18. 22 hours ago, Prometheus said:

    [1.] Why do you think it would take 20 years? It is conceivable that an AI could have conversations with thousands of humans simultaneously...

    [2.] So you advocate brain emulation as opposed to 'pure' AI solutions?

    Ad 1, conversation among humans is not restricted to logical statements, see the 'Semiotic triangle' as a small hint, and 'Sociology' as the broader perspective.
    Practically, there is barely any interaction in day-to-day life where symbolic and recursive elements ('I respect anyone who respects me') are _not_ at play.
    As for the AI project: without showing its own weaknesses, failures, ambitions - basically emotions - it is not plausible that humans will open their 'heart and mind' in conversation with the AI.
    My theory is that these emotions - based on experience - can only evolve among humans in the timeframe they are used to.
    As a side aspect, how do you imagine the physical appearance of the AI that has 'conversations with thousands of humans simultaneously'? Purely virtual, like a chat bot?

    Ad 2, first, I try to refrain from any 'advocacy'; I just want to grant the AI proponents any optimistic assumption that could help their case 😉
    If 'pure AI' could facilitate a super-human intelligence, fine. Then why should the same technology (or phenomenon) not facilitate a super-cockroach intelligence?

  19. 19 hours ago, dimreepr said:

    If you accept the principle.

    why would this follow?

    They have, so a reasonable assumption.

    Sorry, I meant 'worth' not worse.
    So again, the AI-learns-to-be-human project could be worth the insight, even if it is costly, lengthy and - IMHO - prone to failure.

    About the 'secret program' - the situation is analogous to the UFO situation - 'Maybe they are already among us'.
    Yes, maybe they are, but as long as they do not give press conferences at the Cologne Cathedral, 5th Avenue and the Taj Mahal, we (non-secret-service members) can not know - and should not care.

    The other argument was - if super-AI is possible or in development behind closed doors, a lesser, animal-like AI should be possible in front of the doors.
    That would also be a scientific and mass media phenomenon that the project team would be proud to present.

  20. It could, in principle.
    But it would take as much time as among humans alone - around 20 years.
    The outcome would also be uncertain, as among humans.
    And, socially, the AI would surely experience exclusion ('racism'), with various side effects.
    Anyhow, such a project could be worse just for the insights.
    But before that, one could expect a system at a lower level of complexity.
    My recent proposal: Blattella germanica

     

  21. Strange, I have explained "because we are x" in more detail in my publication.
    In short: Human beings, and their mental features, can not be derived from individual properties ('brain') alone. When they appear in groups (German 'Angehäuftsein', roughly 'being aggregated'), new properties, features and powers come into play. Strong AI would have to communicate _with_ humans to become part of, or to extend, these features. To communicate, it has to understand (more than words). And to understand, it has to experience a human life. So 'computer AI' would not suffice; it would have to be life(-like), in several respects.

    That aside, we see the phenomenon that strong AI is presented as something to happen soon, not only by sci-fi authors and 'visionary' scientists, as in previous waves, but by politicians, Fortune 500 companies and much of the mass media. Why do they do that? Can we call Strong AI 'pseudoscience'?

  22. 'Artificial general intelligence (AGI) .. [or strong AI] is the intelligence of a machine that can understand or learn any intellectual task that a human being can.' (Wikipedia)

    Is there any evidence for such a system?
    I have done a good year of research and published on the matter, and up to now I have not read about such a system.
    Furthermore, predictions of strong AI soon to come have been made several times by reputable scientists like Marvin Minsky - and have failed.

    For the time being, Strong AI looks like speculation to me.
    But maybe I have a blind spot, or lack access to the latest developments.
    Enlightenment please.
     
