Everything posted by Bernd.Brincken

  1. No disagreement, or 'no dichotomy', as my friend Petr would say. You can describe intelligence in this way, but you will find other, much tighter definitions, for example as the basis of intelligence tests. But this is a question completely separate from the title of this thread, and IMHO fruitless to follow.
  2. Why would you expect any more insight if you "get into its head"? Is current medicine not able to get into people's heads? And what have they learned? And IMHO, an exact definition of 'intelligence' is not necessary. Look at the AGI description: "... understand or learn any intellectual task that a human being can." This can be verified or falsified, with a good chance of consensus, without a discourse about this term.
  3. It is also not in the scope of this thread 😉 - which is about Strong AI, or Artificial General Intelligence (AGI). Wikipedia describes it, following other sources, as "the hypothetical intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can." This is much more than "computational problems", because humans solve many problems that are not computational, esp. those involving interactions between humans. But if AGI might (only) be achieved by "coming alive", that's a separate discussion. Just "evolving out of resources" is not necessarily life; it just means that a system was not created intentionally. BTW, I added the "hypothetical" in the WP article, plus two sources, in the course of this discussion.
  4. The scenario of AI evolution was a concession that the 'creators' need not understand and design an AI in order to .. let it happen. If we humans 'jump over that stage' in order to achieve the aim in a less 'slow and resource intensive' way - we'd have to understand the conditions, techniques and paths. And then we encounter precisely the obstacles that were discussed before in this thread.
  5. taeto, AFAIK the idea is that AI (-life?) evolves amid a soup of resources, materials, energy and patterns around it, which may have been supplied by humans. Creation has an aspect of intention - which need not be the case here. Given this scenario, I cannot see why a 'lower' AI lifeform would not manifest before a 'higher' one. So one of these 'lower' - not-yet-strong - AIs would be the first evolved AI species that we humans see. Where is it?
  6. Yes, I meant 'species' in biological terms, like Blattella germanica, see one of my former posts. About AI species vs. human researcher - there are some slight differences: The AI species still has to learn everything about its environment, all natural phenomena, its own survival - and finally about human beings (to make the step from AI to AGI). The human researcher's species went through these processes already, over thousands of years, or millions if you count its ancestor species. So, no, the human researcher need not interact with ants to understand them well enough to be able to, for example, program (important parts of) their behaviour in software.
  7. We were discussing the probability of strong AI, and my main argument was: If an AI species seeks to understand human behaviour, it has to interact with humans in their interaction patterns. This is important for anybody interested in the topic of this thread. Note it. If the topic does not interest you, you will surely find better entertainment elsewhere. Yes, but then you would have to build the whole environment around it that made humans learn their instincts. Reminds me of the '42' chapter in Douglas Adams' Hitchhiker's Guide to the Galaxy.
  8. Strange - if an AI species seeks to understand human behaviour, it can (likely) not read the humans' dendrite signals directly or drink their brain fluid to gather this understanding. It (likely) has to interact with humans in their interaction patterns. How long do you typically know a person before you trust her and report about your feelings, fears, dreams etc. - days? Or more like months or years? So, IMHO, this process cannot be significantly accelerated by technology. Instincts not transmitted genetically - hermeneutics tells me that it is impossible to prove that something does not exist. But it looks like the dominant attitude among biologists. Like: "Accordingly [to Hailman], instincts are not preprogrammed, hardwired, or genetically determined; rather, they emerge each generation through a complex cascade of physical and biological influences" https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5182125/
  9. I did not say that, not in these nor other words. And by the way, I also do not support it - see my proposal to choose an animal species as a benchmark for AI progress. In principle, an AI species could learn everything that humans did, but it would need a similar amount of time; so, if only for economic reasons, it is not probable. Seems like you want to hear this simplification because it is easy to argue against. About instincts, the theory that they are genetically based is not supported by genetics - AFAIK no instinct has been identified in any genome to date. Always willing to learn. Basically, let us come back to the title of this thread - Evidence for Strong AI. No evidence has been presented so far - right?
  10. Sorry for the delay, I was ordered to do empirical social research at the Cologne carnival. Just a tiny bit of them, and I would expect AGI only to understand that tiny bit as well. Understanding not necessarily in the sense of logical or scientific analysis, but also the aforementioned (probably) intuitive things like interpersonal communication. If you want to build 'certain behaviour' into an AI, you would have to understand this - instinct. So (how) do we know the logic or content or message of instinct? To me, 'instinct' sounds like a term for 'some behaviour the sources of which we cannot (yet) explain'. But I'm willing to learn. Oh, I have to look that up. "Intelligent design (ID) is a pseudoscientific argument for the existence of God" (WP) And this is a science forum, right? Me. If they cannot understand humans in the way we do, they cannot understand human behaviour, interactions, wishes, markets, politics, culture. In this case, why would we want to attribute any 'intelligence' to these systems?
  11. Exactly. This wisdom has been formulated in the 'semiotic triangle' concept: https://en.wikipedia.org/wiki/The_Meaning_of_Meaning Ok, so the AI species would also have to undergo its own evolution in order to gain instincts. Then again, it is hardly probable that it would attain (/acquire) the same instincts as humans - so it could still not understand humans in the way that humans understand humans.
  12. Instinct, ok. How do creatures learn instinctive behaviour? Or how is it transferred to them? As long as the specific path of transfer is not understood, the idea of instincts in an AI is pure speculation, pseudoscience. BTW, I did study machine learning, and I gave lectures on it, the first in 1989, and I talked to Marvin Minsky about it in Linz in 1990 (Ars Electronica). Little has changed since then. - Neural networks do work, no doubt, and this alone is still astonishing to most 'classical' programmers. But no system in the field of AI has reached even the level of a cockroach yet. Not in 1990, not in 2000, not in 2010, etc. So, regarding good advice, maybe you should read my book "Künstliche Dummheit" (Artificial Stupidity). If it is not binary - i.e. one can clearly say "I can walk" vs. "I cannot yet walk" - how does the AI know if it should continue dragging?
  13. It does not make it easier for the AI cause if you prefer to mix the terms in this way 😉 There are non-language interpersonal signals in human relations which rely on similar experiences: joy, grief, fear, success, embarrassment, mobbing, humor, desperation, optimism etc. How do you imagine an AI gathering (or being trained on) these experiences, in order to be able to understand humans? Yes, this works. But what is the relation to human-human (/-AI) interpersonal communication? Walking either works or not. As long as I fall - I have to try again, wisely with modified movements. In interpersonal communication - and further into society - there is no direct, binary success-feedback of this kind.
  14. For training you need some kind of medium or language to transfer knowledge from trainer to trainee. As described in 'interpersonal communication' (WP), many signals are not expressed in words, and may not even be expressible in language. Experience does not need this. A (human / AI) 'intelligent' system can interpret such a signal from its experience of the same situation.
  15. Let's assume humans learn them by experience. If experience was not necessary, if this wisdom could be transferred completely by training - in what kind of medium would you explain the meaning of a gesture to our AI? In written words?
  16. And how did they learn these facial expressions and gestures? How do humans learn them?
  17. You expected AGI to manifest in chat bots, right? Then they are restricted to language, however logical it may be. So let me broaden the argument: Communication among humans is not restricted to language. For the difference between (online) chat and (RL) communication, see WP: Interpersonal communication
  18. Ad 1: I was answering the statement 'Babies are not born understanding, but learn to. Why, in principle, could this not be the same for AGI?' I did not say that AGI tries to be something. Ad 2: I don't get the joke, sorry. Whom do you mean? Ad 3: Greater development is fine. It is just that, by experience, simpler solutions are reached first. Before human beings are cloned, we are presented with cloned sheep. Before people set foot on Mars, they do it on the moon, and robots land on Mars. Before we all change over to electric cars, some models are already available and we can learn from them. Why should AGI appear in one step, out of nowhere?
  19. Ad 1, conversation among humans is not restricted to logical statements, see the 'Semiotic triangle' as a small hint, and 'Sociology' as the broader perspective. Practically, there is barely any interaction in day-to-day life where symbolic and recursive elements ('I respect anyone who respects me') are _not_ at play. For the AI project: without showing its own weaknesses, failures, ambitions - basically emotions - it is not plausible that humans will open their 'heart and mind' for the AI conversation. My theory is that these emotions - based on experience - can only evolve among humans in the timeframe they are used to. As a side aspect, how do you imagine the physical appearance of the AI that has 'conversations with thousands of humans simultaneously'? Purely virtual, like a chat bot? Ad 2, first, I try to refrain from any 'advocation'; I just want to offer the AI proponents any optimistic assumption that could help their case 😉 If 'pure AI' would facilitate a super-human intelligence, ok fine. But then why should the same technology (or phenomenon) not facilitate a super-cockroach intelligence?
  20. Sorry, I meant 'worth', not 'worse'. So again, the AI-learns-to-be-human project could be worth the insight, even if it is costly, lengthy and - IMHO - prone to failure. About the 'secret program' - the situation is analogous to the UFO situation - 'Maybe they are already among us'. Yes, maybe they are, but as long as they do not give press conferences at the Cologne cathedral, 5th Avenue and the Taj Mahal, we (non-secret-service members) cannot know - and should not care. The other argument was: if super-AI is possible or in development behind closed doors, a lesser, animal-like AI should be possible in front of the doors. That would also be a scientific and mass-media phenomenon that the project team would be proud to present.
  21. It could, in principle. But it would take as much time as among humans alone - around 20 years. The outcome would also be uncertain, as among humans. And, socially, the AI would surely experience exclusion ('racism'), with various side effects. Anyhow, such a project could be worse just for the insights. But, before that, one could expect a system on a lower level of complexity. My proposal was recently: Blattella germanica
  22. Strange, I have explained "because we are x" in more detail in my publication. In short: Human beings, and their mental features, cannot be derived from individual properties ('brain') alone. When they appear in groups (German 'Angehäuftsein', roughly 'being amassed'), new properties, features and powers come into play. Strong AI would have to communicate _with_ humans to become part of, or extend, these features. To communicate, it has to understand (more than words). And to understand, it has to experience a human life. So 'computer AI' would not suffice, it would have to be life(-like), in several aspects. That aside, we see the phenomenon that strong AI is presented as something to happen soon, not only by sci-fi authors and 'visionary' scientists, like in previous waves, but by politicians, Fortune 500 companies and many mass media. Why do they do that? Can we call Strong AI 'Pseudoscience'?
  23. 'Artificial general intelligence (AGI) .. [or strong AI] is the intelligence of a machine that can understand or learn any intellectual task that a human being can.' (Wikipedia) Is there any evidence for such a system? I have done a good year of research and published on the matter, and up to now I have not read about such a system. Furthermore, predictions of strong AI soon to come have been made by reputable scientists like Marvin Minsky several times - and failed. For the time being, Strong AI looks like speculation to me. But maybe I have a blind spot, or lack access to the latest developments. Enlightenment, please.
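The feedback argument in posts 12 and 13 - walking gives a crisp, binary success signal, while interpersonal communication gives only diffuse, noisy feedback - can be illustrated with a toy sketch in Python. All function names, thresholds and numbers here are invented purely for illustration; this is not a model of any real system.

```python
import random

def can_walk(practice_rounds: int) -> bool:
    """Binary feedback: at some point the signal flips, so the learner
    knows unambiguously whether to keep trying (threshold invented)."""
    return practice_rounds >= 5

# Trial and error works here: fall, modify the movement, try again,
# and stop the moment the binary signal says "I can walk".
rounds = 0
while not can_walk(rounds):
    rounds += 1
print(rounds)  # -> 5, a crisp stopping point

def trust_level(conversations: int) -> float:
    """Interpersonal feedback: a noisy, gradual score with no observable
    threshold - there is no moment at which the signal 'flips'."""
    return min(1.0, conversations / 1000) * random.uniform(0.5, 1.0)

# Here a learner cannot tell success from failure on a single try:
# two identical utterances can yield different scores, and no score
# announces "you now understand humans".
```

The contrast is the point of the posts: a learner with a binary signal has a well-defined stopping condition, while the noisy, unbounded signal offers no such condition to optimize against.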