
About Bernd.Brincken


  1. No disagreement, or 'no dichotomy', as my friend Petr would say. You can describe intelligence in this way, but you will find other, much tighter definitions, for example those underlying intelligence tests. But this is a question completely separate from the title of this thread, and IMHO fruitless to follow.
  2. Why would you expect any more insight if you "get into its head"? Is current medicine not able to get into people's heads? And what have they learned? And IMHO, an exact definition of 'intelligence' is not necessary. Look at the AGI description: "... understand or learn any intellectual task that a human being can." This can be verified or falsified, with a good chance of consensus, without a discourse about the term.
  3. It is also not in the scope of this thread 😉 - which is about Strong AI, or Artificial General Intelligence (AGI). Wikipedia describes it, following other sources, as "the hypothetical intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can." This is much more than "computational problems", because humans solve many problems that are not computational, esp. those involving interactions between humans. But if AGI might (only) be achieved by "coming alive", that's a separate discussion. Just "evolving out of resources" is not necess…
  4. The scenario of AI evolution was a concession that the 'creators' need not understand and design an AI in order to … let it happen. If we humans 'jump over that stage' in order to achieve the aim in a less 'slow and resource-intensive' way, we'd have to understand the conditions, techniques and paths. And then we encounter precisely the obstacles that were discussed before in this thread.
  5. taeto, AFAIK the idea is that AI (-life?) evolves amid a soup of resources, materials, energy and patterns around it, which may have been supplied by humans. Creation has an aspect of intention - which need not be the case here. Given this scenario, I cannot understand why a 'lower' AI lifeform would not manifest before a 'higher' one. So one of these 'lower' - not-yet-strong - AIs would be the first evolved AI species that we humans see. Where is it?
  6. Yes, I meant 'species' in biological terms, like Blattella germanica, see one of my former posts. About AI species vs. human researchers - there are some slight differences: The AI species still has to learn everything about its environment, all natural phenomena, its own survival - and finally about human beings (to make the step from AI to AGI). The human researcher's species went through these processes already, over thousands of years, or millions if you count its ancestor species. So, no, the human researcher need not interact with ants to understand them enough to…
  7. We were discussing the probability of strong AI, and my main argument was: If an AI species seeks to understand human behaviour, it has to interact with humans in their interaction patterns. This is important for anybody interested in the topic of this thread. Note it. If the topic does not interest you, you will surely find better entertainment elsewhere. Yes, but then you would have to build the whole environment around it that made humans learn their instincts. Reminds me of the chapter about 42 in Douglas Adams' Hitchhiker's Guide to the Galaxy.
  8. Strange - if an AI species seeks to understand human behaviour, it can (likely) not read the human dendrites' signals directly, or drink their brain fluid, to gather this understanding. It (likely) has to interact with humans in their interaction patterns. How long do you typically know a person before you trust her and report about your feelings, fears, dreams etc. - days? Or more like months or years? So, IMHO, this process cannot be significantly accelerated by technology. Instincts not transmitted genetically - hermeneutics tells me that it is impossible to prove that something do…
  9. I did not say that, not in these nor other words. And by the way, I also do not support it - see my proposal to choose an animal species as a benchmark for AI progress. In principle, an AI species could learn everything that humans did, but it would need a similar amount of time; so just economy-wise it is not probable. Seems like you want to hear this simplification because it is easy to argue against. About instincts, the theory that they are genetically based is not supported by genetics - AFAIK no instinct has been identified in any genome to date. Always willing to learn.
  10. Sorry for the delay, I was ordered to do empirical social research at the Cologne carnival. Just a tiny bit of them, and I would expect AGI only to understand that tiny bit as well. Understanding not necessarily via logic or scientific analysis - also the aforementioned (probably) intuitive things like interpersonal communication. If you want to build 'certain behaviour' into an AI, you would have to understand this - instinct. So (how) do we know the logic or content or message of instinct? To me, 'instinct' sounds like a term for 'some behaviour the sources of which we c…
  11. Exactly. This wisdom has been formulated in the 'semiotic triangle' concept: https://en.wikipedia.org/wiki/The_Meaning_of_Meaning Ok, so the AI species would also have to undergo its own evolution in order to gain instincts. Then again, it is hardly probable that it would attain (/acquire) the same instincts as humans - it could still not understand humans in the same way that humans understand humans.
  12. Instinct, ok. How do creatures learn instinctive behaviour? Or how is it transferred to them? As long as the specific path of transfer is not understood, the idea of instincts in an AI is pure speculation, pseudoscience. BTW, I did study machine learning, and I gave lectures on it, the first in 1989, and I talked to Marvin Minsky about it in Linz in 1990 (Ars Electronica). Little has changed since then. - Neural networks do work, no doubt, and this alone is still astonishing to most 'classical' programmers. But no system in the field of AI has reached even the level of a cockroach yet…
  13. It does not make it easier for the AI cause if you prefer to mix the terms in this way 😉 There are non-language interpersonal signals in human relations which rely on similar experiences: joy, grief, fear, success, embarrassment, mobbing, humor, desperation, optimism etc. How do you imagine an AI gathering (or being trained on) these experiences, in order to be able to understand humans? Yes, this works. But what is the relation to human-human (/-AI) interpersonal communication? Walking either works or not. As long as I fall, I have to try again, wisely with modified movements…
  14. For training you need some kind of medium or language to transfer knowledge from trainer to trainee. As described under 'interpersonal communication' (WP), many signals are not expressed in words, and may not even be expressible in language. Experience does not need this: a (human / AI) 'intelligent' system can interpret such a signal out of its experience with the same situation.
  15. Let's assume humans learn them by experience. If experience were not necessary, if this wisdom could be transferred completely by training - in what kind of medium would you explain the meaning of a gesture to our AI? In written words?
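The remark in post 12 that "neural networks do work" can be made concrete with a minimal sketch. The following is not from the thread - the function names and parameters are illustrative choices - but it shows the simplest trainable neural unit, a single perceptron, learning the OR function from its four examples:

```python
# Illustrative sketch (not from the original posts): a single perceptron,
# the simplest trainable neural unit, learning the logical OR function.
# All names and hyperparameters here are arbitrary choices for the demo.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Train weights w and bias b with the classic perceptron update rule:
    on each error, nudge the weights toward the correct answer."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out           # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The four input/output pairs of logical OR.
or_samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w, b = train_perceptron(or_samples)
predictions = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
               for (x1, x2), _ in or_samples]
print(predictions)  # -> [0, 1, 1, 1], matching the OR truth table
```

This converges because OR is linearly separable; the same loop fails on XOR, which is exactly the kind of limitation that fed the classical-vs-connectionist debates the post alludes to.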