
Everything posted by Genady

  1. Genady replied to Genady's topic in Science News
    Yes, but I think (1) time was not compressed, and (2) IIRC, the time when the universe was compressed to the size of a nucleus was before inflation, and back then even the unified force did not exist yet; it supposedly appeared in the decay of the inflaton field, when the universe was about the size of a marble.
  2. Going back to the OP's topic, I think it belongs in the biology forum rather than in philosophy.
  3. There are MRI and other similar methods; many brains are already well documented, and re-examining the existing images might be sufficient. I don't know why they would look for more.
  4. Where did I say that I would do any of this?
  5. I'm sorry, but I still don't know what your question is. Can you just ask it directly, please?
  6. You're right, perhaps consciousness is different. More fundamental and, perhaps, more objective. For example, it may be an ability of the brain to take some of its own processes as input, while brains without consciousness process only inputs that arrive from elsewhere. In that case, we might eventually find out which brain structures provide this ability and then look for similar structures in other creatures.
  7. Genady posted a topic in Science News
    Let me be the first to announce the birth of a new science. Lee Smolin et al. explain it in a new paper, Biocosmology: Towards the birth of a new science.
  8. The impact is obvious on this small island: the measures go up, and in 1-2 weeks the numbers go down; the measures go down, and in 1-2 weeks the numbers go up.
  9. Many thanks. +1
  10. Remember our discussion about free will a couple of months ago? My resolution is the same: Just different reference frames.
  11. This crawling neutrophil appears to be consciously chasing that bacterium:
  12. I've thought of a test for whether an AI system understands human speech: give it a short story and ask questions that require an interpretation of the story rather than finding an answer in it. For example*, consider this human conversation:
    Carol: Are you coming to the party tonight?
    Lara: I’ve got an exam tomorrow.
    On the face of it, Lara’s statement is not an answer to Carol’s question. Lara doesn’t say Yes or No. Yet Carol will interpret the statement as meaning “No” or “Probably not.” Carol can work out that “exam tomorrow” involves “study tonight,” and “study tonight” precludes “party tonight.” Thus, Lara’s response is not just a statement about tomorrow’s activities; it contains an answer and reasoning concerning tonight’s activities. To see whether an AI system understands it, ask, for example: Is Lara's reply an answer to Carol's question? Is Lara going to the party tonight, Yes or No? Etc. I haven't seen this kind of test in natural language processing systems. If anyone knows of something similar, please let me know.
    *This example is from Yule, George. The Study of Language, 2020.
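The test described above could be harnessed as something like the following sketch. The `ask_model` callable is a hypothetical stand-in for whatever AI system is being evaluated, not a real API:

```python
# Sketch of the proposed comprehension test. `ask_model` is a hypothetical
# callable standing in for the AI system under evaluation; any real system
# would be plugged in there.

STORY = ("Carol: Are you coming to the party tonight?\n"
         "Lara: I've got an exam tomorrow.")

QUESTIONS = [
    "Is Lara's reply an answer to Carol's question?",
    "Is Lara going to the party tonight, Yes or No?",
]

def build_prompt(story, question):
    """Combine the story with an interpretation question whose answer
    is not stated literally in the story."""
    return f"Read this conversation:\n{story}\n\nQuestion: {question}"

def run_test(ask_model, story=STORY, questions=QUESTIONS):
    """Return the model's answer to each interpretation question."""
    return [ask_model(build_prompt(story, q)) for q in questions]
```

A model "passes" only if its answers match what a human reader would infer (here, that Lara's reply does answer the question, and that the answer is No).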
  13. It seems to me that the question then shifts to, "What constitutes a thing?"
  14. We cannot explain to other humans the meaning of finite numbers either. How do you explain the meaning of "two"?
  15. I don't know where it is, but I've heard it many times from the mods: "Rule 2.7 requires the discussion to take place here ("material for discussion must be posted")"
  16. Is such a test needed? Isn't everything conscious?
  17. I remember, I had it, too. Except it was called something else. I don't remember what, but it was in Cyrillic. The metal parts looked exactly the same, but the architectural elements that look plastic here were wooden pieces in my case. Even better look and feel that way. My mother was an architect and my father was a construction engineer - they made sure I got such stuff...
  18. On the Origin of Hallucinations in Conversational Models: Is it the Datasets or the Models? (arXiv:2204.07931) In knowledge-based conversational AI systems, "hallucinations" are responses that are factually invalid, fully or partially. It appears that AI produces them a lot. This study investigated where these hallucinations come from, and it turns out that a big source is the datasets used to train these AI systems. On average, the responses on which the systems are trained contain about 20% factual information, while the rest is hallucination (~65%), uncooperative (~5%), or uninformative (~10%). On top of this, it turns out that the systems themselves amplify hallucinations to about 70%, while reducing factual information to about 11%, increasing uncooperative responses to about 12%, and reducing uninformative ones to about 7%. They are getting really human-like, evidently...
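As a back-of-the-envelope check, the approximate figures above can be tabulated and compared directly (the numbers are the rough percentages quoted from the paper, not exact values):

```python
# Approximate response-category percentages reported in the paper
# (rounded figures as quoted above; illustrative only).
training_data = {"factual": 20, "hallucination": 65,
                 "uncooperative": 5, "uninformative": 10}
model_output = {"factual": 11, "hallucination": 70,
                "uncooperative": 12, "uninformative": 7}

# Per-category shift the models introduce on top of their training data:
# positive means amplified, negative means reduced.
shift = {k: model_output[k] - training_data[k] for k in training_data}
```

The sign pattern of `shift` is the paper's point: the models amplify hallucinations and uncooperative answers while shrinking the already small factual share.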
  19. OK, it might constitute a part of the solution. Like hair is a part of a dog.
  20. I don't think that the substrate matters in principle, although it might matter for implementation. I think intelligence can be artificial. But I think that we are nowhere near it, and that current AI, with its current machine-learning engine, does not bring us any closer to it.
  21. Unless all these programs are already installed in the same computer.
  22. Yes, this is a known concern.
  23. But I didn't say, DNA.
  24. I think I can program in random replication errors. Maybe I don't understand what you mean here.
