Posts posted by Prometheus

  1. Has anyone tried repeating the same question multiple times? If ChatGPT works in a similar manner to GPT-3, it's sampling from a distribution of possible tokens (not quite letters/punctuation) at every step. There's also a temperature parameter, T, which allows the model to sample more readily from the tails of that distribution and give less likely answers.
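
    A minimal sketch of what that sampling step looks like, with toy logits and a toy vocabulary rather than anything from the actual ChatGPT implementation:

    import numpy as np

    def sample_token(logits, temperature=1.0, rng=None):
        """Sample one token id from a vector of logits at a given temperature.

        temperature < 1 sharpens the distribution (more deterministic);
        temperature > 1 flattens it, so tail tokens are picked more often.
        """
        rng = rng or np.random.default_rng()
        scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
        probs = np.exp(scaled - scaled.max())        # numerically stable softmax
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    # Toy logits over a 4-token vocabulary. Repeated calls can return different
    # tokens, which is why the same question can get different answers.
    logits = [2.0, 1.0, 0.2, -1.0]
    print([sample_token(logits, temperature=0.7) for _ in range(10)])
    print([sample_token(logits, temperature=1.5) for _ in range(10)])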

  2. 3 hours ago, thewowsignal said:

    Do you think your capabilities are as big as the Universe?

    No, but our aspirations should be as big as the universe.

    The LHC costs roughly $4.5 billion a year. Global GDP is about $85 trillion a year, so the LHC represents roughly 0.005% of humanity's annual wealth, or about 0.03% of the EU's annual GDP. A small price to pay to push at the borders of our ignorance.
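
    The arithmetic behind those fractions (taking global GDP as $85 trillion and, as an assumption not stated above, EU GDP as roughly $15 trillion):

    \[ \frac{4.5\times 10^{9}}{85\times 10^{12}} \approx 5.3\times 10^{-5} \approx 0.005\%, \qquad \frac{4.5\times 10^{9}}{15\times 10^{12}} = 3\times 10^{-4} = 0.03\%. \]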

  3. 10 hours ago, moreno7798 said:

    As stated by Blake Lemoine, he was not having a conversation with just the text chatbot, he was accessing the system for generating chatbots, which by his words is a system that processes all of google's data acquisition assets including vision, audio, language and all of the internet data assets available to google. What do you make of Blake Lemoine? 

    If he was accessing other 'processes' then he was not dealing with LaMDA.

    If he has been giving out information about Google's inner workings I'm not surprised he had to leave; I'm sure he violated many agreements he made when signing up with them. But given what he believed about the AI, he did the right thing. I don't know anything more about him than that.

  4. 9 hours ago, moreno7798 said:

    It begs the question: is a person who is born blind and paralysed, without sense of touch from the neck down, not trained on words? And would that disqualify them from being sentient?

    It's not an analogous situation for (at least) 2 reasons.

    Someone with no senses other than hearing is still not 'trained' only on words, as words form only part of our auditory experience. Nor does LaMDA have any auditory inputs, including words. The text is fed into the model as tokens (not quite the same as words, but close) - there's a small sketch of the difference at the end of this post.

    The human brain/body is a system known, in the most intimate sense, to produce consciousness. Hence, we are readily willing to extend the notion of consciousness to other people, notwithstanding edge cases such as brain-stem death.

    I suspect a human brought up truly on only a single sensory type would not develop far past birth (remembering that the five-senses model was put forward by Aristotle and far underestimates the true number).
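
    As referenced above, a toy illustration of the word/token distinction - an invented vocabulary and a greedy longest-match rule, not LaMDA's actual tokenizer:

    # Toy subword tokenizer. Real models use learned vocabularies (BPE/SentencePiece)
    # with tens of thousands of entries, but the principle is the same:
    # text in, integer ids out, and tokens need not line up with words.
    VOCAB = {"un": 0, "believ": 1, "able": 2, "the": 3, "cat": 4, "s": 5, " ": 6}

    def tokenize(text):
        ids, i = [], 0
        while i < len(text):
            match = next((tok for tok in sorted(VOCAB, key=len, reverse=True)
                          if text.startswith(tok, i)), None)
            if match is None:          # unknown character: skip it in this toy
                i += 1
                continue
            ids.append(VOCAB[match])
            i += len(match)
        return ids

    print(tokenize("unbelievable cats"))   # [0, 1, 2, 6, 4, 5] - two words, six tokens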

  5. On 7/30/2022 at 4:10 PM, moreno7798 said:

    That appears to be incorrect. Blake Lemoine has stated that LaMDA is NOT just a text-based chatbot, it is trained on the entirety of Google's data acquisition assets. Watch the video below:

    If you skip the clickbait videos and go to the actual publication (currently available as a preprint) you'll see exactly what LaMDA has been trained on: 1.56 trillion words. Just text, 90% of it English.

     

    On 7/25/2022 at 1:36 PM, dimreepr said:

    Good point, what level of communication, with our universe, is required for sentience to emerge?

    And what level of communication is required for us to recognise a fellow sentient?

    Level 17 and level 32.

  6. On 7/18/2022 at 1:52 PM, dimreepr said:

    Indeed, how would we know?

    The entire universe exposed to LaMDA is text. It doesn't even have pictures to associate with those words, and has no sensory inputs. To claim that LaMDA, or any similar language model, has consciousness is to claim that language alone is a sufficient condition for consciousness. Investigating the truth of that implicit claim gives us another avenue to explore.

  7. On 7/14/2022 at 8:44 PM, moreno7798 said:

    What do you guys think is happening with LaMDA?

    LaMDA is a language model designed for customer interaction. The Google employee was a prompt engineer tasked with fine-tuning the model to be suitable for these interactions, because out of the box and unguided it could drift towards anything in its training corpus (e.g. it could favour language seen in erotica, which may not be what Google wants - depending on exactly what they're selling).

    Part of its training corpus would have included sci-fi books, some of which would include our imagined interactions with AI. It seems the engineer steered the AI towards these tendencies by asking leading questions. 

  8. On 5/13/2022 at 1:21 PM, Genady said:

    I read this news and couldn't understand what was so astonishing, what did they expect, what new knowledge have they obtained...

    It was unknown whether the plants would germinate at all - the fact they did tells us that regolith did not interfere with the hormones necessary for this process. The plant they chose was the first one to have its genome sequenced, allowing them to look into the transcriptome to identify epigenetic changes due to the regolith, particularly what stress responses were triggered.

    They also compared regolith from three different lunar sites, allowing them to identify differences in morphology, transcriptomes, etc. between sites.

    Full paper here: https://www.nature.com/articles/s42003-022-03334-8

  9. Some people have tried to develop methods of measuring consciousness in the most general sense. I think the most developed idea is integrated information theory, put forward by the neuroscientist Giulio Tononi in 2004. It measures how integrated the various subsystems of a whole are. Even if you accept this as a reasonable measure, actually applying the test means searching over all possible ways of partitioning the system, so 'measuring' the consciousness of a worm with ~300 neurons would currently take on the order of 10^9 years.
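
    A rough illustration of why that brute-force search is hopeless - the number of ways to partition a system explodes with its size (a sketch of the counting only; real Φ calculations also evaluate information measures over each partition, which is far more work again):

    from math import comb

    def bipartitions(n):
        """Number of ways to cut a system of n elements into two non-empty parts."""
        return 2 ** (n - 1) - 1

    def bell(n):
        """Bell number B(n): total number of ways to partition n elements."""
        b = [1]                                   # B(0) = 1
        for i in range(n):
            b.append(sum(comb(i, k) * b[k] for k in range(i + 1)))
        return b[n]

    for n in (5, 20, 50, 300):
        print(f"n={n:>3}: {len(str(bipartitions(n)))}-digit bipartition count, "
              f"{len(str(bell(n)))}-digit partition count")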

  10. 10 hours ago, Genady said:

    I don't think that a substrate matters in principle, although it might matter for implementation. I think intelligence can be artificial. But I think that we are nowhere near it, and that current AI with its current machine learning engine does not bring us any closer to it.

    So a matter of complexity? Fair enough. Thanks for answering so clearly - I ask this question a lot, not just here, and rarely get such a clear answer.

     

    10 hours ago, Genady said:

    But I think that we are nowhere near it, and that current AI with its current machine learning engine does not bring us any closer to it.

    Not any closer?

    There are some in the community who believe that current DNNs will be enough - that it's just a matter of having a large enough network and a suitable training regime. Yann LeCun, the guy who invented CNNs, is probably the most famous.

    Then there are many who believe that symbolic representations need to be engineered directly into AI systems. Gary Marcus is probably the biggest advocate for this.

    Here's a 2 hour debate between them:

     

    There are a number of neuroscientists using AI as a model of the brain. There are some interesting papers arguing that what some networks are doing is at least correlated with activity in certain visual centres of the brain - this interview with a neuroscientist details some of that research, around 30 minutes in, although the whole interview might be of interest to you:

     

    An interesting decision by Tesla was to use vision-only inputs - as opposed to competitors who use multi-modal inputs and combine visual with lidar and other data. Tesla did this because their series of networks was getting confused when the data streams gave apparently contradictory inputs - analogous to how humans get dizzy when the inner ear tells them one thing about motion and the eyes another.

    Things like that make me believe that current architectures are capturing some facets of whatever is going on in the brain, even if they're still missing a lot, so I think they do bring us closer.

  11. 2 minutes ago, studiot said:

    Pure guesswork, no better than the "we will have fusion within 20 years" guess of the 1950s.

    Surely we are talking about now?

    If you're going to ask someone to guess when fusion is going to be a reality, you'd still give more credence to the guesses of engineers and physicists than to those of some random people on the internet, wouldn't you?

  12. On 4/17/2022 at 12:21 PM, Genady said:

    DNN can approximate any function in a model, but the model is given to it.

    What I mean by coming up with a new model, in the astrology example for instance, is: would DNN come up with considering, instead of astrological data, parameters like education of a person, social background, family psychological profile, how much they travel, what do they do, how big is their town, is it rural or industrial, conservative or liberal, etc. etc. ?

    Not sure I follow with the DNA. We could still regard it as searching a solution space on geological timescales, optimising for replicability. What gives DNA its model - surely it is only shaped by natural selection?

    'DNN' is a bit general - a CNN couldn't do this, but a reinforcement learning agent could. All it needs is to be motivated in some sense to explore the input space and to have some metric of its impact on the solution space.

     

    On 4/17/2022 at 12:23 PM, studiot said:

    Some further rambling thoughts comparing development of AI/Computers and Humans.

    1. From earliest times humans developed the concept and implementation of teamwork. That is how they would have taken down a woolly mammoth, for instance.
      Obviously that required lots of humans.
      On the other hand computers are not (as far as I know) developed in the environment of lots of cooperating computers/AIs.
       
    2. Humans learn and grow up under the guidance of older humans, thus using one way of passing on skills and knowledge.
      This might even have led to the concept of a guiding being and religion.
      Computers don't follow this route, so could a computer develop such a concept ?
       
    3. Humans have the advantage over other living beings of being able to participate in the evolution of a species through reproduction, passing on genes.
      Again, as far as I know, this is unavailable to computers.

    1.) We have Generative Adversarial Networks, where two or more networks compete with each other to improve some process, usually generation or classification. There are also agents whose learning is nearly entirely with other AI - MuZero learned to play chess and other games largely through self-play, i.e. by playing against another AI (itself).

    2.) There are a few ways AI currently learns. One is supervised learning, which requires a human to label some data - pointing to a picture and saying 'cat' (maybe 3,000 times, but it's the same idea).

    3.) There are genetic algorithms (a minimal sketch of one is at the end of this post), but I don't think that's what you mean. I don't see what, in principle, will stop us designing a robot able to build, perhaps imperfect, replicas of itself. Once it can, it is subject to evolution. Whether that's a desirable feature is another question.

     

    15 minutes ago, TheVat said:

     if we develop an AGI that replicates all the functions of a human brain...

    But here we're concerned with what a computer can and can't do; it doesn't necessarily need to replicate the functions of a human brain, just the behaviours.

    I'm interested: many people here seem to believe that there is nothing immaterial to the brain, no ghost in the machine, but still maintain that something other than a biological entity cannot be conscious. That seems to imply that substrate matters - that only biological substrates can manifest consciousness. If I'm interpreting that correctly, what is unique about the arrangement of atoms in the human that prevents consciousness manifesting in other, inorganic, arrangements?
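
    As referenced above, a minimal sketch of a genetic algorithm - a toy problem of evolving a bitstring towards a target via mutation and selection, nothing to do with self-replicating robots beyond that loop:

    import random

    TARGET = [1] * 20                                 # toy goal: an all-ones bitstring

    def fitness(genome):
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.05):
        return [1 - g if random.random() < rate else g for g in genome]

    def evolve(pop_size=30, generations=100):
        population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
        for gen in range(generations):
            population.sort(key=fitness, reverse=True)
            if fitness(population[0]) == len(TARGET):
                return gen, population[0]             # perfect genome found
            parents = population[: pop_size // 2]     # selection: keep the fitter half
            population = parents + [mutate(random.choice(parents)) for _ in parents]
        return generations, population[0]

    print(evolve())   # e.g. (23, [1, 1, 1, ...]) - generations needed, best genome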

  13. The impression I get from the neuro and computer science community is that people think it is a computation because it's the only viable naturalistic/materialistic explanation on the table. Kind of like how abiogenesis is the only game in town regarding the origin of life - without supernatural appeals, what else could it be?

    That said there are some who won't bet on it:

     

    And Penrose offers a quantum alternative:

     

  14. On 4/15/2022 at 11:00 AM, Genady said:

    I don't know what "labeling data" is and how it relates to coming up with a new model.

    I thought that's what you meant when you mentioned the output space. Labelling data is basically a way for humans to tell a model what something is, i.e. labelling a cat picture as 'cat'.

     

    On 4/15/2022 at 11:00 AM, Genady said:

    But will it come up with a new model?

    If we are saying that neural networks can approximate any model, then all we need is a way for the network to search that model space. Having it human-directed is one (and perhaps the preferable) way, but you could also have some kind of search and selection process - i.e. perturb the model every now and then, compare the results to reality, and prefer models that minimise that distance.

    Some networks already do something like this - they have a 'curiosity' or 'exploration' parameter that controls how often the network should try something new. One network that was trained to play Duke Nukem ended up addicted to an in-game TV - the TV was giving a constant stream of new data and its curiosity parameter was jacked up to max.
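
    A minimal sketch of what such an exploration parameter looks like in practice - epsilon-greedy action selection on a toy three-armed bandit (the curiosity bonuses used in those agents are more elaborate, but the knob is the same idea):

    import random

    TRUE_PAYOFFS = [0.2, 0.5, 0.8]                    # toy 3-armed bandit

    def run(epsilon, steps=5000):
        estimates, counts, total = [0.0] * 3, [0] * 3, 0.0
        for _ in range(steps):
            if random.random() < epsilon:             # explore: try something new
                arm = random.randrange(3)
            else:                                     # exploit: best current estimate
                arm = max(range(3), key=lambda a: estimates[a])
            reward = 1.0 if random.random() < TRUE_PAYOFFS[arm] else 0.0
            counts[arm] += 1
            estimates[arm] += (reward - estimates[arm]) / counts[arm]
            total += reward
        return total / steps

    # With no exploration the agent can get stuck on a poor arm; with too much
    # it wastes pulls on arms it already knows are bad.
    for eps in (0.0, 0.1, 0.5):
        print(f"epsilon={eps}: average reward ~ {run(eps):.2f}")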

  15. On 4/15/2022 at 1:29 PM, Dhamnekar Win,odd said:

    What is this? I don't get it.😕🤔😧

    Learning to code is learning to troubleshoot. I suggest you go through each line of code separately and check whether it is doing what you think it should do. I don't use R, but I can see two places where this snippet of code fails.

  16. On 4/13/2022 at 4:29 PM, Genady said:

    This is so, if my understanding of what constitutes a religion is thrown away. What is a defining feature then? How do I know what is and what is not a religion? Can we apply that test to Marxism (because I know about it more than I ever wanted)?

    There isn't a single agreed upon definition of religion - it's a fuzzy and contested concept. Hence, everyone here could define religion in such a way that we are all right.

    My problem with most definitions of religion, including the one that dominates here - belief in the supernatural (usually a god) - is that it is a very Western-centric perspective with roots in late-19th-century anthropologists like E. B. Tylor. Our modern concept of religion is only about two centuries old, and was formed by Protestants who saw God in every religion they studied - because when all you have is a hammer...

    Whether Marxism is a religion - I don't know nearly enough about it, but I'd be surprised if there weren't some definitions of religion that included it.

  17. 17 hours ago, Genady said:

    As I understand it, DNN can approximate any function in a given model, i.e. given input and output spaces. What are these spaces, is up to human.

    But it's more universal than that - those spaces need not be defined by a human. They could be defined by a selection process, similar to how human ingenuity was shaped by natural selection. Already we have self-supervised models, where the agent labels the data itself (there's a toy sketch at the end of this post). Whether that is a desirable thing is another question.

    13 hours ago, Genady said:

    What will DNN do if the training data is astrological data of people as input and their life affairs as output? It can approximate any function in this model. But any function will be garbage anyway.

    Then it would have made the same mistake humans did for thousands of years.
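
    As referenced above, a toy sketch of self-supervision: the labels come from the data itself - here, predicting the next value of a sequence - with no human annotation (illustrative only, not how any particular model is built):

    import numpy as np

    # Raw, unlabelled 'data': a noisy sine wave.
    series = np.sin(np.linspace(0, 20, 500)) + 0.05 * np.random.randn(500)

    # Self-supervision: each window of 5 past values is an input, and the value
    # that follows it is the label. No human ever annotates anything.
    X = np.stack([series[i:i + 5] for i in range(len(series) - 5)])
    y = series[5:]

    # Fit the simplest possible predictor (linear least squares) on those
    # self-generated labels and check how well it tracks the next value.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("mean absolute error:", np.mean(np.abs(X @ w - y)))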

  18. 3 hours ago, studiot said:

    But it needs to develop its response to be correct !

    Humans don't seem bothered by this difficulty.

    But what is correct? Especially in the context of language models.

    Fill in the blank here:

    Red Bull gives you _____.

    If you said wings, as GPT-3 did, you'd be wrong according to a recent benchmark paper, which scores answers that are not factually correct as simply wrong.

    But this ignores a large function of language. 'Diabetes' might be more factually correct, but 'wings' is also an appropriate answer because it's funny. This is particularly pertinent if we want AI to be creative.

     

    21 minutes ago, Genady said:

    I try to narrow the original passage to one item: AI can't come up with new models, it can only optimize models created by humans.

    I'm not sure that's true given the universal approximation theorem for neural networks, which, if I understand it correctly, states that a neural network can (at least theoretically) approximate any model.
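
    For reference, one standard form of that theorem (Cybenko/Hornik): for any continuous function on a compact domain and any desired accuracy, a single hidden layer with enough units and a suitable non-linearity suffices:

    \[ \sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} \alpha_i \, \sigma(w_i^\top x + b_i) \right| < \varepsilon \]

    for some N, weights \(w_i \in \mathbb{R}^n\) and scalars \(\alpha_i, b_i\), where \(f \in C(K)\) with \(K \subset \mathbb{R}^n\) compact, \(\varepsilon > 0\), and \(\sigma\) is a fixed sigmoidal (more generally, non-polynomial) activation. Note it says nothing about how to find those weights or how large N must be.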
