
Prometheus

Senior Members
  • Posts: 1889
  • Joined
  • Last visited
  • Days Won: 17

Prometheus last won the day on February 22 2022

Prometheus had the most liked content!

3 Followers

Profile Information

  • Interests
    Building statistical models for Raman spectroscopy.

Recent Profile Visitors

27074 profile views

Prometheus's Achievements

Primate (9/13)

Reputation: 643

  1. Has anyone tried repeating the same question multiple times? If ChatGPT works in a similar manner to GPT-3, it samples from a distribution over possible tokens (word fragments rather than whole words or single letters) at every step, so the same prompt can give different answers. There's also a temperature parameter, T, which controls how willing the model is to sample less likely tokens from the tails of the distribution (a toy sketch of temperature sampling follows this list).
  2. No, but our aspirations should be as big as the universe. The LHC costs roughly $4.5 billion a year against a global GDP of about $85 trillion a year: roughly 0.005% of humanity's annual output, or about 0.03% of the EU's annual GDP (the arithmetic is checked in a snippet after this list). A small price to pay to push at the borders of our ignorance.
  3. If he was accessing other 'processes' then he was not dealing with LaMDA. If he has been giving out information about Google's inner workings, I'm not surprised he had to leave; I'm sure he violated many of the agreements he signed with them. But given what he believed about the AI, he did the right thing. I don't know anything more about him than that.
  4. It's not an analogous situation, for (at least) two reasons. Someone with no senses other than hearing is still not 'trained' only on words, since words form only part of our auditory experience. Nor does LaMDA have any auditory inputs, including spoken words: the text is fed into the model as tokens (not quite the same as words, but close; see the toy tokenizer after this list). And the human brain/body is a system known, in the most intimate sense, to produce consciousness, so we are readily willing to extend the notion of consciousness to other people, notwithstanding edge cases such as brain-stem death. I suspect a human brought up truly on only a single sensory type would not develop far past birth (remembering that the five-senses model was put forward by Aristotle and far underestimates the true number).
  5. If you skip the clickbait videos and go to the actual publication (currently available as a preprint) you'll see exactly what LaMDA has been trained on: 1.56 trillion words. Just text, 90% of it English.
  6. The entire universe exposed to LaMDA is text. It doesn't even have pictures to associate with those words, and it has no sensory inputs. To claim that LaMDA, or any similar language model, has consciousness is to claim that language alone is a sufficient condition for consciousness. Investigating the truth of that implicit claim gives us another avenue to explore.
  7. LaMDA is a language model designed for customer interaction. The Google employee was a prompt engineer tasked with fine-tuning the model to be suitable for those interactions, because out of the box and unguided it could drift towards anything in its training corpus (e.g. it could favour language seen in erotic novels, which may not be what Google wants, depending on exactly what they're selling). Part of its training corpus would have included sci-fi books, some of which include our imagined interactions with AI. It seems the engineer steered the AI towards these tendencies by asking leading questions.
  8. Dunno, but the PI of that Nature paper is very active on Twitter: he came up with the idea and would probably answer your question.
  9. Assembly theory posits that complex molecules found in large abundance are (almost surely) universal biosignatures. From their publication: https://www.nature.com/articles/s41467-021-23258-x At the moment it only has proof of concept with mass spectrometry, but it's a general theory of complexity, so it could work with other forms of spectroscopy (a toy string analogue of the assembly index is sketched after this list). Interesting direction anyway.
  10. It was unknown whether the plants would germinate at all; the fact that they did tells us the regolith did not interfere with the hormones necessary for this process. The plant they chose was the first to have its genome sequenced, allowing them to look into the transcriptome to identify epigenetic changes due to the regolith, particularly which stress responses were triggered. They also compared regolith from three different lunar sites, allowing them to identify differences in morphology, transcriptomes etc. between sites. Full paper here: https://www.nature.com/articles/s42003-022-03334-8
  11. Sounds like you're describing panpsychism. There's a philosopher called Philip Goff who articulates this view quite well.
  12. Some people have tried to develop methods of measuring consciousness in the most general sense. I think the most developed idea is integrated information theory, put forward by a neuroscientist in 2004. It measures how integrated the various subsystems of a whole are. Even if you accept this as a reasonable measure, actually applying the test means searching over all possible ways of partitioning the system, so 'measuring' the consciousness of a worm with ~300 neurons would currently take on the order of 10^9 years (a snippet after this list shows how fast that search space grows).
  13. So a matter of complexity? Fair enough. Thanks for answering so clearly - I ask this question a lot, not just here, and rarely get such a clear answer.

      Not any closer? There are some in the community who believe that current DNNs will be enough - that it's just a matter of having a large enough network and a suitable training regime. Yann LeCun, the guy who invented CNNs, is probably the most famous. Then there are many who believe that symbolic representations need to be engineered directly into AI systems; Gary Marcus is probably the biggest advocate for this. Here's a 2 hour debate between them:

      There are a number of neuroscientists using AI as a model of the brain. There are some interesting papers arguing that what some networks are doing is at least correlated with activity in certain visual centres of the brain - this interview with a neuroscientist details some of that research, around 30 minutes in, although the whole interview might be of interest to you:

      An interesting decision by Tesla was to use vision-only inputs, as opposed to competitors who use multi-modal inputs and combine visual with lidar and other data. Tesla did this because their series of networks was getting confused when the data streams gave apparently contradictory inputs - analogous to how humans get dizzy when the inner ear says one thing about motion and the eyes another. Things like that make me believe that current architectures are capturing some facets of whatever is going on in the brain, even if they're still missing a lot, so I think they do bring us closer.
  14. If you're going to ask someone to guess when fusion will become a reality, you'd still give more credence to the guesses of engineers and physicists than to some random people on the internet, wouldn't you?
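A minimal sketch of the temperature sampling described in post 1, assuming NumPy and a made-up three-token vocabulary; ChatGPT's actual decoding code isn't public, so this only illustrates the mechanism:

```python
import numpy as np

def sample_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Sample one token id from the model's scores over its vocabulary.

    temperature < 1 sharpens the distribution (more deterministic);
    temperature > 1 flattens it, so less likely tokens in the tails
    get picked more often.
    """
    scaled = logits / temperature
    scaled = scaled - scaled.max()                  # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()   # softmax
    return int(np.random.choice(len(probs), p=probs))

# Repeating the same "question" (same logits) gives different answers,
# because each call draws an independent sample.
logits = np.array([2.0, 1.0, 0.1])   # hypothetical scores for 3 tokens
print([sample_token(logits, temperature=1.5) for _ in range(5)])
```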
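Checking the percentages quoted in post 2. The LHC cost and world GDP figures are the post's own; the EU GDP of roughly $15 trillion is my assumption:

```python
lhc_annual_cost = 4.5e9   # USD/year, figure from the post
world_gdp = 85e12         # USD/year, figure from the post
eu_gdp = 15e12            # USD/year, assumed rough value for the EU

print(f"{lhc_annual_cost / world_gdp:.4%} of world GDP")   # 0.0053%
print(f"{lhc_annual_cost / eu_gdp:.3%} of EU GDP")         # 0.030%
```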
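A toy illustration of the tokens mentioned in post 4, assuming a tiny hand-made vocabulary. Real models learn subword vocabularies (BPE, SentencePiece) with tens of thousands of pieces over billions of words; this only shows why tokens are "not quite the same as words":

```python
# Hypothetical mini-vocabulary of known pieces.
VOCAB = {"the", "model", "reads", "token", "ization", "sub", "words"}

def toy_tokenize(word: str) -> list[str]:
    """Greedily split a word into the longest pieces found in VOCAB."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):   # try the longest match first
            if word[i:j] in VOCAB:
                pieces.append(word[i:j])
                i = j
                break
        else:                               # no match: emit one character
            pieces.append(word[i])
            i += 1
    return pieces

print([toy_tokenize(w) for w in "the model reads tokenization".split()])
# [['the'], ['model'], ['reads'], ['token', 'ization']]
```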
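A toy, string-based analogue of the assembly index from post 9: the minimum number of join operations needed to build a target when previously built pieces can be reused. The real theory is defined over molecular bonds and measured with mass spectrometry; this memoised brute force only gives an upper bound on short strings, but it shows why reuse-rich objects score lower than random ones of the same size:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def joins_upper_bound(s: str) -> int:
    """Upper bound on a string 'assembly index': single characters are
    free, and each step joins two already-assembled pieces. Splitting
    into two parts overcounts shared sub-pieces, hence 'upper bound'."""
    if len(s) == 1:
        return 0
    best = min(joins_upper_bound(s[:k]) + joins_upper_bound(s[k:]) + 1
               for k in range(1, len(s)))
    half = len(s) // 2
    if len(s) % 2 == 0 and s[:half] == s[half:]:
        # s doubles an already-built half: build it once, join it to itself.
        best = min(best, joins_upper_bound(s[:half]) + 1)
    return best

print(joins_upper_bound("ABABABAB"))  # 3: "AB" and "ABAB" get reused
print(joins_upper_bound("AQZXRTYW"))  # 7: nothing repeats, every join is new
```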
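A quick look at why the search in post 12 blows up. IIT-style measures compare a system against ways of cutting it apart; even counting only two-way cuts, a system of n elements has 2^(n-1) - 1 of them, which is hopeless long before n = 300:

```python
def bipartitions(n: int) -> int:
    """Number of ways to cut a set of n elements into two non-empty parts."""
    return 2 ** (n - 1) - 1

for n in (10, 50, 300):
    print(f"n = {n:>3}: {bipartitions(n):.3e} two-way cuts")

# At a billion cuts evaluated per second, n = 300 would still take
# ~1e73 years just to enumerate the cuts, never mind evaluating them.
```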
