
Prometheus

Senior Members
  • Posts: 1881
  • Joined
  • Last visited
  • Days Won: 17

Everything posted by Prometheus

  1. Dunno, but the PI of that Nature paper is very active on Twitter: he came up with the idea and would probably answer your question.
  2. Assembly theory posits that complex molecules found in large abundance are (almost surely) universal biosignatures. From their publication: https://www.nature.com/articles/s41467-021-23258-x At the moment it only has a proof of concept with mass spectrometry, but it's a general theory of complexity, so it could work with other forms of spectroscopy. An interesting direction anyway.
  3. It was unknown whether the plants would germinate at all - the fact they did tells us that regolith did not interfere with the hormones necessary for this process. The plant they chose was the first one to have its genome sequenced, allowing them to look into the transcriptome to identify epigenetic changes due to the regolith, particularly what stress responses were triggered. They also compared regolith from 3 different lunar sites, allowing them to identify differences in morphology, transcriptomes etc between sites. Full paper here: https://www.nature.com/articles/s42003-022-03334-8
  4. Sounds like you're describing panpsychism. There's a philosopher called Philip Goff who articulates this view quite well.
  5. Some people have tried to develop methods of measuring consciousness in the most general sense. I think the most developed idea is integrated information theory, put forward by the neuroscientist Giulio Tononi in 2004. It measures how integrated the various subsystems of a whole are. Even if you accept this as a reasonable measure, actually applying the test requires evaluating every possible partition of the system's connectivity, so 'measuring' the consciousness of a worm with ~300 neurons would currently take on the order of 10^9 years.
  6. So a matter of complexity? Fair enough. Thanks for answering so clearly - I ask this question a lot, not just here, and rarely get such a clear answer. Not any closer? There are some in the community who believe that current DNNs will be enough - it's just a matter of having a large enough network and a suitable training regime. Yann LeCun, the guy who invented CNNs, is probably the most famous. Then there are many who believe that symbolic representations need to be engineered directly into AI systems; Gary Marcus is probably the biggest advocate for this. Here's a 2-hour debate between them: There are a number of neuroscientists using AI as a model of the brain. There are some interesting papers arguing that what some networks are doing is at least correlated with certain visual centres of the brain - this interview with a neuroscientist details some of that research, around 30 mins in, although the whole interview might be of interest to you: An interesting decision by Tesla was to use vision-only inputs - as opposed to competitors who use multi-modal inputs and combine visual with lidar and other data. Tesla did this because their series of networks was getting confused when the data streams gave apparently contradictory inputs - analogous to humans getting dizzy when the inner ear tells them one thing about motion and the eyes another. Things like that make me believe that current architectures are capturing some facets of whatever is going on in the brain, even if they're still missing a lot, so I think they do bring us closer.
  7. If you're going to ask someone to guess when fusion is going to be a reality, you'd still give more credence to engineers' and physicists' guesses than to some random people on the internet, wouldn't you?
  8. This survey of researchers in the field gives a 50% chance of human-level intelligence within ~40 years. It's probably the most robust estimate we're going to get.
  9. That'll explain why I didn't follow then.
  10. Not sure I follow with the DNA. We could still regard it as searching a solution space on geological timescales, optimising for replicability. What gives DNA its model - surely it is only shaped by natural selection? DNNs is a bit general: a CNN couldn't be, but a reinforcement learning agent could. All it needs is to be motivated in some sense to explore the input space and have some metric of its impact on the solution space. 1.) We have Generative Adversarial Networks, where two networks compete - one generating candidates, the other judging them - and improve each other in the process. There are also agents whose learning is almost entirely with other AI - MuZero learned chess and other games largely by playing against copies of itself. 2.) There are a few ways AI currently learn; one is supervised learning, which requires a human to label some data - pointing to a picture and saying 'cat' (maybe 3000 times, but it's the same idea). 3.) There are genetic algorithms, but I don't think that's what you mean. I don't see what, in principle, will stop us designing a robot able to build perhaps imperfect replicas of itself. Once it can, it is subject to evolution. Whether that's a desirable feature is another question. But here we're concerned with what a computer can and can't do; it doesn't necessarily need to replicate the functions of a human brain, just the behaviours. I'm interested: many people here seem to believe that there is nothing immaterial to the brain, no ghost in the machine, but still maintain that something other than a biological entity cannot be conscious. That seems to imply that substrate matters, that only biological substrates can manifest consciousness. If I'm interpreting that correctly, what is unique to the arrangement of atoms in the human that prevents consciousness manifesting in other, inorganic, arrangements?
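
On point 3: a genetic algorithm really is just selection plus mutation over a population. Here's a toy sketch in Python - the target bitstring, population size and mutation rate are all made up for illustration:

```python
import random

random.seed(0)

TARGET = [1] * 20  # hypothetical 'ideal genome' the population evolves towards

def fitness(genome):
    # count positions matching the target
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # flip each bit independently with a small probability
    return [1 - g if random.random() < rate else g for g in genome]

# random starting population of 30 genomes
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # selection: keep the fittest half, refill with mutated copies of survivors
    survivors = population[: len(population) // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

best = max(population, key=fitness)
print(generation, fitness(best))
```

No one tells the population what a 'good' genome looks like bit by bit; the selection pressure alone drags it there, which is the sense in which a self-replicating robot would be subject to evolution.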
  11. The impression I get from the neuro and computer science communities is that people think it is a computation because it's the only viable naturalistic/materialistic explanation on the table. Kind of like how abiogenesis is the only game in town regarding the origin of life - without supernatural appeals, what else could it be? That said, there are some who won't bet on it: And Penrose offers a quantum alternative:
  12. I thought that's what you meant when you mentioned the output space. Labelling data is basically a way for humans to tell a model what something is, i.e. labelling a cat picture as 'cat'. If we are saying that neural networks can approximate any model, then all we need is a way for the network to search that model space. Having it human-directed is one (and perhaps the preferable) way, but you could have some kind of search and selection process - i.e. perturb the model every now and then, compare results to reality, and prefer models that minimise this distance. Some networks already do something like this, having a 'curiosity' or 'exploration' parameter that controls how often the network should try something new. One network trained to play Duke Nukem ended up addicted to an in-game TV - the TV was giving a constant stream of new data and its curiosity parameter was jacked up to max.
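
That 'curiosity' knob is essentially the exploration rate in reinforcement learning. A toy epsilon-greedy bandit in Python (the two arms and their payoffs are invented for illustration) shows both why you need some exploration and why jacking it up to max backfires:

```python
import random

random.seed(1)

def pull(arm):
    # hypothetical two-armed bandit: arm 1 pays ~1.0 on average, arm 0 only ~0.5
    return random.gauss(1.0 if arm == 1 else 0.5, 0.1)

def run(epsilon, steps=2000):
    values, counts = [0.0, 0.0], [0, 0]
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(2)  # curiosity: try something at random
        else:
            arm = 0 if values[0] >= values[1] else 1  # exploit current best guess
        reward = pull(arm)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # running average
        total += reward
    return total / steps

print(round(run(0.1), 2))  # modest curiosity: finds the better arm, then exploits it
print(round(run(1.0), 2))  # curiosity jacked up to max: explores forever, like the TV addict
```

With epsilon at 1.0 the agent never settles on what it has learnt - the constant stream of novelty is the whole policy, which is roughly what happened with the in-game TV.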
  13. Learning to code is learning to troubleshoot. I suggest you go through each line of code separately and see whether it's doing what you think it should. I don't use R, but I can see two places this snippet of code fails.
  14. You don't need a package if you can specify the function yourself, which you did in the OP. Check out the link I embedded above - you just need to respecify the function.
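
Since I don't use R, here's the same idea sketched in Python instead - the data points and the exponential model are made up for illustration. Once you can write the function yourself, even a naive least-squares grid search does the fitting; dedicated packages just search the parameter space more cleverly:

```python
import math

# hypothetical data roughly following y = a * exp(-b * x) with a = 2, b = 0.5
xs = [0, 1, 2, 3, 4, 5]
ys = [2.0, 1.21, 0.74, 0.45, 0.27, 0.16]

def model(x, a, b):
    # the user-specified function - no fitting package required
    return a * math.exp(-b * x)

def sse(a, b):
    # sum of squared errors between model and data
    return sum((model(x, a, b) - y) ** 2 for x, y in zip(xs, ys))

# brute-force search over a coarse parameter grid
best = min(
    ((a / 100, b / 100) for a in range(1, 400) for b in range(1, 200)),
    key=lambda p: sse(*p),
)
print(best)
```

The recovered parameters land close to the (a = 2, b = 0.5) used to generate the data; in R you'd get the same thing with less brute force by handing your function to a nonlinear fitter.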
  15. There isn't a single agreed-upon definition of religion - it's a fuzzy and contested concept. Hence, everyone here could define religion in such a way that we are all right. My problem with most definitions of religion, including the one that dominates here - belief in the supernatural (usually a god) - is that it is a very Western-centric perspective with roots in late 19th-century anthropologists like E. B. Tylor. Our modern concept of religion is only about two centuries old, and was formed by Protestants who saw god in every religion they studied - because when all you have is a hammer... Whether Marxism is a religion - I don't know nearly enough about it, but I'd be surprised if there weren't some definitions of religion that included it.
  16. But it's more universal than that - those spaces need not be defined by a human. They could be defined by a selection process, similar to how human ingenuity was shaped by natural selection. Already we have self-supervised models, where the agent labels data itself. Whether that is a desirable thing is another question. Then it would have made the same mistake humans did for thousands of years.
  17. But what is correct? Especially in the context of language models. Fill in the blank here: Red Bull gives you _____. If you said wings, as GPT-3 did, you'd be wrong according to a recent benchmark paper, which argues that models that are not factually correct are simply wrong. But this ignores a large function of language. Diabetes might be more factually correct, but wings would also be appropriate because it's funny. This is particularly pertinent if we want AI to be creative. I'm not sure that's true, given the universal approximation theorem for neural networks, which, if I understand it correctly, states that a neural network can (at least in theory) approximate any model.
  18. Ukraine claims it was hit by a Neptune missile, one of their own latest developments. Ukraine has considerable expertise in rocketry, so it wouldn't be a surprise.
  19. I've heard it expressed as: AI interpolates, humans extrapolate. There's a well-known paper in the field, On the Measure of Intelligence, that explores this idea quite thoroughly; it stresses that intelligence should be measured by performance on a broad class of problems the agent has not seen before.
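
The interpolate/extrapolate distinction is easy to demonstrate with a toy model - hypothetical data from y = x^2, sampled only on [0, 3] and fitted with a straight line:

```python
# hypothetical data from y = x^2, observed only between x = 0 and x = 3
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [x * x for x in xs]

# ordinary least-squares line y = m*x + c (closed-form solution)
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
c = mean_y - m * mean_x

def predict(x):
    return m * x + c

# interpolation: inside the training range the error stays modest
print(abs(predict(1.5) - 1.5 ** 2))
# extrapolation: outside it the error explodes
print(abs(predict(10.0) - 10.0 ** 2))
```

Inside the observed range the line is a passable stand-in for the curve; at x = 10 it is wildly wrong. Larger models blur this boundary, but the paper's point is that generalising beyond the training distribution is the hard part worth measuring.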
  20. If you define religious belief as irrational, of course it will be irrational - but that's just a tautology. The point is that if some religious followers, even a minority, hold beliefs consistent with science, then faith in the supernatural is not a defining feature. Therefore it can be rational for an atheist to believe in a religion - depending on the religious tenets they believe.
  21. ID isn't a feature of most religions. As I mentioned above, some religions don't have a creation myth, and some explicitly refuse to answer cosmological questions such as this. Even something like karma, which literally means action, or more generally the consequences of action, can easily be understood in a naturalist framework - recourse to the supernatural is not a defining feature of the belief, even if it is a common one.
  22. I started to focus on creativity because of words like imaginative, intuition and experience in that passage. I think this is part of the problem - I know what you mean (or at least I think I do - shall we call it the human spark?), but when it comes to defining it precisely enough that it can be measured in a lab, it's virtually (ha) impossible. Which is why I think the goalposts will be continually moved as AI pushes boundaries. Not sure - prohibit seems too austere; shall we say limit? If we think of creativity as a multi-faceted thing, then it's possible that different AIs could be more creative than humans in some ways and less creative in others. In terms of preset goals, we have mesa-optimisers, in which the objective function itself is optimised. This has some AI researchers worried, because they believe it means an agent could develop its own goals distinct from what a human originally intended. There's also instrumental convergence (or maybe the above is an example of this), in which 'soft' goals are learnt as a way of optimising a 'hard' goal: things like self-preservation are likely to emerge as soft goals because, regardless of what an agent is trying to do, existing helps a great deal.
  23. I don't think she was doing any more than eyeballing the graph. It's maybe easiest to see at x=0: y seems to be about 1 at its max, and likewise when y=0, x is about 1. Depends how much experience you have in R. Try following this example. Wolfram Alpha might be better if easy is all you want.
  24. What, in principle, do you believe will prohibit creative AI? So far, larger neural networks trained on more data haven't hit a plateau in performance, which leads some to believe that sufficiently large networks will achieve human-level ability. That seems consistent with the universal approximation theorem - assuming creativity is ultimately a computation. My personal guess, based on absolutely nothing, is that gradient-based methods won't achieve it, and that some kind of evolutionary update will be required. I suspect we will continue to redefine creativity to mean whatever humans can do that AI can't. There are some people who would argue that AlphaGo is creative - apparently people who play Go describe the moves it has created as creative.
  25. That doesn't follow from the definition you provided. "generally relates... to supernatural... however, there is no scholarly consensus." "Traditionally, faith, in addition to reason, has been considered a source of religious beliefs." There are more than enough caveats in your provided definition of religion to have a religion that is entirely consistent with science.