Everything posted by Prometheus

  1. This survey of researchers in the field gives a 50% chance of human level intelligence in ~40 years. It's probably the most robust estimate we're going to get.
  2. That'll explain why i didn't follow then.
  3. Not sure i follow with the DNA. We could still regard it as searching a solution space on geological timescales, optimising for replicability. What gives DNA its model - surely it is only shaped by natural selection? DNNs is a bit general - a CNN couldn't, but a reinforcement learning agent could. All it needs is to be motivated in some sense to explore the input space and have some metric of its impact on the solution space. 1.) We have Generative Adversarial Networks, where two or more networks compete to improve some process, usually classification. There are also agents whose learning is nearly entirely with other AI - MuZero learnt to play chess and other games largely through self-play against other instances of itself. 2.) There are a few ways AI currently learn; one is supervised learning, which requires a human to label some data - pointing to a picture and saying cat (maybe 3000 times, but it's the same idea). 3.) There are genetic algorithms, but i don't think that's what you mean. I don't see what, in principle, will stop us designing a robot able to build, perhaps imperfect, replicas of itself. Once it can, it is subject to evolution. Whether that's a desirable feature is another question. But here we're concerned with what a computer can and can't do; it doesn't necessarily need to replicate the functions of a human brain, just the behaviours. I'm interested: many people here seem to believe that there is nothing immaterial to the brain, no ghost in the machine, but still maintain that something other than a biological entity cannot be conscious. It seems to imply that substrate matters, that only biological substrates can manifest consciousness. If i'm interpreting that correctly, what is unique to the arrangement of atoms in the human that prevents it manifesting in other, inorganic, arrangements?
  4. The impression i get from the neuro and computer science community is that people think it is a computation because it's the only viable naturalistic/materialistic explanation on the table. Kind of like how abiogenesis is the only game in town regarding the origin of life - without supernatural appeals what else could it be? That said there are some who won't bet on it: And Penrose offers a quantum alternative:
  5. I thought that's what you meant when you mentioned the output space. Labelling data is basically a way of humans telling a model what something is, i.e. labelling a cat picture as cat. If we are saying that neural networks can approximate any model, then all we need to do is have a way for the network to search that model space. Having it human directed is one (and perhaps the preferable) way, but you could have some kind of search and selection process - i.e. perturb the model every now and then, compare results to reality, and prefer models that minimise this distance. Some networks already do something like this - having a 'curiosity' or 'exploration' parameter that controls how often the network should try something new. One network that was trained to play Duke Nukem ended up addicted to an in-game TV - the TV was giving a constant stream of new data and its curiosity parameter was jacked up to max.
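A minimal sketch of that perturb-compare-prefer loop, in Python. Everything here is a hypothetical stand-in: the 'model' is just a line y = a*x + b, the data is synthetic, and the exploration parameter is a crude analogue of a curiosity setting, not any particular library's API.

```python
import random

def fitness(params, data):
    # Distance between the model's predictions and reality; the "model"
    # here is just a line y = a*x + b scored against observed (x, y) pairs.
    a, b = params
    return sum((a * x + b - y) ** 2 for x, y in data)

def search(data, steps=2000, explore=0.1, seed=0):
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    best_err = fitness(best, data)
    for _ in range(steps):
        # Perturb the current model; occasionally take a big exploratory
        # jump (a crude stand-in for a 'curiosity' parameter).
        scale = 1.0 if rng.random() < explore else 0.05
        cand = [p + rng.gauss(0, scale) for p in best]
        err = fitness(cand, data)
        if err < best_err:  # prefer models that minimise distance to reality
            best, best_err = cand, err
    return best, best_err

data = [(x, 2 * x + 1) for x in range(10)]  # 'reality': y = 2x + 1
params, err = search(data)
```

No gradients, no labels from a human - just perturbation and selection against reality, which is the point being made above.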
  6. Learning to code is learning to troubleshoot. I suggest you go through each line of code separately and see if they're doing what you think they should do. I don't use R, but i can see 2 places this snippet of code fails.
  7. You don't need a package if you can specify the function yourself, which you did in the OP. Check out the link i embedded above - you just need to respecify the function.
  8. There isn't a single agreed upon definition of religion - it's a fuzzy and contested concept. Hence, everyone here could define religion in such a way that we are all right. My problem with most definitions of religion, and the one that dominates here regarding belief in the supernatural (usually god), is that it is a very Western centric perspective with roots in late 19th century anthropologists like E.B. Tylor. Our modern concept of religion is only about 2 centuries old, and has been formed by Protestants who saw god in every religion they studied - because when all you have is a hammer... Whether Marxism is a religion - i don't know nearly enough about it, but i'd be surprised if there weren't some definitions of religion that included it.
  9. But it's more universal than that - those spaces need not be defined by a human. They could be defined by a selection process, similar to how human ingenuity was shaped by natural selection. Already we have self-supervised models - where the agent labels data itself. Whether that is a desirable thing is another question. Then it would have made the same mistake humans did for thousands of years.
  10. But what is correct? Especially in the context of language models. Fill in the blank here: Red bull gives you _____. If you said wings, as did GPT-3, you'd be wrong according to a recent benchmark paper, which argues that models that are not factually correct are simply wrong. But this ignores a large function of language. Diabetes might be more factually correct, but wings would also be appropriate because it's funny. This is particularly pertinent if we want AI to be creative. I'm not sure that's true given the universal approximation theorem of neural networks, which, if i understand it correctly, states that a neural network can (at least theoretically) approximate any model.
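The universal approximation theorem has a constructive flavour that's easy to see in a few lines of Python: two sigmoid units already make a 'bump', and sums of scaled, shifted bumps can get arbitrarily close to any continuous function on an interval. The interval [0.3, 0.7] and steepness k below are arbitrary choices for illustration, not anything from the theorem itself.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bump(x, left=0.3, right=0.7, k=100.0):
    """A one-hidden-layer 'network' with two sigmoid units.

    The difference of two steep sigmoids is approximately 1 on
    [left, right] and approximately 0 elsewhere. Summing scaled,
    shifted bumps like this can approximate any continuous function
    on an interval - the intuition behind universal approximation.
    """
    return sigmoid(k * (x - left)) - sigmoid(k * (x - right))
```

So `bump(0.5)` is essentially 1 while `bump(0.0)` and `bump(1.0)` are essentially 0 - a tiny network already carving out an arbitrary piece of the input space.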
  11. Ukraine claims it was hit by a Neptune missile, which is one of their own latest developments. Ukraine has quite the expertise in rocketry, so it wouldn't be a surprise.
  12. I've heard it expressed as: AI interpolates, humans extrapolate. There's a well known paper in the field, On the measure of intelligence, that explores this idea quite thoroughly and stresses that intelligence should be measured by a broad class of problem solving, on problems agents have not seen before.
  13. If you define religious belief as irrational, of course it will be irrational - but that's just a tautology. The point is, if there are some religious followers, even a minority, consistent with science, then faith in the supernatural is not a defining feature. Therefore it can be rational for an atheist to believe in a religion - depending upon the religious tenets they believe.
  14. ID isn't a feature of most religions. Like I mentioned above, some religions don't have a creation myth, and some explicitly refuse to answer cosmological questions such as this. Even something like karma, which literally means action or more generally consequences of action, can be easily understood in a naturalist framework - recourse to the supernatural is not a defining feature of the belief, even if it is a common one.
  15. I started to focus on creativity because of words like imaginative, intuition, experience in that passage. I think this is part of the problem - i know what you mean (or at least i think i do - shall we call it the human spark?), but when it comes to defining it precisely enough so it can be measured in a lab it's virtually (ha) impossible. Which is why i think goalposts will be continually moved as AI pushes boundaries. Not sure, prohibit seems too austere, shall we say limit? If we think of creativity as a multi-faceted thing, then it's possible that different AIs could be more creative than humans in some ways and less creative in other ways. In terms of preset goals, we have mesa-optimisers, in which the objective function itself is optimised. It has some AI researchers worried, because they believe it means an agent could develop its own goals distinct from what a human originally intended. There's also instrumental convergence (or maybe the above is also an example of this) in which 'soft' goals are learnt as a way of optimising a 'hard' goal: things like self-preservation are likely to emerge as soft goals because regardless of what an agent is trying to do, existing helps a great deal.
  16. I don't think she was doing any more than eyeballing the graph. It's maybe easiest to see at x=0, where y seems to be about 1 at its max; likewise when y=0, x is about 1. Depends how much experience you have in R. Try following this example. Wolfram alpha might be better if easy is all you want.
  17. What, in principle, do you believe will prohibit creative AI? So far, larger neural networks trained on more data haven't hit a plateau in performance, which leads some to believe that sufficiently large networks will achieve human level ability. That seems consistent with the universal approximation theorem - assuming creativity is ultimately a computation. My personal guess, based on absolutely nothing, is that gradient based methods won't achieve it, and that some kind of evolutionary update will be required. I suspect we will continue to redefine creativity to mean whatever humans can do that AI can't. There are some people who would argue that AlphaGo is creative - apparently people who play Go describe the moves it has created as creative.
  18. That doesn't follow from the definition you provided. "generally relates... to supernatural... however, there is no scholarly consensus." "Traditionally, faith, in addition to reason, has been considered a source of religious beliefs." There are more than enough caveats in your provided definition of religion to have a religion that is entirely consistent with science.
  19. Well, although they aren't the norm, there are probabilistic frameworks for deep learning if it is desirable. The topic, as i understand it, is what computers can't, and will never be able to, do. If we assume that there isn't anything supernatural in our wetware, surely it is only a matter of time before computers can at least recreate our creativity?
  20. You want a self-driving car that will do different things given the same inputs?
  21. Maybe a thread on the definition of religion would be helpful. Yes, cannot rationally concern itself. Rationality isn't the only trait we want as humans though, is it? But i think you are painting all religions based on the attitude of one religion. For instance Buddhism, Taoism and Bahá'í don't even have creation myths in their core teachings (although some communities have adopted various cultural myths), so how does the 'biased god of the gaps reasoning' fit them?
  22. In what sense is AlphaGo probabilistic? It searches a probability distribution, but given the same weight initialisation and identical inputs (however unlikely identical inputs are in practice) it will give identical outputs. One shot learning architectures are being deployed, so that metric is falling fast, at least for image and text classification. But is it relevant to your OP - as long as an agent can make 'high-level insights', does it matter that the learning regime is not like that for humans?
  23. I think Deep Blue had some kind of combinatorial tree search when it beat Kasparov back in the 90s, but it's not quite true of Alpha Go which beat Sedol at Go. Apparently Go has 10^172 possible positions - far too much for a full search. Instead certain branches are selected by a neural network - analogous to how a human might just work on a few likely looking branches in their head before making a move. This blog explains it pretty well.
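To illustrate why that selective expansion matters, here's a toy sketch in Python. This is not AlphaGo's actual algorithm (the real system uses Monte Carlo tree search guided by trained policy and value networks); the 'policy' below is a made-up heuristic, and the point is only the arithmetic: expanding the top-k branches at each level visits k^depth leaf positions instead of (legal moves)^depth.

```python
import heapq

def policy_score(move):
    # Stand-in for a policy network: a fixed heuristic instead of a
    # trained model (hypothetical - real systems learn this ranking).
    return -abs(move - 5)

def pruned_search(moves, depth, top_k=3):
    """Count leaf positions reached when only the top_k highest-scoring
    branches are expanded at each level of the game tree."""
    if depth == 0:
        return 1
    ranked = heapq.nlargest(top_k, moves, key=policy_score)
    return sum(pruned_search(moves, depth - 1, top_k) for _ in ranked)

moves = list(range(10))                  # 10 legal moves per position
visited = pruned_search(moves, depth=4)  # 3**4 = 81 leaf positions
full = len(moves) ** 4                   # full search: 10**4 = 10000
```

Scale the branching factor up to Go's and a full search is hopeless, while a network that reliably picks a handful of promising branches keeps the tree tractable - much like a human only reading out a few likely looking lines.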
  24. Belief in the supernatural is certainly not a necessary condition for a religion. It might be common, but it's not necessary. What supernatural things is it necessary for Taoists to believe? I've come across people who don't believe in god in the same way some people do believe in god: that is, they don't really care, haven't thought about it at all, but are happy to go along with whatever the prevailing thought is in their culture. We might quibble and say it's not really a belief then, but when the census comes round, there's only room to tick one box. Belief is not a binary thing; humans are more complicated than that.
  25. I agree to an extent. I didn't mean to imply that rationality has no value. Even in the context of religion i think it has great value - there is a certain class of problems, often involving the physical world, with which religion simply need not concern itself. Science has it covered. In terms of rationality for communication - yes, but i don't think there is much that is rational in our works of art, poetry and stories. Not to say rationality isn't an important feature of many forms of communication; i just don't think it is, or should be, the whole story.