Prometheus

Posts posted by Prometheus

  1. On 4/17/2022 at 12:21 PM, Genady said:

    DNN can approximate any function in a model, but the model is given to it.

    What I mean by coming up with a new model, in the astrology example for instance, is: would DNN come up with considering, instead of astrological data, parameters like education of a person, social background, family psychological profile, how much they travel, what do they do, how big is their town, is it rural or industrial, conservative or liberal, etc. etc. ?

    Not sure I follow with the DNA. We could still regard it as searching a solution space on geological timescales, optimising for replicability. What gives DNA its model? Surely it is only shaped by natural selection.

    DNN is a bit general: a CNN couldn't do this, but a reinforcement learning agent could. All it needs is to be motivated in some sense to explore the input space, and to have some metric of its impact on the solution space.

     

    On 4/17/2022 at 12:23 PM, studiot said:

    Some further rambling thoughts comparing development of AI/Computers and Humans.

    1. From earliest times humans developed the concept and implementation of teamwork. That is how they would have taken down a woolly mammoth, for instance.
      Obviously that required lots of humans.
      On the other hand computers are not (as far as I know) developed in the environment of lots of cooperating computers/AIs.
       
    2. Humans learn and grow up under the guidance of older humans, thus using one way of passing on skills and knowledge.
      This might even have led to the concept of a guiding being and religion.
      Computers don't follow this route, so could a computer develop such a concept ?
       
    3. Humans have the advantage over other living beings of being able to participate in the evolution of a species by reproduction, passing on genes.
      Again, as far as I know, this is unavailable to computers.

    1.) We have Generative Adversarial Networks, where two networks compete to improve some process - typically the generation of realistic data. There are also agents whose learning is nearly entirely with other AI: MuZero learned chess and other games almost entirely through self-play against other AIs.

    2.) There are a few ways AI currently learn. One is supervised learning, which requires a human to label some data - pointing to a picture and saying 'cat' (maybe 3000 times, but it's the same idea; there's a toy sketch after these points).

    3.) There are genetic algorithms, but I don't think that's what you mean. I don't see what, in principle, will stop us designing a robot able to build, perhaps imperfect, replicas of itself. Once it can, it is subject to evolution. Whether that's a desirable feature is another question.
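    On the supervised learning point, here's a minimal sketch of what 'labelling data' amounts to in practice - a toy logistic regression trained on made-up 'cat vs not-cat' feature vectors. The data and labels are invented purely for illustration:

        import numpy as np

        # Toy "images" as 2-feature vectors; a human supplies the labels
        # (1 = cat, 0 = not cat) - that's the supervision.
        X = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
        y = np.array([1, 1, 0, 0])

        rng = np.random.default_rng(0)
        w, b = rng.normal(size=2), 0.0

        def sigmoid(z):
            return 1 / (1 + np.exp(-z))

        # Gradient descent on the cross-entropy loss.
        for _ in range(1000):
            p = sigmoid(X @ w + b)
            w -= 0.5 * X.T @ (p - y) / len(y)
            b -= 0.5 * np.mean(p - y)

        print(sigmoid(X @ w + b))  # probabilities close to the human-given labels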

     

    15 minutes ago, TheVat said:

     if we develop an AGI that replicates all the functions of a human brain...

    But here we're concerned with what a computer can and can't do: it doesn't necessarily need to replicate the functions of a human brain, just the behaviours.

    I'm interested: many people here seem to believe that there is nothing immaterial to the brain, no ghost in the machine, but still maintain that something other than a biological entity cannot be conscious. It seems to imply that substrate matters - that only biological substrates can manifest consciousness. If I'm interpreting that correctly, what is unique to the arrangement of atoms in the human that prevents consciousness manifesting in other, inorganic, arrangements?

  2. The impression I get from the neuro and computer science communities is that people think it is a computation because it's the only viable naturalistic/materialistic explanation on the table. Kind of like how abiogenesis is the only game in town regarding the origin of life - without supernatural appeals, what else could it be?

    That said, there are some who won't bet on it:

     

    And Penrose offers a quantum alternative:

     

  3. On 4/15/2022 at 11:00 AM, Genady said:

    I don't know what "labeling data" is and how it relates to coming up with a new model.

    I thought that's what you meant when you mentioned the output space. Labelling data is basically a way for humans to tell a model what something is, i.e. labelling a cat picture as 'cat'.

     

    On 4/15/2022 at 11:00 AM, Genady said:

    But will it come up with a new model?

    If we are saying that neural networks can approximate any model, then all we need is a way for the network to search that model space. Having it human-directed is one (and perhaps the preferable) way, but you could have some kind of search and selection process - i.e. perturb the model every now and then, compare results to reality, and prefer models that minimise the distance between them.
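    Here's a toy version of that perturb/compare/select loop, where 'reality' is a noisy quadratic and the 'model' is just three polynomial coefficients - the setup is entirely made up for illustration:

        import numpy as np

        rng = np.random.default_rng(1)
        x = np.linspace(-1, 1, 50)
        reality = 3 * x**2 + rng.normal(0, 0.05, size=x.shape)  # stand-in for observations

        def distance(coeffs):
            # How far the model's predictions are from reality.
            return np.mean((np.polyval(coeffs, x) - reality) ** 2)

        model = rng.normal(size=3)  # current "model": coefficients of a quadratic
        for _ in range(5000):
            candidate = model + rng.normal(0, 0.1, size=3)  # perturb every now and then
            if distance(candidate) < distance(model):       # prefer models closer to reality
                model = candidate

        print(model)  # drifts towards [3, 0, 0], i.e. 3x^2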

    Some networks already do something like this - having a 'curiosity' or 'exploration' parameter that controls how often the network should try something new. One network that was trained to play Duke Nukem ended up addicted to an in-game TV: the TV was giving a constant stream of new data and its curiosity parameter was jacked up to max.

  4. On 4/15/2022 at 1:29 PM, Dhamnekar Win,odd said:

    What is this? I don't get it.😕🤔😧

    Learning to code is learning to troubleshoot. I suggest you go through each line of code separately and see if it's doing what you think it should do. I don't use R, but I can see two places where this snippet of code fails.

  5. On 4/13/2022 at 4:29 PM, Genady said:

    This is so, if my understanding of what constitutes a religion is thrown away. What is a defining feature then? How do I know what is and what is not a religion? Can we apply that test to Marxism (because I know about it more than I ever wanted)?

    There isn't a single agreed-upon definition of religion - it's a fuzzy and contested concept. Hence, everyone here could define religion in such a way that we are all right.

    My problem with most definitions of religion, including the one that dominates here - belief in the supernatural (usually god) - is that it is a very Western-centric perspective with roots in late 19th-century anthropologists like E. B. Tylor. Our modern concept of religion is only about two centuries old, and has been formed by Protestants who saw god in every religion they studied - because when all you have is a hammer...

    Whether Marxism is a religion - I don't know nearly enough about it, but I'd be surprised if there weren't some definitions of religion that included it.

  6. 17 hours ago, Genady said:

    As I understand it, DNN can approximate any function in a given model, i.e. given input and output spaces. What these spaces are is up to the human.

    But it's more universal than that - those spaces need not be defined by a human. They could be defined by a selection process, similar to how human ingenuity was shaped by natural selection. Already we have self-supervised models, where the agent labels data itself. Whether that is a desirable thing is another question.
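    A toy example of that self-labelling idea - the inputs are windows of a time series and the 'labels' are simply the next value, derived from the data itself rather than from a human (the linear predictor is just a stand-in):

        import numpy as np

        series = np.sin(np.linspace(0, 10, 200))

        # Self-supervision: the "label" for each window is the next value.
        window = 5
        X = np.array([series[i:i + window] for i in range(len(series) - window)])
        y = series[window:]

        # Fit a linear predictor by least squares; no human labelled anything.
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(np.mean((X @ w - y) ** 2))  # tiny error: it learnt from its own labels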

    13 hours ago, Genady said:

    What will DNN do if the training data is astrological data of people as input and their life affairs as output? It can approximate any function in this model. But any function will be garbage anyway.

    Then it would have made the same mistake humans did for thousands of years.

  7. 3 hours ago, studiot said:

    But it needs to develop its response to be correct!

    Humans don't seem bothered by this difficulty.

    But what is correct? Especially in the context of language models.

    Fill in the blank here:

    Red Bull gives you _____.

    If you said wings, as GPT-3 did, you'd be wrong according to a recent benchmark paper, which argues that answers that are not factually correct are simply wrong.

    But this ignores a large function of language. 'Diabetes' might be more factually correct, but it would also be appropriate because it's funny. This is particularly pertinent if we want AI to be creative.

     

    21 minutes ago, Genady said:

    I try to narrow the original passage to one item: AI can't come up with new models, it can only optimize models created by humans.

    I'm not sure that's true, given the universal approximation theorem for neural networks - which, if I understand it correctly, states that a (sufficiently wide) neural network can approximate any continuous function, and so, at least in theory, any such model.
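    Not the theorem itself, but a toy illustration of its flavour - a single hidden layer of tanh units fit to sin(x) by hand-rolled gradient descent (the sizes and learning rate are picked arbitrarily; a wider layer gives a better fit):

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
        y = np.sin(x)

        H = 20  # one hidden layer - the setting of the classic theorem
        W1 = rng.normal(0, 1, (1, H)); b1 = np.zeros(H)
        W2 = rng.normal(0, 1, (H, 1)); b2 = np.zeros(1)

        lr = 0.05
        for _ in range(10000):
            h = np.tanh(x @ W1 + b1)
            pred = h @ W2 + b2
            err = pred - y
            # Backpropagation by hand.
            dh = (err @ W2.T) * (1 - h**2)
            W2 -= lr * h.T @ err / len(x); b2 -= lr * err.mean(0)
            W1 -= lr * x.T @ dh / len(x); b1 -= lr * dh.mean(0)

        print(np.max(np.abs(pred - y)))  # the fit improves with width and training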

  8. 1 hour ago, Genady said:

    I have a vague "Hypothesis 1" regarding the human intelligence's advantage compared to AI:

    AI discovers patterns in input data, while we discover patterns in our own thinking. IOW, the brain discovers patterns in its own activities. Patterns in the input data is a small subset of the latter. 

    I've heard it expressed as AI interpolates, humans extrapolate.

    There's a well-known paper in the field, On the Measure of Intelligence, that explores this idea quite thoroughly; it stresses that intelligence should be measured across a broad class of problems that agents have not seen before.

  9. 3 hours ago, beecee said:

    I see the highlighted bit by me as false, and while there may be a small minority that don't maintain a creation/ID myth, they still certainly maintain mythical overtones re communion with nature, transcendence, karma and other such concepts that invoke out-of-this-world types of experience.

    Again that doesn't sound like science to me.

    https://edge.oregonstate.edu/2017/08/23/the-science-of-karma/#:~:text=“In the Buddhist point of,and OSU's Contemplative Studies Initiative.

    Karma, however, is deeply personal. “In the Buddhist point of view karma is a psychological phenomenon. It happens because of the way the mind works. It’s not some general force that exists in the universe. It’s not the hand of God,” says John Edwards, director of CLA’s School of Psychological Science and OSU’s Contemplative Studies Initiative. “The basic idea is that your own behaviors and actions lead you to experience the world in a certain way.”


    Psychology of course is best described as a "soft science"

    Just out of interest, my biggest argument in this thread is the irrationality of the subject at hand, as described by the title... Is it rational (for an atheist) to believe in religion?

    Atheist = a person who does not believe in a creator/deity/ID/god

    Religion = The belief in a super duper omnipotent being/god, and the supernatural and the paranormal. https://en.wikipedia.org/wiki/Religion

    Rational = A belief based on reason, logic and evidence. 

    https://www.google.com/search?q=rational+foundations+of+religion&rlz=1C1RXQR_en-GBAU952AU952&oq=rational&aqs=chrome.0.69i59j69i57j0i67i131i433j0i67j0i67i131i433j69i60l3.2928j0j7&sourceid=chrome&ie=UTF-8

    "Rationalism holds that truth should be determined by reason and factual analysis, rather than faith, dogma, tradition or religious teaching"

    If you define religious belief as irrational, of course it will be irrational - but that's just a tautology.

    The point is that if there are some religious followers, even a minority, whose beliefs are consistent with science, then faith or supernatural belief is not a defining feature of religion. Therefore it can be rational for an atheist to believe in a religion - depending upon the religious tenets they hold.

  10. 13 hours ago, beecee said:

    I think the WIKI definition, covers all contingencies including my definition.

    On the second statement, all I can say is that again any thought of ID is unscientific, as are all supernatural and paranormal explanations. And I suspect all religions, when we get down to the nitty gritty, require some form of ID; if not, then mythical overtones re communion with nature, transcendence, karma and other such concepts that invoke out-of-this-world types of experience. That doesn't sound like science to me.

    ID isn't a feature of most religions. Like I mentioned above, some religions don't have a creation myth, and some explicitly refuse to answer cosmological questions such as this.

    Even something like karma, which literally means action or, more generally, the consequences of action, can easily be understood in a naturalist framework - recourse to the supernatural is not a defining feature of the belief, even if it is a common one.

  11. 17 hours ago, Genady said:

    It is not only "creativity", not even mostly about it. My doubts are about other human abilities, such as:

    I started to focus on creativity because of words like imaginative, intuition and experience in that passage. I think this is part of the problem - I know what you mean (or at least I think I do - shall we call it the human spark?), but when it comes to defining it precisely enough that it can be measured in a lab, it's virtually (ha) impossible. Which is why I think the goalposts will be continually moved as AI pushes boundaries.

     

    14 hours ago, studiot said:

    By 'prohibit' do you mean absolutely or just prevent some creativity ?

    I don't know of any bar to creativity per se, but observe that creativity is often driven by other factors than preset goals and can arise spontaneously as when a doctor diagnoses a previously unknown disease or condition.

    Not sure, prohibit seems too austere - shall we say limit? If we think of creativity as a multi-faceted thing, then it's possible that different AIs could be more creative than humans in some ways and less creative in others.

    In terms of preset goals, we have mesa-optimisers, where the trained model is itself an optimiser pursuing its own learned objective. This has some AI researchers worried, because they believe it means an agent could develop goals distinct from what a human originally intended.

    There's also instrumental convergence (or maybe the above is also an example of this), in which 'soft' goals are learnt as a way of optimising a 'hard' goal: things like self-preservation are likely to emerge as soft goals because, regardless of what an agent is trying to do, existing helps a great deal.

  12. 25 minutes ago, studiot said:

    You have been watching too much Terminator.

    What, in principle, do you believe will prohibit creative AI?

    2 hours ago, Genady said:

    The question is, are computers as we know them capable of that, or will we need different underlying principles?

    So far, larger neural networks trained on more data haven't hit a plateau in performance, which leads some to believe that sufficiently large networks will achieve human-level ability. That seems consistent with the universal approximation theorem - assuming creativity is ultimately a computation. My personal guess, based on absolutely nothing, is that gradient-based methods won't achieve it, and that some kind of evolutionary update will be required.
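    As a taster, here's roughly what an evolutionary update looks like, in the style of simple evolution strategies - no gradients, just fitness-weighted noise. The objective is a made-up stand-in for task performance:

        import numpy as np

        rng = np.random.default_rng(0)

        def fitness(theta):
            # Stand-in objective; in practice this would be task performance.
            return -np.sum((theta - 3.0) ** 2)

        theta = np.zeros(5)
        sigma, lr, pop = 0.1, 0.02, 50
        for _ in range(300):
            noise = rng.normal(size=(pop, 5))
            scores = np.array([fitness(theta + sigma * n) for n in noise])
            scores = (scores - scores.mean()) / (scores.std() + 1e-8)
            # No backprop: follow the fitness-weighted average of the perturbations.
            theta += lr / (pop * sigma) * noise.T @ scores

        print(theta)  # approaches [3, 3, 3, 3, 3]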

    I suspect we will continue to redefine creativity to mean whatever humans can do that AI can't. Some people would argue that AlphaGo is already creative - apparently people who play Go describe some of the moves it has produced that way.

  13. 2 hours ago, beecee said:

    In other words, unscientific.

    That doesn't follow from the definition you provided.

    "generally relates... to supernatural... however, there is no scholarly consensus."

    "Traditionally, faith, in addition to reason, has been considered a source of religious beliefs."

    There are more than enough caveats in your provided definition of religion to have a religion that is entirely consistent with science.

     

  14. Well, although they aren't the norm, there are probabilistic frameworks for deep learning if that's desirable.
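    For instance, Monte Carlo dropout: leave dropout switched on at test time and read the spread of repeated predictions as an uncertainty estimate. A sketch with made-up, untrained weights, just to show the mechanics:

        import numpy as np

        rng = np.random.default_rng(0)

        # A tiny fixed network; the weights are invented for illustration.
        W1 = rng.normal(0, 1, (1, 32)); W2 = rng.normal(0, 0.3, (32, 1))

        def forward(x, drop=0.5):
            h = np.tanh(x @ W1)
            mask = rng.random(h.shape) > drop   # dropout left ON at test time
            return (h * mask / (1 - drop)) @ W2

        x = np.array([[0.7]])
        samples = np.array([forward(x)[0, 0] for _ in range(200)])
        print(samples.mean(), samples.std())  # prediction and its uncertainty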

    The topic, as I understand it, is what computers can't, and will never be able to, do. If we assume that there isn't anything supernatural in our wetware, surely it is only a matter of time before computers can at least recreate our creativity?

  15. 16 hours ago, Genady said:

    Sorry, can't say anything about Taoism, don't know. But I know a lot about Marxism. Is it a religion?

    12 hours ago, beecee said:

    To be kind to my Taoists friends, at best then a myth?

    Maybe a thread on the definition of religion would be helpful.

     

    12 hours ago, beecee said:

    More to the point "cannot rationally concern itself" with. The smart religions though, (Catholicism) then reluctantly agree with concepts like evolution and the BB, with their own biased "god of the gaps" reasoning.

    Yes, cannot rationally concern itself. Rationality isn't the only trait we want as humans though, is it?

    But I think you are painting all religions based on the attitude of one religion. For instance, Buddhism, Taoism and Bahá'í don't even have creation myths in their core teachings (although some communities have adopted various cultural myths), so how does the 'biased god of the gaps reasoning' fit them?

  16. 9 hours ago, Genady said:

    Yes, a probabilistic classification function of DNN is reminiscent of strategic principles.

    In what sense is AlphaGo probabilistic? It outputs a probability distribution over moves, but once its weights are initialised, identical inputs (however unlikely in practice) will produce identical outputs.

    5 hours ago, Genady said:

    To train the DNN they "use[d] the RL policy network to play more than 30 million games." How many games does a human master play or study in their training?

    One-shot learning architectures are being deployed, so that metric is falling fast, at least for image and text classification. But is it relevant to your OP? As long as an agent can make 'high-level insights', does it matter that the learning regime is not like that of humans?
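    To give the flavour of one-shot classification: compare an embedding of the query to a single labelled example per class and pick the nearest. The 'embedding' here is just a stand-in for a pretrained network:

        import numpy as np

        def embed(x):
            # Stand-in for a pretrained embedding network.
            return x / np.linalg.norm(x)

        # One labelled example per class - that's the "one shot".
        support = {"cat": embed(np.array([0.9, 0.1, 0.2])),
                   "dog": embed(np.array([0.1, 0.8, 0.3]))}

        query = embed(np.array([0.85, 0.2, 0.1]))
        # Classify by nearest embedding (cosine similarity).
        print(max(support, key=lambda k: support[k] @ query))  # "cat"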

  17. 53 minutes ago, joigus said:

    Computers play only on the grounds of pure combinatorics. Grand Masters, on the contrary, although they have powerful combinatoric minds by human standards, at some point through the complexity of the game, they must base a significant part of their reasoning on strategic, conceptual principles rather than pure if-then sequences.

    I think Deep Blue used some kind of combinatorial tree search when it beat Kasparov back in the 90s, but that's not quite true of AlphaGo, which beat Lee Sedol at Go. Apparently Go has 10^172 possible positions - far too many for a full search. Instead, certain branches are selected by a neural network - analogous to how a human might only work through a few likely-looking branches in their head before making a move. This blog explains it pretty well.
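    Here's a toy sketch of that pruning idea - expand only the few moves a policy network rates as likely, instead of all of them. The game, values and policy are all random stand-ins; the point is just the reduced branching:

        import numpy as np

        rng = np.random.default_rng(0)

        def policy(moves):
            # Stand-in for a policy network: a distribution over legal moves.
            p = np.exp(rng.normal(size=len(moves)))
            return p / p.sum()

        def search(depth, branching=3):
            moves = list(range(10))      # pretend there are 10 legal moves
            if depth == 0:
                return rng.normal()      # stand-in for a value estimate
            probs = policy(moves)
            # Only explore the likeliest branches, like a human would.
            top = np.argsort(probs)[-branching:]
            return max(search(depth - 1) for _move in top)

        print(search(depth=3))  # examined 3^3 lines instead of 10^3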

  18. 1 hour ago, Genady said:

    It is a necessary but not a sufficient condition for a thing to be a religion. Another necessary condition is belief in the supernatural. Atheism doesn't have that.

    Belief in the supernatural is certainly not a necessary condition for a religion. It might be common, but it's not necessary. What supernatural things is it necessary for Taoists to believe?

     

    1 hour ago, iNow said:

    Saying that atheism is a belief system is like saying “not collecting stamps” is a hobby or “not playing golf” is a sport. It’s plainly silly and remedially false. 

    I've come across people who don't believe in god in the same way some people do believe in god: that is, they don't really care, haven't thought about it at all, but are happy to go along with whatever the prevailing thought is in their culture. We might quibble and say it's not really a belief then, but when the census comes round, there's only room for a tick in one box.

    Belief is not a binary thing; humans are more complicated than that.

  19. 21 minutes ago, Genady said:

    For communication. We are irrational inside our heads, but to communicate successfully we need a rational representation of our thoughts. E.g. you've asked the question and I'm trying my best to give a rational answer. If communication is not rational, it is broken. (There are examples of such in some recent threads...)

    I agree to an extent. I didn't mean to imply that rationality has no value. Even in the context of religion I think it has great value - there is a certain class of problems, often involving the physical world, with which religion simply need not concern itself. Science has it covered.

    In terms of rationality for communication - yes, but I don't think there is much that is rational in our works of art, poetry and stories. Not to say rationality isn't an important feature of many forms of communication; I just don't think it is, or should be, the whole story.

  20. 5 minutes ago, Genady said:

    I rather think that a task in a real life setting is a different kind of task to playing games.

    But in what way? The pertinent feature surely is that both the virtual space and the real lab present an agent with an objective and obstacles. We may in time also want agents that can formulate their own objectives within some constraints. In terms of creating an agent, there is no difference between the real and the virtual world except the complexity of the former compared to the latter. This is what I meant with regard to them being of the same kind.

    For instance, Tesla's self-driving agents are trained in large part in virtual worlds. This is particularly helpful for edge cases - cows in the middle of roads and other bonkers stuff that happens so rarely in the real world that the agent struggles to learn from real examples, but often enough that it needs to learn to deal with it.
