Can AI create models of the universe?


geordief

Models can be seen as edifices of the mind that attempt to predict how the world works.

 

This is a very skilled task.

 

Can it ever be farmed out to AI?

 

Are there any low hanging fruit that could be imagined for such a task?

 

Are there rudimentary models that could be given to an AI machine for it to play around with and potentially improve?

 

Is the idea totally preposterous?


  • 1 month later...

AI is intrinsically limited by the intelligence of its programmers, so I guess it's not possible for machines to "model" the unknown using algorithms based on mathematics and physicalism. However, AI can certainly extrapolate models (I prefer the term "theories") of the universe, but could probably not produce hard scientific evidence of their validity.

It's not strictly true that AI is limited by the intelligence of its creators. The best modern AI is designed to take inputs and be shown the desired outputs and then figure out how to get from the one to the other on its own. The people designing it don't necessarily, and in fact don't usually, understand how to get there themselves. That's actually the advantage of using artificial neural networks.
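That train-from-examples idea can be sketched in a few lines. This is a deliberately tiny, hypothetical illustration (a single neuron learning the AND function), not any real system's training loop: the program is shown four input/output pairs and finds the mapping by gradient descent, without anyone coding the AND rule itself.

```python
# Toy illustration of "show inputs and desired outputs, let the machine
# find the mapping": a single neuron learns the AND function by gradient
# descent. Nobody writes the AND rule; it emerges from the examples.
import math

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]  # weights, initially knowing nothing
b = 0.0         # bias
lr = 0.5        # learning rate

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))   # sigmoid squashes to (0, 1)

for _ in range(5000):
    for x, y in data:
        err = predict(x) - y        # how wrong were we?
        w[0] -= lr * err * x[0]     # nudge each parameter to be less wrong
        w[1] -= lr * err * x[1]
        b -= lr * err

print([round(predict(x)) for x, _ in data])  # -> [0, 0, 0, 1]
```

The designer only chose the architecture and the examples; the rule connecting input to output was found by the machine, which is the point being made above.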

 

Re: the topic at hand,

 

Apparently, Google's new translation AI spontaneously developed a "mental model" of semantics. Rather than simply encoding that cat <> gato, it recognizes that cat and gato have a shared meaning that is also shared by Katze. So as soon as it learns that chat <> Katze, it also knows that chat <> cat and chat <> gato.

 

It was not programmed to do this; rather, it was a happy accident arising from training the same AI to translate between a variety of different languages. Having a deeper semantic pseudo-language allowed it to translate between pairs of languages it had already "learned" individually, without having been taught to translate between those two languages specifically.
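The shared-representation effect can be illustrated with a toy sketch. The vectors here are hand-made for the example and bear no relation to Google's actual model: words from several languages live in one shared "semantic" space, and translation is just nearest-neighbour lookup, so a language pair never seen together in training still works.

```python
# Toy shared embedding space: words that mean the same thing sit near the
# same point, whatever their language. Translation = nearest neighbour in
# the target language, so fr->es works even though no fr->es pair is listed.
import math

embeddings = {  # (language, word) -> made-up 2-D "meaning" vector
    ("en", "cat"):   (0.90, 0.10),
    ("es", "gato"):  (0.88, 0.12),
    ("de", "Katze"): (0.91, 0.09),
    ("fr", "chat"):  (0.89, 0.11),
    ("en", "dog"):   (0.10, 0.90),
    ("es", "perro"): (0.12, 0.88),
}

def translate(word, src, dst):
    """Return the dst-language word whose vector is closest to word's."""
    v = embeddings[(src, word)]
    candidates = [(w, e) for (lang, w), e in embeddings.items() if lang == dst]
    return min(candidates, key=lambda we: math.dist(v, we[1]))[0]

print(translate("chat", "fr", "es"))  # -> gato, a zero-shot pairing
```

The real system learns such a space from data rather than having it written down, but the lookup intuition is the same.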

Also, AI Jeopardy-, chess- and Go-playing systems are better at Jeopardy, chess and Go than the best humans. I've heard that an AI cancer-diagnosing program is better at diagnosing cancer than the best physician. It is unclear what limits on building AI there are, if any.


Do you mean complex models or something simplified like what our brains provide us with?

It would be nice if the models were simple enough to be understood by the likes of you and me, but this seems unlikely, as the be-all and end-all would be whether or not they actually work.

 

Lean and mean models might seem inherently preferable to cumbersome models but beggars may not be able to choose...

 

As Delta implied, these models may have a life of their own quite independently of the original software creators.


Do you mean in full detail? No, because it would have to include itself and every particle in the universe. Not enough energy in the universe to do that with. But if you mean could we have a really good model of the laws of physics, we already have those. They can simulate the first few seconds of the big bang and the subsequent evolution of the universe. This is old hat.

 

Why do you think AI has anything to do with this? It's humans who build the models. It's important not to get new age-y about all this stuff. Strong AI has been a complete failure and has produced nothing since the idea gained currency in the 1960's. Weak AI of course plays chess and drives cars. Impressive but very specialized problem domains. And whose achievement is it? The computer's? Or the armies of designers and mathematicians and programmers who build the clever little gizmos? The first thing to know about AI is how to separate out the breathless hype from the reality.

Edited by wtf
A model in full detail would become the thing it was modelling so obviously not that.

 

<breathless>Can AI become independent of the creators of its software (create its own path)?

 

If its workings are too complex for the human mind to follow does that not show it can?

 

Can AI's internal processes even become self-referential? Can it develop a need to improve its own workings? (Going off at a bit of a tangent: what limits are there to what AI could achieve? Are there any? Where can the line be drawn? Can AI "cut the cord"?) </breathless>

 

Sure, these may not be immediate, or even potential, concerns, but they do concern me now.

 

Since the binding problem of consciousness has not yet been elucidated, I think we're far from seeing mindful machines. Machine intelligence is a software program, and cannot pass the Turing test until we know how/why humans are self-aware.

Edited by tkadm30

 


We may never learn why humans are self-aware. It may be a problem that retreats, mirage-like, as we approach it.

 

It may be that self awareness is embedded into the universe and that our sense of self is a useful mental construct indistinguishable from a figment of our imagination.*

 

Suppose "we" were to create AI programs with the express purpose of passing the Turing test, and these programs developed their own algorithms in response to outside input? The Turing test does not seem to me to be insurmountable, else why do they keep trying to pass it?

 

*Of course we are individually and dynamically separate from other processes, and so a sense of self is grounded in reality.

Edited by geordief

 

Since the binding problem of consciousness has not yet been elucidated, I think we're far from seeing mindful machines. Machine intelligence is a software program, and cannot pass the Turing test until we know how/why humans are self-aware.

That is not strictly true. You don't actually need to understand why something works in order to build it if you can replicate the conditions and get a little lucky.

 

I would be shocked if we didn't build machines that could pass the Turing Test well before we have a real answer to what consciousness is. The former is in sight of where we are now. I don't think the latter is really.


 

Machine intelligence is a software program, and cannot pass the Turing test until we know how/why humans are self-aware.

 

 

The two are not necessarily related. It may be possible to produce an AI that passes the Turing test (or even a proper test of consciousness) without knowing how the human brain works. It might even be possible to do it without knowing how our computer does it.


For me, consciousness is another of our senses, similar to sight, hearing, touch, etc., except it senses our thoughts.

Interesting. All the senses have an energy supply. This would seem to be rather different. It would have to tap into resources somewhere presumably.



Just want to point out that the Turing test is not very good. Ironically its weakness is the humans. Any halfway decent chatbot is rated as intelligent by humans. That's why that dumb Eugene Goostman chatbot allegedly passed the Turing test. http://www.smh.com.au/digital-life/digital-life-news/turing-test-what-eugene-said-and-why-it-fooled-the-judges-20140610-zs3hp.html

 

When Joseph Weizenbaum invented Eliza, he intended it to be a demonstration of how dumb computers actually are, even when simulating intelligence. He was shocked to find out that people would start telling it their most intimate thoughts in the delusion that they were speaking to a real therapist. https://en.wikipedia.org/wiki/Joseph_Weizenbaum

 

Weizenbaum was shocked that his program was taken seriously by many users, who would open their hearts to it. Famously, his secretary - who was aware that it was a simulation - asked Weizenbaum while using the software: "Would you mind leaving the room, please?"
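Eliza's core trick is small enough to sketch. This is a hypothetical miniature of the keyword-and-reflection technique, not Weizenbaum's actual script:

```python
# A minimal Eliza-style responder: match a keyword pattern, reflect
# pronouns, and echo the user's own words back as a question. This is
# the whole trick; there is no understanding anywhere in it.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    # Swap first/second person so the echo sounds like a reply.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence):
    for pattern, template in RULES:
        m = pattern.search(sentence)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."       # content-free fallback

print(respond("I feel ignored by my family"))
# -> Why do you feel ignored by your family?
```

The program never models meaning; it only reflects pronouns and echoes fragments back, which is why the intimate confessions it drew from users alarmed Weizenbaum so much.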

Edited by wtf

Just want to point out that the Turing test is not very good.

 

 

Agreed. I have seen some ideas for much better tests of intelligent thought. (Consciousness is much harder, but if a machine shows itself to be capable of intelligent independent thought and then says it has the same concept of consciousness that we do, why would we assume it is lying?)


 

Wow. It has been more than a decade since I last checked on the Turing test vs chatbot thing - I really thought this would be much more advanced by now. Eugene fails at the first sentence uttered and continues to be awful - though I suppose I have the advantage of knowing the subject. But even so! I mean, if this test is meant to be serious, then shouldn't the reviewer have some prior knowledge of how these things work?

 

Otherwise it's kind of pointless - it's not only chatbots who are "dumb", so if you just pit them against random humans, the results might be rather worthless.


 

Our own evolution is an example that intelligence is not needed to create intelligence. There is no intelligence driving which changes occur to a life form.


Is it perhaps a bit dogmatic to say "no intelligence"?

 

Maybe better to think of it as nothing we would think of as intelligence.

 

In a metaphorical way, might we think of dice having been thrown at the "beginning" of our universe and the consequences played out (in a non-deterministic way) ever since?

 

Could those "dice" (are they being rolled continuously?) - or the way they were rolled - be considered intelligent?

 

Does intelligence itself have a random (volatile?) nature?


Well, there are some interesting points you have all talked about. AI machines, as everybody knows, are characterized by the fact that they use fifth-generation coding languages, which allow them to execute algorithms in a highly efficient way regardless of the computational load.

In addition, they can be assisted by cellular automata, so they share a great common feature: the mathematical basis which enables them to recreate multiple situations and develop their possible numerical consequences.

Lastly, we could also think about the evolution of quantum computers and quantum algorithms, which give us the chance of holding two different states (0 and 1) in superposition at the same time, leading to a great similarity with the current "known" universe. Taking this into account, I would say computers can draw simple models of the universe which can potentially be improved and optimized in order to achieve a greater understanding of the place we live in.
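The "two states at the same time" point can be made concrete with a toy state-vector sketch (plain Python, not a real quantum computing stack): applying a Hadamard gate to a qubit prepared as |0> yields equal measurement probabilities for 0 and 1.

```python
# Toy state-vector sketch of one qubit: a Hadamard gate puts |0> into an
# equal superposition of 0 and 1 - the "both states at once" property
# alluded to above. Pure illustration, not a real quantum computer.
import math

state = [1.0, 0.0]              # amplitudes for |0> and |1>: starts as |0>

h = 1 / math.sqrt(2)
H = [[h, h], [h, -h]]           # the Hadamard gate as a 2x2 matrix

def apply(gate, s):
    """Multiply the gate matrix into the state vector."""
    return [gate[0][0] * s[0] + gate[0][1] * s[1],
            gate[1][0] * s[0] + gate[1][1] * s[1]]

state = apply(H, state)
probs = [a * a for a in state]  # Born rule: probability = amplitude squared
print(probs)                    # measurement probabilities ~[0.5, 0.5]
```

Simulating n qubits this way needs a vector of 2^n amplitudes, which is exactly why classical machines struggle to model large quantum systems.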


The OP contains the adverb "ever" ("can it ever be farmed out?"), so we don't have to be constrained by the current state of affairs.

 

I think it's safe to assume that in due time AIs will become increasingly complex, in terms of both computing power and the cleverness of the programmers. At this stage we might start watching for real "intelligence" to emerge. This is - at least the way I see it - the whole point of AI; otherwise it's just a hype name for an advanced set of routines, a la Siri or some such.

 

Along the way there will be more progress in our learning about human intelligence, and perhaps consciousness too. These fields will most likely become intertwined. And if we ever succeed in creating true AI - an artificial entity capable of thinking and problem-solving of its own accord - I'm sure it will be able to tackle a model or two.

 

Then we just have to worry about the emergence of the ghost in the machine, and how not to become enslaved - something any self-respecting AI will surely attempt ;)

