
AI and religion



Not sure where this should go.

Whenever people talk about AI, they always think of it from a Western point of view. But what about advanced AI/robotics from somewhere like Saudi Arabia, with really hardline views on religion or women? Or China, for that matter, with hardline views on freedom?

I can change my mind about anything, any time I want, but for an AI to do that it would need to be able to reprogram itself and be influenced by outside sources. Would it be a good idea to allow AI to do that?

Sorry if it's not a proper question; it's just something I was thinking about at work last night: religious AI, and AI with different opinions on human rights.

I have looked on search but it keeps saying there's a problem...


28 minutes ago, Curious layman said:

but for an AI to do that it would need to be able to reprogram itself and be influenced by outside sources. Would it be a good idea to allow AI to do that?

Once you give an AI an objective function to maximise, it will do so indiscriminately. If the agent is intelligent enough to realise that someone could try to change its objective function, it will take measures to ensure that doesn't happen, as that would interfere with its current objective. Re-programming such an agent would be difficult, and it certainly wouldn't do it to itself. This has a name in AI safety circles, but it currently escapes me*.

One solution put forward is actually to allow some doubt over the objective function, so that the agent has to seek external validation (human satisfaction, for instance). Such an agent is constantly re-evaluating its goals in light of sensory input (humans smiling, or something more sensible), and might be safer.
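To make that idea a bit more concrete, here's a rough toy sketch in Python. It's my own illustration of the general idea, not anyone's actual proposal or library, and everything in it (the candidate_rewards, the actions, the sigmoid approval model) is made up for the example: the agent keeps a belief over several candidate objectives, acts on its current expectation, and updates that belief from a stand-in for human approval.

```python
import math

# Three candidate objectives the agent thinks it *might* have been given.
candidate_rewards = {
    "maximise_clicks":    lambda a: {"show_ads": 1.0, "show_article": 0.2, "do_nothing": 0.0}[a],
    "maximise_wellbeing": lambda a: {"show_ads": -0.5, "show_article": 0.8, "do_nothing": 0.1}[a],
    "do_no_harm":         lambda a: {"show_ads": 0.0, "show_article": 0.3, "do_nothing": 0.5}[a],
}

actions = ["show_ads", "show_article", "do_nothing"]

# Uniform prior: the agent genuinely doesn't know which objective is the "real" one.
belief = {name: 1.0 / len(candidate_rewards) for name in candidate_rewards}

def expected_reward(action):
    """Average the candidate rewards, weighted by the agent's current belief."""
    return sum(belief[name] * r(action) for name, r in candidate_rewards.items())

def update_belief(action, human_approves):
    """Shift belief towards the objectives that are consistent with the human's reaction."""
    global belief
    for name, r in candidate_rewards.items():
        # A candidate objective that rates this action highly predicts approval of it.
        p_approve = 1.0 / (1.0 + math.exp(-4.0 * r(action)))
        likelihood = p_approve if human_approves else 1.0 - p_approve
        belief[name] *= likelihood
    total = sum(belief.values())
    belief = {name: p / total for name, p in belief.items()}

# The agent acts on its current best guess, gets feedback, and revises what it
# thinks its goal is, rather than locking in one objective forever.
for step in range(5):
    action = max(actions, key=expected_reward)
    human_approves = action != "show_ads"   # stand-in for a real human's reaction
    update_belief(action, human_approves)
    print(step, action, {name: round(p, 2) for name, p in belief.items()})
```

The point is just that the agent's effective goal stays open to revision by outside input, which is exactly the trade-off the original post is asking about.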

The biggest problem for AI safety is the likelihood that various states and companies will rush towards developing the technology and so neglect these sorts of safety concerns.

 

*It comes under the banner of instrumental convergence. Basically, unconstrained AI agents might be expected to behave in similar ways, because certain sub-goals help maximise almost any objective, regardless of what that objective is. Things like self-preservation and resource acquisition would help an AI achieve its goals for obvious reasons. Goal-content integrity, i.e. protecting the objective it already has, would similarly help.

Edited by Prometheus
