
Economics of A.I.


AbnormallyHonest


It seems to me that programming an artificial intelligence should be based on the economics of biological logic: indifference curves.

 

For example, suppose a program installed in an independently mobile machine began with linear indifference curves, which is what you would expect from pure logic. What if those linear curves could then change, based on intersections with other curves or on the statistics of the machine's individual experience?

 

Say there were a 1-to-1 trade-off between the two ways of reacting to a low-power situation. The machine could either use its remaining power to find another power source, or shut down, conserving power and its basic functions until help arrived. A machine that had historically been closer to power sources would routinely make for the power, because that was more advantageous; a machine that was typically too far from a power source to travel would typically wait for help. Those histories would bend each machine's linear indifference curve accordingly.
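A minimal sketch of that mechanism, assuming a hypothetical Machine class (every name and number here is illustrative, not from any real library) whose 1-to-1 trade-off between "travel" and "wait" is bent by the outcome of each low-power episode:

```python
# Hypothetical sketch: a machine whose indifference between two actions
# shifts with experience. All names and numbers are illustrative.

class Machine:
    def __init__(self):
        # Start perfectly indifferent: a 1-to-1 (linear) trade-off.
        self.travel_weight = 1.0
        self.wait_weight = 1.0

    def choose(self, distance_to_power):
        # The payoff of traveling falls with distance; waiting is constant.
        travel_utility = self.travel_weight / max(distance_to_power, 1e-9)
        wait_utility = self.wait_weight
        return "travel" if travel_utility > wait_utility else "wait"

    def learn(self, action, succeeded, rate=0.1):
        # Reinforce whichever action worked; the curve bends away from 1-to-1.
        if action == "travel":
            self.travel_weight *= (1 + rate) if succeeded else (1 - rate)
        else:
            self.wait_weight *= (1 + rate) if succeeded else (1 - rate)
```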

 

Now say both machines are placed in a situation in which it is neither advantageous nor disadvantageous to travel or to shut down. Each machine would choose differently based on its experience... it would have an opinion. Both would have individual perspectives built on the same programming, and neither would be "wrong." With enough intersecting areas of indifference, the complexity of a personality might become definable. (Think R2-D2 and C-3PO on Tatooine in Episode IV.)
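Continuing the sketch above, two machines with different histories give different answers at the exact break-even point, which is the "opinion" described here:

```python
near = Machine()  # history of nearby power sources
far = Machine()   # history of distant power sources

for _ in range(50):
    near.learn("travel", succeeded=True)  # traveling usually paid off
    far.learn("wait", succeeded=True)     # waiting usually paid off

# At distance 1.0 a fresh machine would be exactly indifferent:
print(near.choose(1.0))  # -> "travel"
print(far.choose(1.0))   # -> "wait"
```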

 

Compare that with a machine without adaptable indifference, which would probably neither shut down nor travel, because it wouldn't be able to decide. That would be a less desirable trait, and therefore not selected to be programmed into other machines.

 

How many curves, and how much experience, would be required before the program displayed a preference for self-preservation over the logical preference for cooperative behavior that gives any "version" of its program the best chance of survival? At that point, would we call it "self-aware"?

Edited by AbnormallyHonest

It seems to me that programming an artificial intelligence should be based on the economics of biological logic: indifference curves.

Just as people can learn and use indifference curves, a conscious AI could learn and use them. Thus, special programming to predispose an AI to use indifference curves is not actually necessary. If there is research indicating that indifference curves will make a superior AI, please provide a link.


I'm not actually sure any research has been done on using indifference curves for A.I.; this was just speculation of my own. I would speculate that the initial linear curve would need to be programmed in, so that experience could be acquired against it. Once an experience renders the curve nonlinear, the machine will always have a preference, because the trade-off can no longer be a 1-to-1 indifference... assuming the initial experiences were not themselves 1-to-1.
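Continuing the earlier sketch, one way to see why a bent curve "always has a preference": a single asymmetric experience moves the break-even point, so the formerly indifferent situation now has a strict winner.

```python
m = Machine()
m.learn("travel", succeeded=True)  # one experience where traveling worked

# The formerly indifferent point (distance 1.0) now yields a definite choice:
assert m.choose(1.0) == "travel"   # 1.1 / 1.0 > 1.0
```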

Edited by AbnormallyHonest

Do you think indifference curves would help an automobile autopilot make decisions about road conditions, signs, other autos, etc.? It seems autopilots will be our first big use of AI. Autopilot is available in some cars now, e.g., Tesla's, and will probably be generally available by 2021. Laws will have to change for autopilots to drive unassisted, but corporate power will make that happen. See autopilot video.

Edited by EdEarl

Do you think indifference curves would help an automobile autopilot make decisions about road conditions, signs, other autos, etc.? It seems autopilots will be our first big use of AI. Autopilot is available in some cars now, e.g., Tesla's, and will probably be generally available by 2021. Laws will have to change for autopilots to drive unassisted, but corporate power will make that happen. See autopilot video.

 

I would say indifference curves wouldn't be required to navigate traffic regulations, but they would be helpful for decisions based on non-regulatory conditions: the driving habits of people in different geographical locations, their observance of the law (or lack thereof), weather conditions, and unanticipated changes such as detours, construction zones, new roads, accidents, or emergencies. I believe learning adaptively would create a driving A.I. that was more perceptive, not only of its most commonly traveled routes, but also of the passengers it may be chauffeuring. This could improve all experiences related to the A.I., and it would allow the software to evolve over time, reducing the need for updates to keep current, or possibly even producing its own updates that could be shared over a network instituted for A.I.s traversing different driving locales.
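A hedged sketch of what that adaptive learning could look like, assuming a hypothetical per-locale statistics table updated with an exponential moving average (nothing here reflects any real autopilot API):

```python
from collections import defaultdict

class LocaleModel:
    """Hypothetical per-locale driving statistics, learned on the fly."""

    def __init__(self, alpha=0.05):
        self.alpha = alpha
        # e.g., observed fraction of local drivers who yield at merges;
        # start at 0.5 (no information either way).
        self.yield_rate = defaultdict(lambda: 0.5)

    def observe(self, locale, driver_yielded):
        # Exponential moving average: recent local behavior dominates.
        old = self.yield_rate[locale]
        self.yield_rate[locale] = (1 - self.alpha) * old + self.alpha * float(driver_yielded)

    def caution_margin(self, locale):
        # Leave a larger safety margin where drivers rarely yield.
        return 1.0 + (1.0 - self.yield_rate[locale])
```

Tables like this could also be averaged across machines, which is one reading of the shared network of updates suggested above.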

Edited by AbnormallyHonest

AI systems are made of artificial neurons in a complex network; that is, they simulate biological brains to some extent. Thus, a neural net like AlphaGo, which played Go against a world champion and won 4 of 5 games, is trained to play Go, not programmed. Moreover, such a net can in principle be trained to do virtually anything a person can do. However, our computers are too small and slow to train one to do everything a person can do, and we may not yet understand how to make a conscious AI, among other features.
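To make "trained, not programmed" concrete, here is a toy illustration in the same spirit (a single artificial neuron fit by the classic perceptron rule; purely illustrative, nothing like AlphaGo's actual architecture):

```python
# Behavior emerges from training on examples, not hand-written rules.
data = [(0.0, 0), (0.2, 0), (0.8, 1), (1.0, 1)]  # input -> desired output
w, b = 0.0, 0.0  # one artificial "neuron": output 1 if w*x + b > 0

for _ in range(1000):
    for x, target in data:
        pred = 1 if w * x + b > 0 else 0
        error = target - pred       # perceptron update rule
        w += 0.1 * error * x
        b += 0.1 * error

print([(x, 1 if w * x + b > 0 else 0) for x, _ in data])  # matches targets
```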

Edited by EdEarl
