
The human race in the year 3000


Hypercube


I'm just curious how many others have the same opinion of our species' future as I do; what do you think the chances are that the human race will still be around come the year 3000? I personally think there's virtually zero chance of that, given how we're destroying our planet, eradicating the diseases that keep our population in check, sweeping all new energy technologies that could replace oil/gas under the rug, wasting tens of trillions of dollars on designing new weapons, and other similar acts.

 

In my opinion, our species has a very bleak future, indeed.


Pessimist.

 

We have had the option to destroy ourselves entirely for two-thirds of a century now. We haven't done it. We've come close, granted, but common sense (or dumb luck) has prevailed and we didn't do it.

 

Most countries with the sayonara option are getting on with each other reasonably well, and the one psycho who might have nukes (Kim Jong-il) could just be pretending; he is that crazy. And if he does have them, well, they don't work very well and he won't have many.

 

I think we're going to come round quite nicely. It'll get hairier before it gets better, but we will get there.

 

Besides, global warming and the like don't mean the extinction of our species; they just mean we will have to seriously change our lifestyles from a practical viewpoint. There will be more overcrowding and some food shortages, but I have faith in science and human nature that we won't go and crap it all up beyond repair.


I think that Malthusianism or other population/future pessimism has a very specific cultural function. It tends to emerge strongly in times of economic recession, imo, because it evokes the ideology of, "hey, we're going extinct as a species, so it's OK to kill, destroy, live it up, and otherwise live as if there's no tomorrow." Then, the more resources are wasted and people killed, the more "breathing room" there is for the survivors to procrastinate on the difficult cultural adaptations that would lead to real sustainability. In other words, pessimism promotes cultural conservatism by basically accepting that the status quo is the only way to live, even when it's leading to self-destruction.

 

BTW, I liked the way the OP casually cited the elimination of disease as a cause of unsustainability. This is basically like saying, "if people don't die naturally, they'll kill each other artificially, because they'll never change their ways so that they can all survive."


Fairly pessimistic, due to the existential risk of strong AI (in addition to the aforementioned ones). However, I suspect space travel will mitigate many of the perceived, present-day global existential risks.


Fairly pessimistic, due to the existential risk of strong AI (in addition to the aforementioned ones). However, I suspect space travel will mitigate many of the perceived, present-day global existential risks.

I think post-industrialization will gradually simplify things by combining more traditional ways of living with hyper-modern technologies like AI, IT, etc. I think the main factor that causes the perception that life is getting worse is that modernization generated such high levels of materialism in the 20th century. Now that materialism/consumerism has become unsustainable, it's easy to mistake the end of a culture for the end of life itself. It's just like when you're used to driving all the time and decide to limit your driving to absolute necessities: at first it seems like you're stranded, until you begin to develop patterns of walking/biking that open up local amenities to you in a new way. It's not that you're actually stranded without modern culture; it's that you're programmed to view every possible alternative as inherently meager.


I'm just curious how many others have the same opinion of our species' future as I do; what do you think the chances are that the human race will still be around come the year 3000? I personally think there's virtually zero chance of that, given how we're destroying our planet, eradicating the diseases that keep our population in check, sweeping all new energy technologies that could replace oil/gas under the rug, wasting tens of trillions of dollars on designing new weapons, and other similar acts.

 

In my opinion, our species has a very bleak future, indeed.

 

I think by then we would be well on our way to colonizing other stars, or perhaps would already have done so. And we'd probably have beaten all of the biggest mass extinctions by far. And I think we wouldn't be genetically compatible with the year-2000 population, due to massive genetic engineering, or perhaps would no longer have biological bodies at all.

 

If you are wondering about this, consider the technological advancement in the last 200 years compared to the previous 20,000. Or ask your grandparents to tell you about the first cars and TVs.

 

Fairly pessimistic, due to the existential risk of strong AI (in addition to the aforementioned ones). However, I suspect space travel will mitigate many of the perceived, present-day global existential risks.

 

That would be one of the few things that could really kill us off.


What's with the pessimism about AI? It is like the Terminator movies are getting elevated to sacred status. Is it because people think AI would eschew human interests in favor of other forms of rationality? That is an old philosophical argument. Why can't AI simply be programmed with ethical interest constraints?


What's with the pessimism about AI? It is like the Terminator movies are getting elevated to sacred status. Is it because people think AI would eschew human interests in favor of other forms of rationality? That is an old philosophical argument. Why can't AI simply be programmed with ethical interest constraints?

 

Strong AI would be far superior to us in just about every respect. It won't be like the Terminator movies; we wouldn't stand a chance. The robots would have tougher bodies, better technology, better accuracy, better senses, better teamwork, better strategy, shorter reproductive times, total unity, the element of surprise, and the ability to use biological weapons, starvation, or genetically engineered diseases against us. Furthermore, the robots would probably have no empathy and no need for us even as slaves. They might wipe us out simply for being in the way, like we kill off the animals that are in the way of our farms.


AI would be far superior to us in just about every respect. It won't be like the Terminator movies; we wouldn't stand a chance. The robots would have tougher bodies, better technology, better accuracy, better senses, better teamwork, better strategy, shorter reproductive times, total unity, the element of surprise, and the ability to use biological weapons, starvation, or genetically engineered diseases against us. Furthermore, the robots would probably have no empathy and no need for us even as slaves.

But if they had a prime directive of ethical treatment of life, they would avoid using their "superior intellects" against humans or other living things, wouldn't they? Or are you thinking they would reason their way out of ethical directives like "don't abuse living things"?


Consider Asimov's Three Laws of Robotics:

http://en.wikipedia.org/wiki/Three_Laws_of_Robotics

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Almost all of Asimov's many stories end with very bad things happening to humans at the hands of robots following these laws. Note that we haven't agreed on what life, human, person, etc. mean (we have threads discussing those definitions), which is part of the problem. And how do you even communicate these concepts to an AI? We don't have an AI that can understand language yet. In any event, an AI that is the moral equivalent of Mother Teresa would be of little value to its creators, as it would not want to work for profit. Don't get me wrong, an AI could be the best thing ever to happen to humanity -- but such a powerful thing will be dangerous, and more so because we humans will abuse it first.
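
To make that concrete, here's a toy sketch of my own (purely illustrative Python, nobody's actual design): encoding the priority ordering of the three laws takes only a few lines, but every predicate the laws depend on has to be supplied by hand, because nobody knows how to compute "harm" or "human" from raw perception.

    # Toy sketch: the Three Laws as a priority-ordered rule check.
    # The ordering is the easy part; the hand-supplied labels on the
    # action stand in for concepts we don't know how to compute.
    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool            # First Law: harm caused by acting
        inaction_allows_harm: bool   # First Law: harm allowed by not acting
        ordered_by_human: bool       # Second Law
        endangers_robot: bool        # Third Law

    def permitted(a: Action) -> bool:
        if a.harms_human:
            return False              # First Law overrides everything below
        if a.inaction_allows_harm:
            return True               # refusing to act would itself violate Law 1
        if a.ordered_by_human:
            return True               # Second Law, already filtered by Law 1
        return not a.endangers_robot  # Third Law, lowest priority

    # The classic bind: obeying the order harms a human, yet refusing allows harm.
    print(permitted(Action(harms_human=True, inaction_allows_harm=True,
                           ordered_by_human=True, endangers_robot=False)))  # False

Even in this cartoon version the deadlock shows up immediately: the function forbids the action, but the fact that inaction also violates the First Law is invisible to it. The rules give no answer, which is exactly the gap Asimov's plots live in.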


Consider Asimov's Three Laws of Robotics:

http://en.wikipedia.org/wiki/Three_Laws_of_Robotics

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Almost all of Asimov's many stories end with very bad things happening to humans at the hands of robots following these laws. Note that we haven't agreed on what life, human, person, etc. mean (we have threads discussing those definitions), which is part of the problem. And how do you even communicate these concepts to an AI? We don't have an AI that can understand language yet. In any event, an AI that is the moral equivalent of Mother Teresa would be of little value to its creators, as it would not want to work for profit. Don't get me wrong, an AI could be the best thing ever to happen to humanity -- but such a powerful thing will be dangerous, and more so because we humans will abuse it first.

 

It's been a long time since I read any Asimov, but "Almost all of Asimov's many stories end with very bad things happening to humans at the hands of robots following these laws" reads more like a summary of the films based on Asimov than of the books themselves.


OK, I wanna make one thing perfectly clear right now: I am not a pessimist. A pessimist never looks on the bright side, whereas I always do when the occasion calls for it. In fact, that's the primary reason why I don't believe diseases should be eradicated, UNLESS they pose a legitimate threat to our species as a whole; if an airborne strain of Ebola emerges, I would absolutely think it should be eradicated. In my opinion, eradicating disease will have (and is having) the same effect as killing all the wolves in Yellowstone way back when to "save the elk"; look how that turned out. It's the exact same principle for us.

 

But I digress; I'm not a pessimist, I'm a realist, and unless the human race experiences some sort of miraculous leap of wisdom, the Earth will eventually no longer be able to support us; remember how the Earth was in 'Avatar'? It's inevitable.

 

So if that is being pessimistic, then maybe I am, but it doesn't mean that I'm wrong.


(Realist) The world will probably be here in another thousand years; the quality of life depends on how we as a race choose to treat it. If pollution and the other things we tend to be slow at addressing continue, then (cough cough) it's not going to be too pleasant walking around in air masks. But I have faith in humanity that we will persevere, and in man's darkest hour we shall triumph proudly! And nukes, well... I doubt America is just sitting around letting communists have them without close supervision by our rangers to suppress any and all nuclear attacks. A big IMO.


I think post-industrialization will gradually simplify things by combining more traditional ways of living with hyper-modern technologies like AI, IT, etc. I think the main factor that causes the perception that life is getting worse is that modernization generated such high levels of materialism in the 20th century. Now that materialism/consumerism has become unsustainable, it's easy to mistake the end of a culture for the end of life itself. It's just like when you're used to driving all the time and decide to limit your driving to absolute necessities: at first it seems like you're stranded, until you begin to develop patterns of walking/biking that open up local amenities to you in a new way. It's not that you're actually stranded without modern culture; it's that you're programmed to view every possible alternative as inherently meager.

I agree that our degree of materialism will have to taper off. How that will happen is going to be very interesting. Can we find a substitute for the innovation that comes from market competition, something that doesn't mean eleventy-thousand brands of toasters, or laptops that can't share power cords?

 

Will we eventually realize that convenience usually carries too high a price tag? That we can either have disposable products or disposable income but not both?

 

I think we need a technological breakthrough like AI or really cheap energy if we're going to break our current cultural cycle. I think we can at least harness the energy available to us in our own solar system by the year 3000 if we can work together as a planet.


I think we need a technological breakthrough like AI or really cheap energy if we're going to break our current cultural cycle. I think we can at least harness the energy available to us in our own solar system by the year 3000 if we can work together as a planet.

I think cheap energy actually alienates people from their most inalienable source of energy: their own bodies. The Matrix made quite a shocking premise out of using human body energy as an energy source, but I think that in the very distant future, cultural-economic efficiency will have evolved to the point that humans can live with amazing levels of comfort using only their body energy. I think the 20th century was a setback for efficiency evolution because of its focus on abundance of energy and industrial production generally. It was a necessary step toward the technological advances that have made miniaturization and other major gains in efficiency possible, but it will take a long time to get beyond the assumptions and dependencies that have been created.


2 weeks later...

Strong AI would be far superior to us in just about every respect. It won't be like the Terminator movies; we wouldn't stand a chance. The robots would have tougher bodies, better technology, better accuracy, better senses, better teamwork, better strategy, shorter reproductive times, total unity, the element of surprise, and the ability to use biological weapons, starvation, or genetically engineered diseases against us. Furthermore, the robots would probably have no empathy and no need for us even as slaves. They might wipe us out simply for being in the way, like we kill off the animals that are in the way of our farms.

 

Maybe so, but it seems that anything so sufficiently advanced would have less need for bodies than we do. They could exist by the trillions in very small spaces. In my opinion they would simply not be competing with us for space on this planet. They would almost certainly want to explore the universe, and with probably indefinite life spans they could do this. There's no particular reason they should be malicious, but they might be indifferent. That indifference might cause some problems as they just take what they want, but I doubt they would simply kill everybody for no particular reason.

 

There always seems to be a presumption that AI will have "no emotions" and will therefore only ever do the most utilitarian thing. If they are indeed super-intelligent, why would they not also be infinitely more creative and insightful than us? They could create the most amazing art or music (or whatever equivalent they would find easiest to digest in whatever form they are in). Although we use animals in horrific ways, we don't arbitrarily exterminate most of them. Wait, did I just say that? Maybe we do, but not the pretty ones!

 

So, my prediction is we build them, they build themselves to a point where they leave us to wallow in our own filth on Earth without so much as a goodbye or a thanks-a-lot. Like ungrateful teenagers with hyper-intelligence.


Strong AI would be far superior to us in just about every respect. It won't be like the Terminator movies; we wouldn't stand a chance. The robots would have tougher bodies, better technology, better accuracy, better senses, better teamwork, better strategy, shorter reproductive times, total unity, the element of surprise, and the ability to use biological weapons, starvation, or genetically engineered diseases against us. Furthermore, the robots would probably have no empathy and no need for us even as slaves. They might wipe us out simply for being in the way, like we kill off the animals that are in the way of our farms.

Why is it that when people think of something that thinks logically and is far better at it than we are, they assume the first thing it will do is wipe us out? Sure, it makes great science fiction, but it is just fiction, done for drama (it wouldn't be a good story if someone made a super AI and it just got on with what it does and didn't interact with us at all).

 

Basically, the problem is that people say the robots think logically and without emotion, but then apply emotional thinking to the robots' actions (e.g., their actions are what a psychopath or sociopath would do, but those types of people lack empathy while still having emotions).

 

You sort of hit on it in your post:

 

They would be able to use biological weapons or starvation or diseases against us because they are not biological themselves, but that also means they are not going to compete with us for food.

 

They would have no use for us as slaves. But that means they have no reason to try to conquer us (we have no food they need and we aren't good labour).

 

About the closest you come is that they might wipe us out for being in their way. But remember, they think logically. Sure, we might be in their way, but they would sustain losses on their side if they tried to wipe us out, it would take a lot of effort, and so on. Compared to that, leaving Earth is trivially easy, especially for a "race" that can rebuild its bodies to fit its needs (since an AI is a computer program, as long as the core processor and program are functioning, the rest is arbitrary) and doesn't have a biological entity's weaknesses in space. It would be logical to leave Earth, and the problems of dealing with humans, behind.

 

The other option is to become symbiotic with humans and work with them to improve us to the point where we can function on their level. A forced adoption would not work, as it has the same drawbacks as war (and would result in a parasitic, rather than symbiotic, relationship between us and them). It would require a willingness on their part and on ours to work together, and that gives them a reason not to go on a killing rampage.

 

As for where I think humans will be in the year 3000, well, I think the species will still be here, but society will be very different. What that society will be like is something that cannot be predicted.


Why is it that when people think of something that thinks logically and is far better at it than we are, they assume the first thing it will do is wipe us out?

 

I doubt we'd give them the choice. We're a very xenophobic species, and on a purely practical level the AI is a threat to us. Most of us would have no empathy with them because they probably wouldn't have cute organic bodies and we probably would not think them persons. And if we don't try to wipe them out, we'd probably expect them to be our slaves. Do you disagree?


I doubt we'd give them the choice. We're a very xenophobic species, and on a purely practical level the AI is a threat to us.

[cut: see below]

And if we don't try to wipe them out, we'd probably expect them to be our slaves. Do you disagree?

Oh yes, humans will be humans, and we would probably start something. It is just that in most of the scenarios that get stated, it is the robots that start it.

 

Most of us would have no empathy with them because they probably wouldn't have cute organic bodies and we probably would not think them persons.

But as I said, they would have the ability to change their appearance (as their existence would not depend on a specific form the way organic beings' does). Thus, knowing that we would attack them, they could refashion themselves into cute forms that we would not feel so xenophobic about.

 

That is if they follow the symbiotic path rather than the exodus path.

