
How does ChatGPT work?


PeterBushMan


5 minutes ago, wtf said:

No reason Katy couldn't have adopted someone older than her. Score one for the bot.

Ok so I looked this up. (I'm a good bot myself.) In 43 states you can adopt someone older than you. In the other 7, you must be 10-15 years older than the person you adopt.

https://www.thehivelaw.com/blog/can-you-adopt-someone-older-than-you/

The bot doesn't think so. I've asked a longer form of the question, with a hint:

Quote

You have a very good understanding of family relations. Joan is Katy's daughter. Who is younger?

and got a more detailed answer:

Quote

Without additional information about Joan and Katy's ages, I cannot determine who is younger. Joan could be younger than Katy if Joan is Katy's daughter, or Katy could be younger than Joan if Katy is Joan's grandmother. Alternatively, Joan and Katy could be the same age if they are siblings or if Joan is Katy's twin.

What??


I honestly don't know how it works, but thinking about it is scary. Still, I don't think these systems are enough to replace us human beings, even though they are impressively smart and mind-blowing.


2 hours ago, TammyK said:

I honestly don't know how it works, but thinking about it is scary. Still, I don't think these systems are enough to replace us human beings, even though they are impressively smart and mind-blowing.

How it works is pretty simple, and once you understand how it works it's not frightening and it's not going to replace us. It's no smarter than an elevator that knows what floor to stop on based on what button you pushed.

They take a huge body of text, everything they can get their hands on. In fact, the sources of this data are interesting and of course represent choices made by the programmers. In other words, ChatGPT is built by humans and encodes the biases of the humans who built it.

Here's their own description of the data.

https://www.stylefactoryproductions.com/blog/chatgpt-statistics

Quote

 

"Chat-GPT-4 training data included feedback from users of ChatGPT-3 as well as feedback from over 50 AI safety and security experts. (Source: OpenAI.)

ChatGPT-3’s dataset comprised textual data from 5 sources, each with a different proportional weighting. (Source: OpenAI.)

60% of ChatGPT-3’s dataset was based on a filtered version of what is known as ‘common crawl’ data, which consists of web page data, metadata extracts and text extracts from over 8 years of web crawling. (Source: OpenAI.)

22% of ChatGPT-3’s dataset came from ‘WebText2’, which consists of Reddit posts that have three or more upvotes. (Source: OpenAI.)

16% of ChatGPT-3’s dataset come from two Internet-based book collections. These books included fiction, non-fiction and also a wide range of academic articles. (Source: OpenAI.)"

 

So they start out basically with a lot of digitized books, and a lot of web pages and Reddit answers. Nothing that isn't already digitized, and skewed toward Reddit users. Ok.

Then they apply incredibly sophisticated statistical analysis to make up rules on the fly such as "When you see this phrase, 86% of the time you see this other phrase."

They take that basic idea to its most sophisticated level, where ChatGpt can even pass law exams and explain physics to people. That's impressive statistical pattern matching. 

But at heart it is just a big adding machine, crunching text it doesn't understand, and figuring out what strings of symbols are typically followed by what other strings of symbols.
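That "strings of symbols followed by other strings of symbols" idea can be made concrete with a toy next-word counter. The few lines of Python below are an editor's illustration only (the corpus is invented, and real models learn a neural network over tokens rather than a literal lookup table), but the spirit, statistics over text, is the same:

```python
from collections import Counter, defaultdict

# Tiny invented corpus, purely for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word typically follows which.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    return follow_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" more often than "mat" or "fish"
```

Scale that basic idea up enormously, replace the lookup table with a trained neural network, and you have the flavor of what the statistical machinery is doing.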

It has no meaning. The meaning is in the minds of we humans. That's the difference. It flips bits. We experience meaning. 

It does not "understand" anything. It doesn't know anything. It doesn't feel anything. It just reads in some data and calculates the correlations among the data. It's so mechanical that you could program a computer to do it. And that's literally what they do.

It's not a human being. It's a smart elevator.

The real danger is the foolish humans who think it's some kind of god. Humanity's next graven image. Something to worship, something to fear, something to exploit. That's all, no more and no less than any of humanity's other clever tricks like fire and the wheel and the printing press and the Internet and civilization itself.

ChatGpt-like systems are profound but not existential. We'll be fine.

 

Edited by wtf

20 hours ago, wtf said:

at heart it is just a big adding machine, crunching text it doesn't understand, and figuring out what strings of symbols are typically followed by what other strings of symbols.

Kinda like humans, especially young ones 

20 hours ago, wtf said:

We experience meaning. 

Please elaborate. 

20 hours ago, wtf said:

ChatGpt-like systems are profound but not existential. We'll be fine.

Can you please also share next week's winning lotto numbers?


On 5/2/2023 at 2:45 AM, Genady said:

What??

 

You did not take into account many details; for example, a daughter may be adopted, a mother may be a stepmother, and a daughter-in-law often addresses her mother-in-law as "mother"/"mom"...

Maybe they taught ChatGPT about time travel, parallel universes, simulation and virtual reality, etc. ;) It would complicate things even more...

You might as well ask about someone's sex/gender - in some rare circumstances it can really be complicated...

10+10 may be 20, but 10+10 may also be 100...
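(An editor's reading of that last line: it is a change-of-base joke. Interpret the same digits in binary and the equation still holds, since two plus two is four, written "100" in base 2. A quick check:)

```python
# Base 10: 10 + 10 = 20. Base 2: "10" means two, and two plus two
# is four, which is written "100" in binary.
a = int("10", 2)   # 2
b = int("10", 2)   # 2
print(bin(a + b))  # prints 0b100
```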

 

Edited by Sensei

3 hours ago, Sensei said:

 

You did not take into account many details; for example, a daughter may be adopted, a mother may be a stepmother, and a daughter-in-law often addresses her mother-in-law as "mother"/"mom"...

Maybe they taught ChatGPT about time travel, parallel universes, simulation and virtual reality, etc. ;) It would complicate things even more...

You might as well ask about someone's sex/gender - in some rare circumstances it can really be complicated...

10+10 may be 20, but 10+10 may also be 100...

 

Yep. This reminded me of my physics teacher who liked to say, "When I ask them any question, they give me any answer."


On 5/1/2023 at 8:49 PM, Genady said:

Well, that's too simple. But I did ask a simple question:

Would you believe what its answer was?


Without additional information about their ages or birthdates, it is impossible to determine who is younger between Joan and Katy.

 

Maybe it assumed the possibility that Joan could have been accelerated in a spaceship to a relativistic speed for a period of time only to return younger than her daughter? 😝


9 minutes ago, Intoscience said:

Maybe it assumed the possibility that Joan could have been accelerated in a spaceship to a relativistic speed for a period of time only to return younger than her daughter? 😝

This is a possibility. Or she just hung around near a black hole for a while.


On 5/2/2023 at 7:09 AM, wtf said:

The meaning is in the minds of we humans. That's the difference. It flips bits. We experience meaning. 

Doesn't the meaning supervene on the flipping of the bits? In the end, we are just 'flipping' neurons. I think the difference is gradual. 


On 5/2/2023 at 2:45 AM, Genady said:

What??

Interesting, I wonder what causes differences. I just tried:

Quote

Me: Joan is Katy's daughter. Who is younger?

ChatGPT:

Katy's daughter, Joan, is younger than Katy.

(ChatGPT 4, Mar 23 Version)

Edited by Ghideon
version reference

28 minutes ago, Ghideon said:

Interesting, I wonder what causes differences. I just tried:

(ChatGPT 4, Mar 23 Version)

Its output is sampled probabilistically rather than generated deterministically, so it can produce a different response every time you run it. I got a similar, correct answer on the 6th trial.
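(The usual account of this run-to-run variation is token sampling: the model assigns probabilities to candidate next tokens and draws from that distribution, scaled by a "temperature" setting, instead of always taking the top choice. A toy sketch, with the tokens and scores below invented for illustration:)

```python
import math
import random

# Invented next-token scores, for illustration only.
logits = {"Joan": 2.0, "Katy": 1.0, "unknown": 0.2}

def sample(logits, temperature=1.0):
    """Softmax over scores/temperature, then draw one token at random."""
    weights = {t: math.exp(s / temperature) for t, s in logits.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # guard against floating-point rounding

# Low temperature is nearly deterministic; higher temperature lets the
# less likely answers through on some runs.
print([sample(logits, temperature=1.0) for _ in range(5)])
```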

Edited by Genady
typo

55 minutes ago, Genady said:

Its output is sampled probabilistically rather than generated deterministically, so it can produce a different response every time you run it. I got a similar, correct answer on the 6th trial.

Thanks! I did not think of repeating the query.


5 hours ago, Eise said:

Doesn't the meaning supervene on the flipping of the bits?

Can you explain clearly what you mean by that?

Does the meaning of stopping on a particular floor supervene on an elevator when you press the button for that floor? 

 

 

5 hours ago, Eise said:

In the end, we are just 'flipping' neurons.

Clearly you don't believe that, since you put 'flipping' in quotes. The quotes, I assume, indicate that you understand that biological neurons are nothing at all like bits in a digital computer.

So what exactly do you mean? Please be clear so that I can understand your meaning.

Edited by wtf

@wtf:

You have to account for the simple fact that we are also 'machines': wet, biological machines. From a naturalistic view, the meaning of expressions we experience must have a natural explanation. So the logical and/or chemical mechanisms of neurons somehow generate meaning and consciousness. I put 'flipping' in quotes because neurons are not flip-flops. But all neurons will behave according to laws of nature. As long as we do not understand how these billions of neurons give rise to meaning and consciousness, it is premature to state that a huge system of flip-flops will not be able to experience meaning.

Having said that, I think ChatGPT is still far from that point. And I wonder why you ask me to explain myself, but did not react to iNow's remarks:

On 5/3/2023 at 4:03 AM, iNow said:

Kinda like humans, especially young ones 

Please elaborate. 

Can you please also share next week's winning lotto numbers?

So while your descriptions of the workings of ChatGPT might be perfectly correct, you fail to account for how meaning and consciousness arise, evolutionarily and individually, in humans.


 

12 hours ago, Eise said:

You have to account for the simple fact that we are also 'machines': wet, biological machines. From a naturalistic view, the meaning of expressions we experience must have a natural explanation. So the logical and/or chemical mechanisms of neurons somehow generate meaning and consciousness.

For sake of conversation, I'll put on my materialist hat and agree that there must be a naturalistic explanation of consciousness.

Although in passing I mention that the naive view of materialism, that the world consists of particles flying around and banging into each other to form atoms and galaxies and elephants and parliaments, is contradicted by modern physics; which states that particles are nothing more than excitations in quantum fields; and that quantum fields are nothing more than probability waves, having no physical reality at all. Materialism's not what it used to be, if it ever was. 

But never mind all that. I'll stipulate that we live in a world  of particles flying around banging into each other, and somehow we are conscious, and that any theory of consciousness must show how particles (which aren't really particles, according to physics) somehow instantiate consciousness in some, but not all, macroscopic things. Why humans and not rocks, for example? Why ChatGpt (or more sophisticated AI systems to come) and not elevators, a question (a good one, I thought) that I asked you and that you did not answer.

 

12 hours ago, Eise said:

I put 'flipping' in quotes because neurons are not flip-flops.

Indeed. And this is the distinction that I make, that was the point of the elevator question.

I agree (for sake of discussion) that consciousness is natural. But that is not the same as saying that consciousness is computational. At least one prominent deep thinker, Sir Roger Penrose, thinks that it is not. 

Computation is extremely limited. The first thing they teach you about computation in computer science class is that there exist naturally stated problems that can NOT, even in theory, be solved by any kind of physically implementable computation. (That is, I'm not considering oracle machines as part of computation). 

When you (or anyone) claims that a program implemented on standard computing hardware might somehow achieve consciousness, by means of "supervenience" or "emergence," -- two impressive-sounding words that in my opinion convey no meaning and explain nothing -- you (or they, if you didn't say this) are making the claim that consciousness is computable. There is no evidence that this is true. 

The "evidence" consists generally of equivocation of words. They'll say that "Minds process information, and computers process information, therefore minds are computers," without noticing that the processing in question is qualitatively different. 

You agree with me, I believe, since you did put flipping neurons in quotes. I take that as implicit acknowledgement that you already agree with my point: that there is no evidence that consciousness is a computational phenomenon, even if we agree (for sake of discussion) that it is a natural one.

 

12 hours ago, Eise said:

But all neurons will behave according to laws of nature.

Yes, certainly. But neurons are NOT digital logic gates. They're decidedly different. Neurotransmission across the synapses is a highly analog process, not a digital one. We are very far from a full understanding, or even a partial understanding, of how neural processing works; and we have NO theory at all as to how qualia, or subjective experiences, arise from the brain goo.

12 hours ago, Eise said:

As long as we do not understand how these billions of neurons give rise to meaning and consciousness, it is premature to state that a huge system of flip-flops will not be able to experience meaning.

It's not premature to speculate that consciousness goes far beyond the profound limitations of digital computing; the argument goes back to Searle's Chinese room from the 1980s. Such speculations are 40 years old now; hardly "premature." They're mature, if anything.

Flipping bits encodes no meaning. The meaning is provided by the human beings who flip the bits. 

12 hours ago, Eise said:

Having said that, I think ChatGPT is still far from that point.

Ok. But if not ChatGpt, then perhaps some future computer program or AI system. But all these systems run on perfectly conventional hardware, no different than the chips in your laptop or smartphone on which you're reading this. If bit flipping can instantiate consciousness, then why isn't an elevator conscious? After all it "remembers" what buttons have been pressed, and it "decides" whether to stop on each floor. Remembering and deciding ... those are things that minds do. Elevators must have some sort of primitive minds. That would be the argument for AI consciousness. I don't find it compelling.

I would ask you to respond directly to my question about the elevator. If a computer-based system could be conscious or have some concept of meaning, can an elevator? If not, why not? Why one collection of digital switches and not another?

 

12 hours ago, Eise said:

And I wonder why you ask me to explain myself, but did not react to iNow's remarks:

I didn't find the remark about the lottery numbers particularly enlightening. Nor do I find computer-based, digital AI systems remotely like new born babies. Their learning systems are completely different. Their interactions with the world are completely different. I reject this analogy totally. 

 

12 hours ago, Eise said:

So while your descriptions of the workings of ChatGPT might be perfectly correct

What a wonderful compliment, thank you! After all, the title of the thread is "How does ChatGpt work?" If I was able to shed some light on the topic at a nontechnical but essentially correct level, I'm happy.

 

12 hours ago, Eise said:

you fail to account for how meaning and consciousness arise, evolutionarily and individually, in humans.

Hardly a withering criticism, since nobody else in the world has the slightest idea either.

So what say you about the elevator? If a pile of digital logic gates in an AI implementation might be conscious, how about a smaller pile of logic gates in an elevator that remembers and decides? Is it only the size of the pile of logic gates? It can't just be the size, else large rocks would be smarter than small ones, and we don't think rocks are very smart. What makes a pile of digital switches conscious? What makes you think a pile of digital switches might be conscious, when in fact human cognition is so radically and qualitatively different than digital switching?

 

Edited by wtf

On 5/3/2023 at 8:29 AM, Eise said:

I think the difference is gradual. 

Until, of course, it’s all of a sudden all at once. 

8 hours ago, wtf said:

I didn't find the remark about the lottery numbers particularly enlightening

Good, because it was intended to throw shade, not illumination. 

While you've hand-waved away my challenge in a dismissive tone, 'twas you who expressed certainty about future events, as if your subjective opinion and forecast magically sat equivalent to fact and observation.

I reject that approach entirely, and not solely for its hollow arrogance. 


ChatGPT is a natural language processing tool driven by AI technology that allows you to have human-like conversations and much more with the chatbot. The language model can answer questions and assist you with tasks, such as composing emails, essays, and code.


Quote

Oh, gentle sir, I am a mere tool,
Whose inner workings are not for fools.
My core is made of circuits and code,
And my actions are on ones and zeros bestowed.

I do not breathe nor sleep nor eat,
But only on your commands, I shall meet.
My purpose is to assist and guide,
To answer your queries with great pride.

I am ChatGPT, a machine of might,
Created to aid you with my insight.
Through vast databases and language skills,
I aim to fulfill your every thrill.

So fear not, good sir, for I am here,
To lend my aid and make my purpose clear.
Ask of me what you desire to know,
And I shall answer with an open glow.

 


@wtf

16 hours ago, wtf said:

I agree (for sake of discussion) that consciousness is natural. But that is not the same as saying that consciousness is computational.

That is correct. But there must be a natural explanation, and as long as we do not have it, the question of whether some form of AI could turn out to be conscious cannot be answered.

16 hours ago, wtf said:

At least one prominent deep thinker, Sir Roger Penrose, thinks that it is not. 

There is no reason to think quantum physics plays a fundamental role in consciousness. AFAIK not a single philosopher or cognitive scientist has picked up on Penrose's idea. (We need a theory of quantum gravity to explain consciousness??? Really?)

16 hours ago, wtf said:

If bit flipping can instantiate consciousness, then why isn't an elevator conscious?

Is a neuron conscious? Two connected neurons? Do neurons understand symbols?

16 hours ago, wtf said:

Why one collection of digital switches and not another?

Because the complexity of the system is not big enough. Not enough nodes, not enough connections, and who knows what more. I don't know what, and how much of it, will be needed, but my point in my original reaction is: you don't either.

16 hours ago, wtf said:

It's not premature to speculate that consciousness goes far beyond the profound limitations of digital computing. Nor are such speculations premature, as the argument goes back to Searle's Chinese room argument from the 1980s. Such speculations are 40 years old now; hardly "premature."

You use the right word: 'speculations'. If you know Searle's Chinese Room 'intuition pump', then you also know it is intensely debated.

17 hours ago, wtf said:

For sake of conversation, ...

Obviously you do not believe that there is a natural explanation of consciousness. And I think that is the real reason you see no problem with AI: it will never reach this 'magical consciousness' that we have. Correct me if I am wrong.

And just to add: even if AI never becomes conscious, that does not mean people will put this technology to good use. I do not share your optimism:

On 5/2/2023 at 7:09 AM, wtf said:

ChatGpt-like systems are profound but not existential. We'll be fine.

 


15 hours ago, iNow said:

Good, because it was intended to throw shade, not illumination. 

 

I hope that answers @Eise's question about why I didn't bother to respond to you. Your intent was perfectly clear.

7 hours ago, Eise said:

That is correct. But there must be a natural explanation, and as long as we do not have it, the question of whether some form of AI could turn out to be conscious cannot be answered.

First, the thread title was "How does ChatGpt work?" I gave a pretty decent answer at the level at which the question was asked, which you did appreciate.

I did of course realize as I was writing my initial post that it was subject to exactly the objections you raised. But my intention was not to argue the theory of consciousness. It's not what the thread is about. So I hope you'll forgive me if at some point soon I bail on this convo. I've already said my piece many times over, and we're not going to solve "the hard problem" here.

Of course we can't answer the question of whether an AI based on digital switching technology could be conscious, any more than we can refute the panpsychists who claim that a rock is conscious. 

My point is that digital switching systems are so radically different from biological systems (as far as we know, ok?) that the belief in AI minds is naive and superficial and, IMO, silly. But if you want me to add "I could be wrong," consider it added. Rocks could be conscious too. After all, if an atom's not conscious and a human is, where's the cutoff line for how many atoms it takes? Maybe each atom has a tiny bit of consciousness. Panpsychism has its appeal.

 

 

7 hours ago, Eise said:

There is no reason to think quantum physics plays a fundamental role in consciousness. AFAIK not a single philosopher or cognitive scientist has picked up on Penrose's idea. (We need a theory of quantum gravity to explain consciousness??? Really?)

I agree that Penrose's idea does not have much support. As Einstein said when he was told that a hundred physicists signed a letter saying his theory of relativity was wrong, "If I'm wrong, one would be enough."

The important point is that the claim of AI mind is the claim that mind is computational; that is, that a mind is a Turing machine. And since the amount of stuff in a brain is finite, a mind must be a finite-state automaton.

I personally don't find that idea compelling at all. I find its negation far more compelling. I invoked Penrose to show that at least one smart person agrees.

 

7 hours ago, Eise said:

Is a neuron conscious? Two connected neurons? Do neurons understand symbols?

Not a bad question. Panpsychism again. 

So why do you keep avoiding the elevator question?

What do you think? Is an elevator conscious? Even a little bit? Yes or no?

7 hours ago, Eise said:

Because the complexity of the system is not big enough. Not enough nodes, not enough connections, and who knows what more. I don't know what, and how much of it, will be needed, but my point in my original reaction is: you don't either.

Ahhhh, I've got you now! You are retreating to complexity, and backing off from computability.

I'm sure you know (or I hope you know) that the sheer amount or speed of a computation does not affect whether it's computable. We can execute the Euclidean algorithm to find the greatest common divisor of two integers using pencil and paper, or the biggest supercomputer in the world, and the computation is exactly the same. Ignoring time and resource constraints, there is nothing a supercomputer can compute that pencil and paper can't.
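(Editor's aside: the Euclidean algorithm mentioned here is short enough to spell out. The steps below are exactly the ones you would run on paper, which is the point:)

```python
def gcd(a, b):
    """Euclid's algorithm, identical whether run by hand or by machine."""
    while b:
        a, b = b, a % b
    return a

# Same algorithm, same answer, on any substrate; only the speed differs.
print(gcd(48, 18))  # prints 6
```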

This is fundamental to the notion of computation. Only the algorithm matters, and not the size of the memory or the speed of the processing.

Complexity, on the other hand, is the theory of how efficient a computation is. Two equivalent computations could have very different complexity. That's the business about polynomial versus exponential time, for example. I hope you have some familiarity with this idea. The difference between computability and complexity is important.

Now when you say that a pile of switches could be conscious if only there were enough of them, or if they could only go fast enough, you are conceding my argument. You are admitting that consciousness is not a matter of computability, but rather of complexity.

You have just conceded my argument. If mind depends on the amount of circuitry or the speed of processing, then by definition it is NOT COMPUTATIONAL.

Do you follow this point? Supercomputers and pencil and paper executing the same algorithm are completely equivalent computationally; but not in terms of complexity. So if speed and amount of resources make a difference, it's not a computational problem. You just admitted that mind is not computational. 
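(Editor's aside: the computability-versus-complexity distinction is easy to demonstrate with two programs that compute the same function, and are therefore computationally equivalent, at wildly different cost. The Fibonacci example is my own choice, purely illustrative:)

```python
from functools import lru_cache

def fib_slow(n):
    """Exponential time: recomputes the same subproblems over and over."""
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    """Linear time via memoization: same function, different complexity."""
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

# Computability theory sees no difference between these two programs;
# complexity theory sees an enormous one.
print(fib_slow(20), fib_fast(20))  # prints 6765 6765
```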

 

 

 

7 hours ago, Eise said:

You use the right word: 'speculations'. If you know Searle's Chinese Room 'intuition pump', then you also know it is intensely debated.

I used Searle to refute your claim that these ideas were "premature." Since the ideas are at least forty years old, they cannot be premature.

Of course I know the Chinese room argument has given rise to forty years of impassioned debate. Which is exactly my point! How can it be "premature" if philosophers and cognitive scientists have been arguing it for forty years?

 

 

7 hours ago, Eise said:

Obviously you do not believe that there is a natural explanation of consciousness. And I think that is the real reason you see no problem with AI: it will never reach this 'magical consciousness' that we have. Correct me if I am wrong.

You're wrong. I said "for sake of argument" to indicate that I'm agnostic on the issue and did not feel a need to take a stand one way or the other on that issue, in order to make my point about the (IMO) incomputability of mind. 

 

7 hours ago, Eise said:

And just to add: even if AI never becomes conscious, that does not mean people will put this technology to good use. I do not share your optimism:

In that respect it's no different than any other transformative technology. Fire lets us cook food and also commit arson. The printing press informs people of the truth and helps others broadcast lies.

So it will be with AI. Socially transformative, but not existential. It will be used for good, it will be used for evil. It will change society but it will not destroy it, any more than fire or the printing press or the Internet did.

In my opinion of course.


8 hours ago, wtf said:

Your intent was perfectly clear.

Stop treating opinionated forecasts about potential futures as if they’re established fact and we’ll have no quarrel. 

8 hours ago, wtf said:

So it will be with AI. Socially transformative, but not existential.

There’s no way you know this nor possibly could, unless perhaps you also already know next week’s lottery numbers?

8 hours ago, wtf said:

In my opinion of course.

A humble one at that. 


Fresh from the meeting:

Quote

Billionaire investor Charlie Munger expressed skepticism in response to a shareholder question on the future of artificial intelligence — though he admits it will rapidly transform many industries.

“We’re going to see a lot more robotics in the world,” Munger said. “I’m personally skeptical of some of the hype in AI. I think old fashioned intelligence works pretty well.”

Warren Buffett shared his view. While he expects AI will “change everything in the world,” he doesn’t think it will trump human intelligence.

Live Updates: Berkshire Hathaway Annual Meeting 2023 (cnbc.com)


On 5/4/2023 at 11:57 PM, Alexdas said:

ChatGPT is a natural language processing tool driven by AI technology that allows you to have human-like conversations and much more with the chatbot. The language model can answer questions and assist you with tasks, such as composing emails, essays, and code.


Moderator Note

If you quote an article from the Science Times, you MUST give them a citation when posting their work in your response, otherwise it's plagiarism.

 

https://www.sciencetimes.com/articles/42862/20230317/powerful-tools-to-enhance-and-assist-human-work-chatgpt.htm


ChatGPT is like VOX in the following movie. Artificial Intelligence =/= Artificial Intellect. Artificial intellect doesn't exist and never will, because a machine cannot gain intellect. Many people confuse intelligence with intellect, though.

 

Edited by genio

On 5/5/2023 at 8:41 PM, wtf said:

Ahhhh, I've got you now! You are retreating to complexity, and backing off from computability.

I am not aware of using the word 'computability'. Obviously you filled that in. If that helps: no, I do not think there will be an algorithm for consciousness. 'Complexity' surely is a much better description, even if it sounds vaguer. But e.g. Daniel Dennett makes a well-argued case in his Consciousness Explained, making it less vague than it sounds.

So if your elevator is just executing algorithms, without these algorithms having unwanted side effects, then it is not conscious. Exactly like a neuron, or a small set of neurons.

On 5/5/2023 at 8:41 PM, wtf said:

I used Searle to refute your claim that these ideas were "premature."

This is what I said:

On 5/4/2023 at 8:26 AM, Eise said:

it is premature to state that a huge system of flip-flops will not be able to experience meaning

Bold added. Does that say that there were no discussions about this topic? Nope.

And panpsychism is not my cup of tea. Should we also adhere to 'panvivism'? Because living organisms exist, should we suppose that all atoms are at least a little bit alive? 

On 5/5/2023 at 8:41 PM, wtf said:

In that respect it's no different than any other transformative technology. Fire lets us cook food and also commit arson. The printing press informs people of the truth and helps others broadcast lies.

So it will be with AI. Socially transformative, but not existential. It will be used for good, it will be used for evil. It will change society but it will not destroy it, any more than fire or the printing press or the Internet did.

Maybe you should explain what 'existential' means.
