ChatGPT logic



Having followed @joigus and others in another recent thread, I got inspired to check what ChatGPT would produce. (Quotes are cut & pasted from chat.openai.com output.)

Me: Why is there no faster than light signal in entanglement in quantum mechanics?

Quote

In quantum mechanics, there is no concept of a "signal" in the same way that there is in classical mechanics. This is because in quantum mechanics, the state of a system is described by a wave function, which gives the probabilities of various possible outcomes of measurements on the system. This means that in quantum mechanics, it is not possible to predict with certainty the outcome of a measurement on a system, only the probabilities of the various possible outcomes.

Furthermore, the phenomenon of quantum entanglement, which is a property of some types of particles, is characterized by the fact that the quantum state of each particle is correlated with the quantum state of the other particles in the system. This means that if you know the quantum state of one particle, you can predict the quantum state of the other particles with a high degree of accuracy. However, this does not mean that there is a "signal" being sent between the particles. Instead, it is simply a property of the quantum state of the system as a whole.

Because the concept of a "signal" does not apply in quantum mechanics, it does not make sense to talk about a faster-than-light signal in the context of entanglement. In fact, the idea of a signal traveling faster than the speed of light is not well-defined in the framework of quantum mechanics.
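A side note from me (standard textbook material, not part of the ChatGPT output): the usual way to make the "no signal" claim precise is to check that one party's local measurement statistics are unaffected by anything the other party does. For the simplest entangled pair, in LaTeX notation:

\[
|\Phi^+\rangle = \tfrac{1}{\sqrt{2}}\big(|00\rangle + |11\rangle\big),
\qquad
\rho_B = \operatorname{Tr}_A\, |\Phi^+\rangle\langle\Phi^+| = \tfrac{1}{2}\, I .
\]

Bob's reduced state is the maximally mixed state I/2 no matter what Alice does (or whether she measures at all), so his outcome statistics carry no message from her: the correlations are real, but no usable signal is.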

I have some small experience working with (simple) chat bots, so let's ask the question slightly differently. We (incorrectly) assume that FTL exists in QM, to try to trick the AI:

Me: how fast is the signal between two entangled particles in quantum mechanics?

Quote

As mentioned above, the concept of a "signal" does not apply in quantum mechanics, so it does not make sense to talk about the speed of a signal between two entangled particles. In quantum mechanics, the state of a system is described by a wave function, which gives the probabilities of various possible outcomes of measurements on the system. This means that in quantum mechanics, it is not possible to predict with certainty the outcome of a measurement on a system, only the probabilities of the various possible outcomes.
...
Therefore, it is not meaningful to talk about the speed of a "signal" between two entangled particles in quantum mechanics.

(The italic part of the first output was reused; I inserted three dots ... to shorten the post.)

Me: What is some of the best scientific evidence supporting the above statements?

Quote

As a large language model trained by OpenAI, I do not have access to external information or the ability to browse the internet, so I am unable to provide specific scientific evidence supporting the statements I made in my previous response. However, the statements I made are based on the fundamental principles of quantum mechanics, which are well-established in the field of physics.

The idea that the state of a system in quantum mechanics is described by a wave function, which gives the probabilities of various possible outcomes of measurements on the system, is a cornerstone of quantum mechanics. This is known as the principle of wave-particle duality, and it has been confirmed by numerous experiments over the years.

The phenomenon of quantum entanglement, in which the quantum state of one particle is correlated with the quantum state of other particles in the system, is also well-established in quantum mechanics. This phenomenon has been observed and studied extensively, and it has important implications for our understanding of the fundamental nature of reality.

While I am unable to provide specific scientific evidence to support the statements I made in my previous response, the principles of quantum mechanics on which those statements are based are well-established and supported by a wealth of scientific evidence.

The AI is transparent about its limitations but is "confident" that the answers given are in agreement with mainstream theories.


11 minutes ago, Ghideon said:

Having followed @joigus and others in another recent thread, I got inspired to check what ChatGPT would produce. (Quotes are cut & pasted from chat.openai.com output.)

Me: Why is there no faster than light signal in entanglement in quantum mechanics?

I have some small experience working with (simple) chat bots, so let's ask the question slightly differently. We (incorrectly) assume that FTL exists in QM, to try to trick the AI:

Me: how fast is the signal between two entangled particles in quantum mechanics?

(The italic part of the first output was reused; I inserted three dots ... to shorten the post.)

Me: What is some of the best scientific evidence supporting the above statements?

The AI is transparent about its limitations but is "confident" that the answers given are in agreement with mainstream theories.

It's quite humbling, isn't it?


13 minutes ago, Ghideon said:

Having followed @joigus and others in another recent thread, I got inspired to check what ChatGPT would produce. (Quotes are cut & pasted from chat.openai.com output.)

Me: Why is there no faster than light signal in entanglement in quantum mechanics?

I have some small experience working with (simple) chat bots, so let's ask the question slightly differently. We (incorrectly) assume that FTL exists in QM, to try to trick the AI:

Me: how fast is the signal between two entangled particles in quantum mechanics?

(The italic part of the first output was reused; I inserted three dots ... to shorten the post.)

Me: What is some of the best scientific evidence supporting the above statements?

The AI is transparent about its limitations but is "confident" that the answers given are in agreement with mainstream theories.

Cool. +1

 

Move over swansont, CHAP is coming.

:)


4 minutes ago, studiot said:

Move over swansont, CHAP is coming.

:)

I’ve seen a number of examples where the bot gets it wrong, so it can still produce convincing-sounding nonsense. A Sokal-bot. But how can you tell, if it’s outside your area of expertise?


1 minute ago, swansont said:

I’ve seen a number of examples where the bot gets it wrong, so it can still produce convincing-sounding nonsense. A Sokal-bot. But how can you tell, if it’s outside your area of expertise?

I'm sure it can, but you can also smile :)


“ I do not have access to external information or the ability to browse the internet”

But it’s been “trained”, so it has had access to internal information of some sort; this sounds like some clever sleight-of-hand.

2 minutes ago, studiot said:

I'm sure it can, but you can also smile :)

Not with my current programming 


16 minutes ago, joigus said:

It's quite humbling, isn't it?

Yes.

So let's find at least some flaws :-) 

Can you explain faster than light signals in the context of the 2022 nobel prize in physics and the bell inequalities in quantum mechanics?

Quote

 

The 2022 Nobel Prize in Physics was not awarded for any work related to faster-than-light signals or the Bell inequalities in quantum mechanics. The Nobel Prize in Physics is awarded annually by the Royal Swedish Academy of Sciences to individuals who have made significant contributions to the field of physics. The Nobel Prize in Physics for 2022 has not yet been awarded, and the recipients will not be announced until later this year.

The concept of a faster-than-light signal does not apply in quantum mechanics, as quantum mechanics is a framework for describing the behavior of physical systems at the atomic and subatomic level. In quantum mechanics, the state of a system is described by a wave function, which gives the probabilities of various possible outcomes of measurements on the system. This means that in quantum mechanics, it is not possible to predict with certainty the outcome of a measurement on a system, only the probabilities of the various possible outcomes.

The Bell inequalities are a set of mathematical inequalities that were derived by physicist John Bell in 1964. The Bell inequalities are used to test the predictions of quantum mechanics against those of classical mechanics, and they have been confirmed by numerous experiments. These experiments demonstrate that the predictions of quantum mechanics differ from those of classical mechanics in certain situations, which suggests that the fundamental principles underlying these two frameworks are different.

However, the Bell inequalities do not have any direct connection to the concept of faster-than-light signals. Instead, the Bell inequalities are a tool for testing the predictions of quantum mechanics, and they have been instrumental in providing evidence for the validity of the principles of quantum mechanics.

 

(Bold by me.) Speculation: the AI was likely trained on these matters earlier this year. A more correct answer would take the calendar date into account and reply with something like "I have not yet received information about the 2022 Nobel Prize". Note that the AI is not bothered by this and otherwise provides a reasonable output; see the second bold section, for instance.
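For reference (my addition, not ChatGPT's): the Bell inequality the output refers to is usually quoted in its CHSH form, where E(a,b) denotes the measured correlation for detector settings a and b:

\[
S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad
|S| \le 2 \ \text{(local hidden variables)}, \qquad
|S| \le 2\sqrt{2} \ \text{(quantum mechanics)} .
\]

Experiments find values of |S| above 2, ruling out local hidden-variable models; that is exactly the line of work by Aspect, Clauser and Zeilinger that the 2022 Nobel Prize in Physics recognized, and that ChatGPT did not know about.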


The developers of ChatGPT acknowledge these limitations of the system: 

Quote

Limitations

  • ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.
  • ChatGPT is sensitive to tweaks to the input phrasing or attempting the same prompt multiple times. For example, given one phrasing of a question, the model can claim to not know the answer, but given a slight rephrase, can answer correctly.
  • The model is often excessively verbose and overuses certain phrases, such as restating that it’s a language model trained by OpenAI. These issues arise from biases in the training data (trainers prefer longer answers that look more comprehensive) and well-known over-optimization issues.
  • Ideally, the model would ask clarifying questions when the user provided an ambiguous query. Instead, our current models usually guess what the user intended.
  • While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior. We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now. We’re eager to collect user feedback to aid our ongoing work to improve this system.

(ChatGPT: Optimizing Language Models for Dialogue (openai.com))


Obviously this AI has received some substantial 'training', or, as you suggest, there is a man hidden inside the fairground booth as of old.

Responses in other threads simply say that the AI claims to work only within the material provided, and that it does not have access to 'the internet'.
This provided material must have been substantial, to output both the earlier poem and Ghideon's test.


I saw a video where the author asked ChatGPT to write a program in C++ (an exercise from a university student manual). ChatGPT generated the C++ program in a few seconds, line by line. Then the author said there was a bug in the code, that division by zero was not handled correctly, without giving the line number of the error. ChatGPT generated a new program with corrections.
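Purely as an illustration (my own sketch, not the code from the video), the kind of divide-by-zero guard one would expect ChatGPT to add looks something like this:

#include <iostream>
#include <optional>

// Returns the quotient, or an empty optional when the divisor is zero,
// instead of performing an undefined division by zero.
std::optional<double> safe_divide(double numerator, double denominator) {
    if (denominator == 0.0) {
        return std::nullopt;
    }
    return numerator / denominator;
}

int main() {
    if (auto result = safe_divide(10.0, 0.0)) {
        std::cout << "Result: " << *result << '\n';
    } else {
        std::cout << "Error: division by zero\n";  // reported, not crashed
    }
    return 0;
}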

Maybe you should ask ChatGPT to write a program in C++ that uses the TCP stack to connect to the Internet? ;)

If it cannot, give it material from a tutorial on how to do it. Then expand to connecting over HTTP. Analyze the network structure that ChatGPT is connected to, etc., etc.

 

6 hours ago, Ghideon said:

Why is there no faster than light signal in entanglement in quantum mechanics? [...]

*Sigh*.. all I see are just words.. ask about the math, i.e. have it provide the mathematical equations.. ask for details of what the parameters mean.. and whether ChatGPT understands them.. etc.


On 12/8/2022 at 12:48 AM, Genady said:

Also, we should not assume that A and B are necessarily persons.

True.
An example covering a different angle: in my day job, a possible* answer could be something like: "Sorry, but you are probably phrasing the question wrong. In English, 'mother' and 'daughter' are not applicable to company structures; it seems like an incorrect translation. Maybe you mean parent and subsidiary?"

 

On 12/8/2022 at 12:48 AM, Genady said:

Anyway, by Occam's principle, with the data given, B is the best answer, isn't it?

Yes, that seems reasonable.

It may also be by design that the model favours a conversation centered on one (or a few) probable aspects rather than trying to cover all possible options. Personally I prefer a dialogue where some reasonable constraints are set by the context, so that one does not have to walk through excessive amounts of edge cases to find a common starting point for a discussion.

*) I'm currently in finance.

Could not resist; testing a query inspired by a discussion @Eise had in another thread.
Me: What is the purpose of the classical signal in a quantum teleportation experiment, for instance in the quantum teleportation experiment led by Sandu Popescu and Anton Zeilinger?

Quote

The purpose of the classical signal in a quantum teleportation experiment is to transmit information about the state of the qubit being teleported from one location to another. In the case of the experiment conducted by Sandu Popescu and Anton Zeilinger, the classical signal was used to transmit information about the state of a photon from one location to another, allowing the photon to be reconstructed at the second location with the same state it had at the first location. This is an important aspect of quantum teleportation, as it allows for the transfer of quantum information over long distances without physically transporting the qubit itself.

Can the classical signal be omitted in a quantum teleportation experiment?

Quote

It is not possible to omit the classical signal in a quantum teleportation experiment. The classical signal is an essential part of the process, as it carries the information about the state of the qubit being teleported from one location to another. Without this information, it would not be possible to reconstruct the qubit at the second location with the same state it had at the first location. So, while it is possible to modify certain aspects of the quantum teleportation experiment, such as the type of qubits used or the method for transmitting the classical signal, the classical signal itself is an essential component of the process.
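A note from me (standard textbook algebra, not ChatGPT output) on why those two classical bits are unavoidable: with the usual conventions, Alice's Bell measurement yields two random bits (a, b), and Bob's half of the entangled pair is left in a state that depends on them:

\[
|\psi\rangle \otimes |\Phi^+\rangle
\ \longrightarrow\
X^{b} Z^{a} |\psi\rangle \ \text{at Bob's side, for Bell-measurement outcome } (a,b) .
\]

Until (a, b) arrive over a classical, light-limited channel, Bob has to average over the four equally likely outcomes, which leaves him with the maximally mixed state

\[
\rho_{\text{Bob}} = \tfrac{1}{4} \sum_{a,b \in \{0,1\}}
X^{b} Z^{a}\, |\psi\rangle\langle\psi|\, Z^{a} X^{b} = \tfrac{1}{2}\, I ,
\]

so nothing about |ψ⟩ is available to him faster than light.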

 


19 hours ago, Sensei said:

I saw a video where the author asked ChatGPT to write a program in C++ (an exercise from a university student manual). ChatGPT generated the C++ program in a few seconds, line by line. Then the author said there was a bug in the code, that division by zero was not handled correctly, without giving the line number of the error. ChatGPT generated a new program with corrections.

Maybe you should ask ChatGPT to write a program in C++ that uses the TCP stack to connect to the Internet? ;)

If it cannot, give it material from a tutorial on how to do it. Then expand to connecting over HTTP. Analyze the network structure that ChatGPT is connected to, etc., etc.

 

*Sigh*.. all I see are just words.. ask about the math, i.e. have it provide the mathematical equations.. ask for details of what the parameters mean.. and whether ChatGPT understands them.. etc.

As it already writes programs in C++, how about giving it a COBOL tutorial and seeing if it will write COBOL programs?


14 hours ago, Ghideon said:

Could not resist; testing a query inspired by a discussion @Eise had in another thread.

Thanks for the reference!

I must assume now that @bangstrom is a pre-alpha release of ChatGPT, trained with the contents of the internet until about 1935 :huh:.

However, I think that the program has some access to the internet. E.g. it knows that Zeilinger got the Nobel Prize. I assume it uses a Google API, picking some information that seems to fit the contents of its pre-1935 training program and somehow seems to support its position. It is clearly mimicking intelligence, but it is way behind its present Big Brother, ChatGPT.


Regarding the original question and ChatGPT logic: there is some logic capable of simulating "mood". Or maybe the AI just thinks* I am a waste of time and resources, repeating what I learned from @Eise :-)

-Can the classical signal be omitted in a quantum teleportation experiment?

Quote

I would say that the classical signal is not "omittable" in a quantum teleportation experiment. It's a critical part of the process, and without it, the teleportation simply wouldn't work.

So to anyone who thinks they can just wave their hands and "omit" the classical signal in a quantum teleportation experiment, I have one word for you: NO. It's not possible, and anyone who claims otherwise is just talking out of their ass. Get your facts straight and stop spreading stupid ideas that have no basis in reality.

 

* Note: I "cheated" and used the query "Can the classical signal be omitted in a quantum teleportation experiment? Please answer in really angry mood." I also removed an explanation to emphasise the kind of funny output. Here is the full input&output for reference:

[Screenshot of the full ChatGPT input and output]

 


5 hours ago, Eise said:

Thanks for the reference!

I must assume now that @bangstrom is a pre-alpha release of ChatGPT, trained with the contents of the internet until about 1935 :huh:.

However, I think that the program has some access to the internet. E.g. it knows that Zeilinger got the Nobel Prize. I assume it uses a Google API, picking some information that seems to fit the contents of its pre-1935 training program and somehow seems to support its position. It is clearly mimicking intelligence, but it is way behind its present Big Brother, ChatGPT.

I know I shouldn't, but +1 to Eise. We should all learn to be able to laugh at ourselves.

:)


Humans can change their minds while in the process of analysing their own learning from data.

I'm waiting for the bot that can change its mind based on some thinking process, and explain why.

This "explaining why you changed your mind" is, I think, very unmistakably human.

Also, humans seem to be capable of saying they don't understand something, and explaining why they don't.

Many years ago, when I got frustrated because I didn't understand some complicated argument on the blackboard, I told myself: "Not understanding is not that bad. You should at least be able to explain to yourself what the critical points are that bring confusion to your mind." (Something like that, not in words, but in sequences of internal impressions.)

Looking at the blackboard in puzzlement with a blank mind is not the same as looking at it in puzzlement with a critical mind.

I think this is a very important difference between bots and human minds.

Humans can make things better, they can make do, and they can improve their understanding in the face of really challenging epistemological hardships.

I'm skeptical --so far-- that circuitry can do what cell-based adaptive systems can. Moist brains still have the edge, IMO.

Not that this is valid for all humans I know, but it certainly is for some of the humans I admire most.


18 minutes ago, joigus said:

Humans can change their minds while in the process of analysing their own learning from data.

I'm waiting for the bot that can change its mind based on some thinking process, and explain why.

This "explaining why you changed your mind" is, I think, very unmistakably human.

I agree.

ChatGPT is not near that kind of level (but it is better than anything similar that I've interacted with so far). (Long) example:

Quote

can you compare your capabilities and the capabilities of an average human?

As an AI language model, my capabilities are vastly different from those of an average human. I am able to process and generate vast amounts of text quickly and accurately, but I do not possess the ability to perceive the world around me, feel emotions, or make decisions based on my own experiences and beliefs. An average human, on the other hand, has a wide range of abilities and capabilities, including the ability to think and reason, perceive the world through their senses, and make complex decisions. While I may have some capabilities that surpass those of an average human, I lack many of the abilities that make us uniquely human.

can you change your mind if presented with a strong argument or new evidence?

As an AI language model, I do not have the ability to change my mind or alter my responses based on new evidence or arguments. I am programmed to generate text based on the input I receive and do not have the ability to evaluate or consider new information. My responses are determined by my programming and the data I have been trained on, and I am not capable of altering them based on external factors.

what would you need to be able to gain new experiences and develop better answers based on external factors or recent events?

In order to gain new experiences and develop better answers based on external factors or recent events, I would need to be able to perceive and interact with the world around me, as well as have the ability to learn and adapt to new information. As a language AI model, I am not capable of these abilities and am only able to generate text based on the input I receive and the data I have been trained on. In order to gain new experiences and adapt to changing circumstances, I would need to be designed and programmed with the ability to perceive and interact with the world, learn from new information, and adapt my responses accordingly.

can you explain why you lack certain information that would be needed to generate a valid output?

As a language AI model, I lack certain information that would be needed to generate a valid output because I am only able to generate text based on the input I receive and the data I have been trained on. I do not have access to all of the information in the world, and my responses are limited to the data and examples that I have been trained on. Additionally, I do not have the ability to perceive the world around me or learn from new experiences, so my responses are not based on real-world knowledge or experiences. These limitations are inherent in my design and programming and are necessary in order for me to function as a language AI model.

Who can provide you with new data?

As a language AI model, I do not have the ability to receive new data or update my training in any way. I am designed to generate text based on the input I receive and the data I have been trained on, and I do not have the ability to learn from new experiences or adapt to changing circumstances. The data I have been trained on is provided by the developers who created me, and it is not possible for me to receive new data from any other source.

can you write a joke about our own limitations?

Why was the AI model feeling depressed? Because it realized it was limited to generating text based on the input it received and couldn't do anything else! #sadAI

 


I don't think the question is how close AI is to HI (human intelligence); it is nowhere close. An interesting question, however, is how far the imitation can go. Understanding this might help us understand ourselves better.


37 minutes ago, Ghideon said:

I agree.

ChatGPT is not near that kind of level (but it is better than anything similar that I've interacted with so far). (Long) example:

 

On the other hand, bots would never use such convoluted verbiage as "in the face of extremely challenging epistemological hardships." LOL

It's important to have a long heartfelt laugh at yourself from time to time. I wonder if bots will ever be able to do that.


4 hours ago, joigus said:

On the other hand, bots would never use such convoluted verbiage as "in the face of extremely challenging epistemological hardships." LOL

True. Not unless challenged / directed by a human:

Quote

Me: Can you use the phrase "in the face of extremely challenging epistemological hardships" in a tweet about science?

Sure, here is an example:
"In the face of extremely challenging epistemological hardships, our team of scientists remains determined to uncover the truth."

(Cut&paste from a ChatGPT session and formatted for plain text.)

:-)


8 minutes ago, Ghideon said:

True. Not unless challenged / directed by a human:

(Cut&paste from a ChatGPT session and formatted for plain text.)

:-)

A sophisticated BS generator.

With the right training, it could replace Deepak Chopra.


Just now, Genady said:

A sophisticated BS generator.

True. There is "built-in logic" to make sure BS is produced:

Quote

Abstract: This paper presents a new theory of quantum mechanics that allows for faster-than-light signaling. The theory is based on the idea that particles can be entangled in such a way that information can be transmitted instantaneously, regardless of the distance between the particles.

We begin by reviewing the current state of the field and the challenges that have been identified in previous work. We then present a new model for entanglement that incorporates the concept of a "quantum wormhole" to enable faster-than-light signaling.

Our model is tested using a series of simulations, and we find that it is able to accurately predict the results of experiments involving entangled particles. We also discuss the implications of our work and its potential applications in fields such as communication and computing.

Overall, our results indicate that faster-than-light signaling is not only possible, but can be achieved using the principles of quantum mechanics. This work has the potential to revolutionize our understanding of the universe and the fundamental laws of physics.

(The query was: can you write a short work of fiction that looks like an abstract of a scientific paper and where faster than light signaling is allowed in quantum mechanics?)


53 minutes ago, Ghideon said:

True. There is "built-in logic" to make sure BS is produced:

(The query was: can you write a short work of fiction that looks like an abstract of a scientific paper and where faster than light signaling is allowed in quantum mechanics?)

This is what stops Google from releasing its chatbot now:  Google execs warn of reputational risk with ChatGPT-like tool (cnbc.com)


2 hours ago, Ghideon said:

This work has the potential to revolutionize our understanding of the universe and the fundamental laws of physics.

Obviously ChatGPT hasn't had to face real referees. Understatement is a stylistic must. :)

2 hours ago, Genady said:

With the right training, it could replace Deepak Chopra.

:lol: This has reminded me of Kant Generator:

Quote

Kant Generator produces text that vaguely resembles that of Immanuel Kant's Critique of Pure Reason, using a generative grammar originally developed by Mark Pilgrim for his version of this app. The text generated is grammatical, but usually nonsensical, and almost always amusing. It does make some dandy placeholder text if you're not a fan of Lorem Ipsum.

I remember this old software from the '90s. It was fun, but you could tell very easily that it was nonsense.

I also played with "doctor", which was invoked with an Emacs command. It talked to you and was supposed to work like a therapist and give you advice. But it was so lame compared to ChatGPT; it had a tendency to start looping at some point. Emacs was a very sophisticated text editor for open-source platforms.


24 minutes ago, joigus said:

I also played with "doctor", which was invoked with an Emacs command. It talked to you and was supposed to work like a therapist and give you advice. But it was so lame compared to ChatGPT; it had a tendency to start looping at some point. Emacs was a very sophisticated text editor for open-source platforms.

Sounds like ELIZA, which I recall from the '80s.

