
ChatGPT logic


Genady


As I see it, there are several possibilities not considered, either in the question or in the answer.

My (correct) answer is: since A's mother has not been declared alive, no one is a possible answer if A's mother is in fact deceased.

 


Good question. I don't know why we should assume B is male. Or, another possibility: why couldn't B be a female of reproductive age who had a child on her own using artificial insemination or IVF?

As for deceased, my mother is still my mother even though she is deceased. I hope the linguistic basis for this is clear: mother is a term that defines a relationship, even if that relationship lies in the past.


Also, we should not assume that A and B are necessarily persons. If they are, say, cells or some asexually reproducing organisms, then B is the only answer.

Anyway, by Occam's principle, with the data given, B is the best answer, isn't it?

Update.

With the follow-up questions, this AI becomes ridiculous and self-contradictory:

[screenshot of a ChatGPT exchange]


10 hours ago, TheVat said:

Good question. I don't know why we should assume B is male. Or, another possibility: why couldn't B be a female of reproductive age who had a child on her own using artificial insemination or IVF?

As for deceased, my mother is still my mother even though she is deceased. I hope the linguistic basis for this is clear: mother is a term that defines a relationship, even if that relationship lies in the past.

Well, as I said, I disagree, though it is a fine linguistic point.

The correct tense to use would be the past, not the present.

However, we all seem agreed that there are plenty of different possibilities.


I agree that there are fine linguistic points to be made, involving, among other things, whether we are allowed to extend the possibilities to cells, to deceased people, etc.

But ordinary language has a lot of context attached to it, which results in the answering party filtering out possible answers that probably are not relevant to what the asking party wants to know.

I would therefore address the apparent "bug" that the system ignores the obvious answer (provided B is a woman), which is what I find most interesting.

My guess would be that AI systems learn by experience, and we, in our role as experience-based learning machines --and AI engines try to mimic us in a way-- are rarely fed questions whose answer is implied in the question itself. So the system has not been fed enough statistics to face a situation in which the answer is implicit in the question. Or not often enough.

In Spanish we have this joke --that you normally play on kids-- of asking "What colour is St James' white horse?" My father was particularly fond of "Who's Zebedee's daughters' father?"

Kids do not expect the answer to be implied by the question, so they sometimes get confused. Maybe AI systems suffer from some version of this glitch, which seems to be based more on what you expect a question to be about than on a clean logical parsing of said question.

And the reason may well be that the AI engine, as kids do too, bases its "expectations" on previous experience, and thus approaches the question based on these "expectations."


23 minutes ago, Genady said:

I like this guess, @joigus. Here is a little evidence supporting it:

[screenshot of a ChatGPT exchange]

Well done! You've just conducted an experiment to test the hypothesis. The chat engine is clearly assuming something --B's sex-- that's not literally implied by the question.

It seems as though the system is assuming the answer must be based on a syllogism, not a "loop," or a truth to be derived from the question itself.

It's good to have you back, BTW.

I wonder if there's a way to guarantee that's what's going on here.


1 hour ago, Genady said:

I like this guess, @joigus. Here is a little evidence supporting it:

[screenshot of a ChatGPT exchange]

Going back to the original. 

 

In this modern day and age of AIs, surely AIs (along with everybody else) should be aware that B may not be deceased, but simply no longer a woman?

Furthermore, in many countries the terms husband and wife are now blurred by same-sex marriages.

So my comment above still stands.

16 hours ago, Genady said:

I've asked ChatGPT a question and got an answer, which is correct, but ... Here it is:

[screenshot of the original ChatGPT question and answer]

Why doesn't it consider B herself?

 

 

By the way, can somebody enlighten me as to what ChatGPT is, please?


2 hours ago, Genady said:

@joigus, it doesn't look like the result of logical assumptions, because on the one hand, it derives truth from the question itself in this example:

[screenshot of a ChatGPT exchange]

and on the other hand, it is incapable of a simple syllogism in this example:

[screenshot of a ChatGPT exchange]

But I didn't mean that it derives its conclusions from pure logical assumptions. I meant the opposite: that there's an apparent element of empiricism, as is to be expected from a machine that learns from experience:

5 hours ago, joigus said:

My guess would be that AI systems learn by experience, [...]

 


Yes, @joigus, the emphasis on experience, i.e., on training statistics, in your hypothesis seems to me the right way to analyze this behavior. It was the following specification that looked unsupported:

5 hours ago, joigus said:

It seems as though the system is assuming the answer must be based on a syllogism, not a "loop," or a truth to be derived from the question itself.

 


5 hours ago, Genady said:

@studiot, there are thousands of articles about ChatGPT, here is one from the horse's mouth: ChatGPT: Optimizing Language Models for Dialogue (openai.com)

Thanks for the info.

 

So am I right in assuming that

Your red box denotes an input question

and your green box denotes the AI response?

 

It seems to me that the AI is conditioned to always give an answer, unlike a human.

Isn't this a drawback?


@joigus I guess that the crucial difference between a human's and ChatGPT's experience is in the context: the latter is an experience of language, while the former is an experience of language-in-real-life. For example, we easily visualize a daughter and her mother, and in this mental picture the mother is clearly older than the daughter. ChatGPT, instead, knows only how age comparisons appear in texts.

 

27 minutes ago, studiot said:

Thanks for the info.

 

So am I right in assuming that

Your red box denotes an input question

and your green box denotes the AI response?

 

It seems to me that the AI is conditioned to always give an answer, unlike a human.

Isn't this a drawback?

Yes, you're right: the red box denotes what I say and the green one denotes what the AI says.

No, sometimes it says that it cannot answer, with some explanation of why.


1 hour ago, Genady said:

Yes, @joigus, the emphasis on experience, i.e., on training statistics, in your hypothesis seems to me the right way to analyze this behavior. It was the following specification that looked unsupported:

 

Oh, I see. "Assuming a syllogism" was a bad choice of words. By "assuming a syllogism" I was referring to the illusion it creates, IMO. But the system is not thinking logically, at least not 100% so. The only logic is a logic of "most trodden paths," so to speak.

I may be wrong, of course. Perhaps modern AI implements modules of propositional logic in some way. I'm no expert. 😊

I liked your "experiments" anyway.


1 hour ago, Genady said:

Yes, you're right: the red box denotes what I say and the green one denotes what AI says.

No, sometimes it says that it cannot answer, with some explanation why. 

Noted, thanks.

1 hour ago, joigus said:

I liked your "experiments" anyway.

Yes, I am watching with interest and learning lots as I don't really know much about AI.


11 hours ago, TheVat said:

So AI can now compose dull poetry.  That said, the line 

discussions and debates that never end

seems uncannily accurate!  😀

 

 

 

 

We can conclude that this is a generic feature of science forums because this guy doesn't know anything about scienceforums.net specifically:

[screenshot of a ChatGPT exchange]


Has anyone tried repeating the same question multiple times? If ChatGPT works in a similar manner to GPT-3, it samples from a distribution of possible tokens (not quite letters/punctuation) at every step. There's also a temperature parameter, T, which allows the model to preferentially sample from the tails and give less likely answers.
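To make that concrete, here is a minimal Python sketch of temperature sampling in general. The function name and the toy logits are my own invention for illustration; ChatGPT's actual decoding code is not public, so treat this as a generic illustration of the technique, not the real thing:

```python
import numpy as np

def sample_next_token(logits, temperature=1.0):
    """Sample one token id from raw model scores (logits).

    Lower temperature concentrates probability on the likeliest tokens;
    higher temperature flattens the distribution, so tail tokens get
    picked more often.
    """
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()                      # for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return np.random.choice(len(probs), p=probs)

# Toy vocabulary of three tokens with made-up scores.
logits = [2.0, 1.0, 0.1]
print([sample_next_token(logits, temperature=0.7) for _ in range(10)])
```

At temperatures close to zero this effectively becomes greedy decoding, which would explain why repeated runs can differ in wording more than in substance.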


53 minutes ago, Prometheus said:

Has anyone tried repeating the same question multiple times? If ChatGPT works in a similar manner to GPT-3, it samples from a distribution of possible tokens (not quite letters/punctuation) at every step. There's also a temperature parameter, T, which allows the model to preferentially sample from the tails and give less likely answers.

Yes, I have. The answers differed in wording, not in content.


8 hours ago, Prometheus said:

Has anyone tried repeating the same question multiple times? If ChatGPT works in a similar manner to GPT-3, it samples from a distribution of possible tokens (not quite letters/punctuation) at every step. There's also a temperature parameter, T, which allows the model to preferentially sample from the tails and give less likely answers.

I had it write me a resume today as a test. Just told it what job and level it was for and that I wanted my resume to successfully get through the AI screening programs recruiters today so often use before actually looking at the submissions. It was solid. 


9 hours ago, iNow said:

I had it write me a resume today as a test. Just told it what job and level it was for and that I wanted my resume to successfully get through the AI screening programs recruiters today so often use before actually looking at the submissions. It was solid. 

Interesting. +1


On 12/9/2022 at 10:46 PM, iNow said:

I had it write me a resume today as a test. Just told it what job and level it was for and that I wanted my resume to successfully get through the AI screening programs recruiters today so often use before actually looking at the submissions. It was solid. 

This article lists "best" uses for ChatGPT, and the last one is similar to what you did, I think. It also links to another article, about ChatGPT's limitations. 

The 5 Best Uses (So Far) for ChatGPT's AI Chatbot (cnet.com)

