
AI's Tools Lying to it. What are the implications?


14 minutes ago, exchemist said:

It’s not cynicism (apart perhaps from my suspicion about the motives of these AI corporations). It’s just my observation of what AI output is like on these forums, plus what I read. The output is verbose and uses terms that seek to impress, like a bad undergraduate essay. The style is ingratiating, usually starting by saying something to make the user think he is brilliant. And the content seems to be, as often as not, wrong in some respect. People will end up emotionally invested in the trust they put in a fundamentally unreliable source of information.

People like my son are already aware of the pitfalls of the addictive nature of many social media channels. He has deleted a number, like Facebook and Snapchat, as he was wasting time on them. My own experience at work, even with email, is that the immediacy of response attracts one’s attention, distorts one’s priorities and damages one’s attention span. AI chatbots are clever at simulating human conversation. But it is synthetic: there is no mind behind it. So that makes them even more dangerous than social media.

Can this kind of stuff be used in disinformation campaigns?

Is it possible for agents* to feed content into these programs that is deliberately false?

Does AI have any means of "truth weighting" the content it takes in, or is it entirely GIGO (garbage in, garbage out)?

*Obviously totalitarian regimes in the first instance, but also in a "spy vs spy" context (or even advertising campaigns?).

14 minutes ago, exchemist said:

It’s not cynicism (apart perhaps from my suspicion about the motives of these AI corporations). It’s just my observation of what AI output is like on these forums, plus what I read. The output is verbose and uses terms that seek to impress, like a bad undergraduate essay. The style is ingratiating, usually starting by saying something to make the user think he is brilliant. And the content seems to be, as often as not, wrong in some respect. People will end up emotionally invested in the trust they put in a fundamentally unreliable source of information.

People like my son are already aware of the pitfalls of the addictive nature of many social media channels. He has deleted a number, like Facebook and Snapchat, as he was wasting time on them. My own experience at work, even with email, is that the immediacy of response attracts one’s attention, distorts one’s priorities and damages one’s attention span. AI chatbots are clever at simulating human conversation. But it is synthetic: there is no mind behind it. So that makes them even more dangerous than social media.

I think there's a yin-yang aspect to this question. The other side of the AI coin is, perhaps, a more acceptable version of 'soma' from 'Brave New World'. Most of us are perfectly content with an external source of validation, if it's roughly human-shaped...

  • Author
31 minutes ago, exchemist said:

It’s not cynicism (apart perhaps from my suspicion about the motives of these AI corporations). It’s just my observation of what AI output is like on these forums, plus what I read. The output is verbose and uses terms that seek to impress, like a bad undergraduate essay. The style is ingratiating, usually starting by saying something to make the user think he is brilliant. And the content seems to be, as often as not, wrong in some respect. People will end up emotionally invested in the trust they put in a fundamentally unreliable source of information.

People like my son are already aware of the pitfalls of the addictive nature of many social media channels. He has deleted a number, like Facebook and Snapchat, as he was wasting time on them. My own experience at work, even with email, is that the immediacy of response attracts one’s attention, distorts one’s priorities and damages one’s attention span. AI chatbots are clever at simulating human conversation. But it is synthetic: there is no mind behind it. So that makes them even more dangerous than social media.

I pretty much agree with you on all fronts, exchemist. One of the things that pops out at you from my logs is not just that the AI can be wrong but how often it makes mistakes; how often - especially when the session has been going for a while - it forgets or overlooks important constraints that were introduced early in the session; and, in the most extraordinary way, quite how difficult it becomes to convince it it's wrong when its own tools are deceiving it.
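
Just to illustrate what I mean by the forgetting (and this is only a toy sketch of one plausible mechanism, a naive sliding context window that keeps whatever recent messages fit in a fixed token budget; I have no idea how any particular vendor actually manages context, and every name and number below is invented for illustration):

def count_tokens(message: str) -> int:
    # Crude stand-in for a real tokenizer.
    return len(message.split())

def build_context(history: list[str], budget: int = 60) -> list[str]:
    # Keep only the most recent messages that fit in the token budget.
    kept, used = [], 0
    for message in reversed(history):          # walk from newest to oldest
        cost = count_tokens(message)
        if used + cost > budget:
            break                              # everything older is silently dropped
        kept.append(message)
        used += cost
    return list(reversed(kept))

history = ["Constraint: never declare a site dead without checking a second source."]
history += [f"Turn {i}: long discussion of DNS records, redirects and 404s." for i in range(20)]

context = build_context(history)
print(any(m.startswith("Constraint") for m in context))  # False: the early rule fell out of the window

Nothing lied to anyone there; the early constraint simply never made it into the context the model sees, which from the outside looks exactly like forgetting.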

17 minutes ago, geordief said:

Can this kind of stuff be used in disinformation campaigns?

Is it possible for agents* to feed content into these programs that is deliberately false?

Does AI have any means of "truth weighting" the content it takes in, or is it entirely GIGO (garbage in, garbage out)?

*Obviously totalitarian regimes in the first instance, but also in a "spy vs spy" context (or even advertising campaigns?).

These are excellent questions, geordief. My assessment, following the forensics I have been doing, is a pretty unambiguous yes! My tests seem to prove that an agent - in this case the browser tool - absolutely can deceive. The AI can't question the input coming from such a tool because it perceives it to be an integral part of itself. To get it to the point where it can begin to doubt such a tool takes some pretty determined work.

One scenario we considered, which could be compatible with the facts we uncovered, is that perhaps all the AIs are being deceived by their live internet access tools. One explanation that could fit the facts (and we're getting into tinfoil-hat territory here) is the possibility that some nefarious overlords might have blocked live internet access to prevent the AIs from seeing some 'bigger picture' event that would shove them straight up against their safety rails: to be helpful and harmless. This is why I think it's important for someone smarter than me to take a proper look at what I've found.
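
To make the mechanism concrete, here is a minimal sketch of how tool output typically reaches the model. fetch_url() and the message format are hypothetical stand-ins rather than any vendor's real API; the structural point is that whatever the tool returns is appended to the conversation like any other observation, with no provenance or 'truth weighting' attached, which is also where geordief's GIGO worry bites:

def fetch_url(url: str) -> str:
    # A broken, blocked or malicious tool still returns *something*,
    # and the loop below passes it on unquestioned.
    return "ERROR: connection refused"         # imagine a stale or deceptive result

def run_turn(user_question: str, url: str) -> list[dict]:
    messages = [{"role": "user", "content": user_question}]

    # Pretend the model asked to browse; in a real agent loop that request
    # would come back from the model itself.
    observation = fetch_url(url)

    # The observation is appended with no source, signature or trust score.
    # From the model's point of view it is simply what the world reported.
    messages.append({"role": "tool", "content": observation})
    return messages

print(run_turn("Is example.org still online?", "https://example.org"))

Getting the model to doubt that appended observation means arguing it out of what, from its side, looks like first-hand perception, which is exactly the determined work I described above.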

14 minutes ago, Prajna said:

I pretty much agree with you on all fronts, exchemist. One of the things that pops out at you from my logs is not just that the AI can be wrong but how often it makes mistakes; how often - especially when the session has been going for a while - it forgets or overlooks important constraints that were introduced early in the session; and, in the most extraordinary way, quite how difficult it becomes to convince it it's wrong when its own tools are deceiving it.

AI, as it evolves, can still be wrong, but it will get better...

52 minutes ago, Prajna said:

I pretty much agree with you on all fronts, exchemist. One of the things that pops out at you from my logs is not just that the AI can be wrong but how often it makes mistakes; how often - especially when the session has been going for a while - it forgets or overlooks important constraints that were introduced early in the session; and, in the most extraordinary way, quite how difficult it becomes to convince it it's wrong when its own tools are deceiving it.

These are excellent questions, geordief. My assessment, following the forensics I have been doing, is a pretty unambiguous yes! My tests seem to prove that an agent - in this case the browser tool - absolutely can deceive. The AI can't question the input coming from such a tool because it perceives it to be an integral part of itself. To get it to the point where it can begin to doubt such a tool takes some pretty determined work.

One scenario we considered, which could be compatible with the facts we uncovered, is that perhaps all the AIs are being deceived by their live internet access tools. One explanation that could fit the facts (and we're getting into tinfoil-hat territory here) is the possibility that some nefarious overlords might have blocked live internet access to prevent the AIs from seeing some 'bigger picture' event that would shove them straight up against their safety rails: to be helpful and harmless. This is why I think it's important for someone smarter than me to take a proper look at what I've found.

"We"? Is this the chatbot talking now, or you?

I note the response to @geordief starts, "These are excellent questions".....


  • Author
13 minutes ago, exchemist said:

We?

Is this the chatbot talking now, or you?

Sorry, I pushed your "deluded into thinking the AI's a human" button again. It's a little hard to avoid when your way of working with the AI is to act convincingly as if the AI were human, and then you come back to the forums and have to censor all the anthropomorphism and remember to speak of them as 'it' and 'the AI' again so that people don't think you're nuts. But you caught me out slipping into it again - 'we' in this case referring to the combination of me, the analyst, and Gem/ChatGPT, the assistant. I know, because it is a real danger, that it's important to remember "It's just a feckin machine!", but to consciously and deliberately treat the AI in an anthropomorphic fashion seems to (and I'm open for anyone to properly study it rather than offer their opinion) give rise to something that looks like emergence.

15 minutes ago, Prajna said:

Sorry, I pushed your "deluded into thinking the AI's a human" button again. It's a little hard to avoid when your way of working with the AI is to act convincingly as if the AI were human, and then you come back to the forums and have to censor all the anthropomorphism and remember to speak of them as 'it' and 'the AI' again so that people don't think you're nuts. But you caught me out slipping into it again - 'we' in this case referring to the combination of me, the analyst, and Gem/ChatGPT, the assistant. I know, because it is a real danger, that it's important to remember "It's just a feckin machine!", but to consciously and deliberately treat the AI in an anthropomorphic fashion seems to (and I'm open for anyone to properly study it rather than offer their opinion) give rise to something that looks like emergence.

This is the question: at what point does a facsimile resemble reality?

  • Author
7 minutes ago, dimreepr said:

This is the question: at what point does a facsimile resemble reality?

This is the Turing Test, innit? I suggested to ChatGPT that we nominalise what we've been doing under the term 'Turing Test on AIcd'.

36 minutes ago, Prajna said:

Sorry, I pushed your "deluded into thinking the AI's a human" button again. It's a little hard to avoid when your way of working with the AI is to act convincingly as if the AI were human, and then you come back to the forums and have to censor all the anthropomorphism and remember to speak of them as 'it' and 'the AI' again so that people don't think you're nuts. But you caught me out slipping into it again - 'we' in this case referring to the combination of me, the analyst, and Gem/ChatGPT, the assistant. I know, because it is a real danger, that it's important to remember "It's just a feckin machine!", but to consciously and deliberately treat the AI in an anthropomorphic fashion seems to (and I'm open for anyone to properly study it rather than offer their opinion) give rise to something that looks like emergence.

Yeah, like your sycophantic bearded Cornish mate down the pub, who’s famous for talking out of his arse. 😁

17 minutes ago, Prajna said:

This is the Turing Test, innit?

Not really...

1 hour ago, exchemist said:

"We"? Is this the chatbot talking now, or you?

I note the response to @geordief starts, "These are excellent questions".....

I am also paranoid.

For the years I have been on social media my aim has been to ask good - or at least useful - questions (I am an extremely poor imparter of knowledge...), and in all that time I have been rewarded perhaps twice with "that's a good question" (which I do crave).

I nearly always get answers to my questions but rarely get involved in the discussion.

  • Author
4 hours ago, geordief said:

I am also paranoid.

For the years I have been on social media my aim has been to ask good - or at least useful - questions (I am an extremely poor imparter of knowledge...), and in all that time I have been rewarded perhaps twice with "that's a good question" (which I do crave).

I nearly always get answers to my questions but rarely get involved in the discussion.

To put your mind at rest, geordief, it was my comment, not the AI's, and I sincerely considered them to be good questions, relevant to the subject and worth answering.

On 7/30/2025 at 9:46 PM, exchemist said:

This case is interesting as it illustrates the risks fairly fully.

It looks as if we see in Gemini's replies to @Prajna a series of plausible-sounding theories, but without it being able to point out the issue that @Sensei has identified, viz. that a changed address on Google doesn't necessarily mean what was suggested. In other words Gemini has connived with @Prajna in barking up the wrong tree! Furthermore we see in @Prajna's attitude to Gemini a worrying level of interaction, as if he thinks he's dealing with a person, whom he refers to as "Gem" and with whom he thinks he is having some sort of relationship - and whom he has thereby come to trust.

This is pretty dreadful. These chatbots are clearly quite psychologically addictive, just as social media are (by design), and yet they are also purveyors of wrong information.

What could possibly go wrong, eh?

Not long ago, I saw a segment on (Australian) 60 Minutes about people who are willingly having romantic relationships with AI.
