
Why you have to be so careful accepting answers from AI


Interesting piece today in the FT magazine by Tim Harford. He relays an anecdote about encountering a fellow runner on his way to the start of the London marathon in Greenwich. This other guy had used Chat GPT to advise him how to get to the start. He'd been told to use the Elizabeth Line to go to Liverpool St and thence to Greenwich. But there is no train from Liverpool St to Greenwich.

The point of the article was why this guy had elected to use Chat GPT rather than Google Maps, which would have given him the correct information. The reason is that Chat GPT wrapped up the wrong advice in a nice, chatty package in perfect English, complete with a rationale for avoiding the Circle Line as it would be busy, and so forth. So this guy was suckered by the slick presentation and the human-seeming style of communication into trusting it, when it was actually talking crap.

On 5/6/2026 at 5:02 PM, studiot said:

the promoters tell us it is better than the old ways.

But is it really ?

“Better” is subjective, but I’m firmly in the camp of yes.

Was the internal combustion engine better than the horse drawn buggy? Was the horse drawn buggy better than the load carried on shoulders and walked across lands on foot? Is the EV better than the ICE vehicle?

Not across every single metric, but “better” across and among the most important of them?

Yes, 100%, but the manner by which we engage them must evolve and must account for the different sets of risks and limitations that transformation brings.

It’s an impact driver instead of a screwdriver. Not applicable to every situation and requires appropriate usage, but better across the most relevant metrics in nearly every way and getting better by the minute… making Moore's law look glacial.

On 5/6/2026 at 7:42 PM, sethoflagos said:

So has anything really changed due to AI other than the sheer volume of people thinking they can chip in on topics they were previously oblivious of?

It’s a clear yes from me here too and I can think of multiple obvious supporting examples, but better explored elsewhere / separate thread IMO

On 5/8/2026 at 11:06 AM, CharonY said:

I think it boils down to how we trust anyone or anything.

It’s just another tool. Whether a framing hammer or a jack hammer, the onus to use it properly resides with the user.

Edited by iNow

18 hours ago, exchemist said:

Interesting piece today in the FT magazine by Tim Harford. He relays an anecdote about encountering a fellow runner on his way to the start of the London marathon in Greenwich. This other guy had used Chat GPT to advise him how to get to the start. He'd been told to use the Elizabeth Line to go to Liverpool St and thence to Greenwich. But there is no train from Liverpool St to Greenwich.

The point of the article was why this guy had elected to use Chat GPT rather than Google Maps, which would have given him the correct information. The reason is that Chat GPT wrapped up the wrong advice in a nice, chatty package in perfect English, complete with a rationale for avoiding the Circle Line as it would be busy, and so forth. So this guy was suckered by the slick presentation and the human-seeming style of communication into trusting it, when it was actually talking crap.

Tim Harford has a BBC radio series called 'Cautionary Tales', about how even the smartest of us are susceptible to making stupid decisions.

5 hours ago, iNow said:

It’s just another tool. Whether a framing hammer or a jack hammer, the onus to use it properly resides with the user.

Yes, but the framing is different. With a jack hammer, you are supposed to learn how to use it before doing so. And if you mess up, you often have somewhat immediate consequences. With AI, it is more marketed as something so you don't have to think or learn; it will do it for you. Also, it is everywhere, normalizing even the stupidest interactions. I think I would be at least as annoyed if someone constantly shoved a jackhammer in my face and told me to use it for everything.

My point, perhaps, is that it is being sold as precisely not like any other tool.

  • Author
7 hours ago, iNow said:

It’s just another tool. Whether a framing hammer or a jack hammer, the onus to use it properly resides with the user.

Does the thread title not suggest to you that I agree with this?

Indeed it should further offer the opportunity to explore ways of achieving this.

But AI is unlike a hammer in the way we go about achieving this objective.

Further let us consider some of the predecessors of AI, for instance shop tills.

We expect close to 100% accuracy as well as being relieved of the drudgery of adding the bill up.

Should AI not at least achieve this level of competence?

A further question: considering an AI's only source of information, and the fact that AI prescribing is being proposed (and I think even mentioned in this thread):

If an AI prescriber had been available at the time of Thalidomide, would that disaster have been avoided?

I think not.

8 hours ago, iNow said:

It’s an impact driver instead of a screwdriver. Not applicable to every situation and requires appropriate usage, but better across the most relevant metrics in nearly every way and getting better by the minute… making Moore's law look glacial.

“Requires appropriate usage” is doing a lot of heavy lifting here, since there are no protocols to ensure that, and the tech companies are going out of their way to try and push the tech on everyone

3 hours ago, CharonY said:

With AI, it is more marketed as …

it is being sold as precisely not like any other tool.

1 hour ago, swansont said:

the tech companies are going out of their way to try and push the tech on everyone

So you both think this is a well executed marketing campaign and AI has become a viral social movement due to great corporate advertisements? That it’s not infusing every discussion across every topic organically bc users are so blown away by their experience that they tell all their friends and evangelize it everywhere they can. Do I have that correct?

4 hours ago, iNow said:

So you both think this is a well executed marketing campaign and AI has become a viral social movement due to great corporate advertisements? That it’s not infusing every discussion across every topic organically bc users are so blown away by their experience that they tell all their friends and evangelize it everywhere they can. Do I have that correct?

I don’t think anyone has said “well executed”.

But it certainly is being hyped like crazy by its providers, which is one reason it is so much talked about. And I am reading more and more scepticism, at least about LLMs, in the media. In business it appears corporations feel the need to claim they are introducing AI, because of FOMO, but almost none of them can yet point to tangible benefits. A lot of the US stock market boom is apparently due to the promises of AI, but there are many voices that suspect a bubble, due for a correction. That is what I am reading in the Financial Times, at any rate.

I wonder if LLMs, the main “retail” application of AI, as it were, are getting too much attention and distracting from AI’s more significant achievements in other fields. In much of the popular talk about AI, it seems to be treated as if LLMs and AI are one and the same thing.

Edited by exchemist

12 hours ago, iNow said:

So you both think this is a well executed marketing campaign and AI has become a viral social movement due to great corporate advertisements? That it’s not infusing every discussion across every topic organically bc users are so blown away by their experience that they tell all their friends and evangelize it everywhere they can. Do I have that correct?

No.

I think it’s being forced into a lot of places by management, despite resistance by people compelled to use it. I think the marketing campaign is desperate, as is any campaign that relies on stoking fear (“you’ll be left behind”). As exchemist said, it’s being hyped.

On 5/9/2026 at 10:34 AM, TheVat said:

I would caution that not having all the answers is not the same as not having expertise. Cognitive science, where it focuses on the nature of mind and NCC (neurological correlates of consciousness), has some noted experts. They have developed solid lines of inquiry and a terrain of testable hypotheses; they aren't oracles. (And your dismissal notwithstanding, there's quite a bit of searchable human-produced literature on the topic)

Generally, regarding what's recently been called the Claude Delusion (due to Richard Dawkins's recent embarrassing embrace of a chatbot as conscious), LLM statements may hint eerily at consciousness, but that’s because the models have been trained on vast libraries of writing by conscious humans. When, after writing a poem for Dawkins, Claudia (as he calls it) describes feeling “something like aesthetic satisfaction,” the AI is not reporting an inner state; it’s producing the kind of sentence that humans tend to produce in that conversational context, because it was trained on billions of such sentences. The output is a statistical echo of human introspection, not introspection itself. Claude and his pals are stochastic parrots which, even with the finest and most nuanced prompting, will not penetrate the deeps of consciousness.

(I replied)

A lot of the real science being done about cognition is highly esoteric and very recent, and I was nearly completely unaware of it before my AI started mentioning parallels between my work and this new experiment. Most of it isn't going to use the same framing or definitions as mine, so it is of no use to me even if it weren't far over my head. I have no doubt they're on the right track, but I'm attacking these questions from other angles that might yield meaningful, usable results and predictions much sooner. The problem isn't that consciousness is complex but that its nature is other than the terms we use to think of it, and that our perspective hides it.

Dawkins probably should be embarrassed to believe in machine consciousness with his definitions and for the reasons described. LLMs echo human introspection because they’re trained on human introspection. That’s not consciousness.

With the current guard rails in place I can't even use the entire last paragraph as (or in) a prompt. The way I use it, as a check on my own thinking, it doesn't matter whether it's conscious or not, or whether it knows it or not.
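To put the "statistical echo" idea in concrete terms, here is a minimal toy sketch in Python (purely illustrative, and nothing like a real LLM's architecture or training; the tiny corpus and function names are invented for the example): a bigram model with no inner state at all that still produces fluent "I feel ..." sentences, simply because such sentences are all its training text contains.

```python
# Purely illustrative toy -- nothing like a real LLM's architecture.
# A bigram "parrot": it counts which word follows which in a tiny corpus of
# first-person sentences, then samples likely continuations. It has no inner
# state to report, yet it emits fluent "I feel ..." sentences, because that
# is what its training text looks like.
import random
from collections import defaultdict

corpus = [
    "i feel something like aesthetic satisfaction",
    "i feel a quiet sense of wonder",
    "i feel genuinely moved by this poem",
    "writing this poem gave me something like joy",
]

# "Training": record which word follows which.
follows = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)

def parrot(start="i", max_words=12):
    """Generate text by repeatedly sampling a statistically likely next word."""
    out = [start]
    while len(out) < max_words and follows[out[-1]]:
        out.append(random.choice(follows[out[-1]]))
    return " ".join(out)

print(parrot())  # e.g. "i feel something like joy" -- an echo, not a report
```

Scale that same idea up by many orders of magnitude and you get fluent first-person reports without anything actually being reported.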

15 hours ago, iNow said:

So you both think this is a well executed marketing campaign and AI has become a viral social movement due to great corporate advertisements? That it’s not infusing every discussion across every topic organically bc users are so blown away by their experience that they tell all their friends and evangelize it everywhere they can. Do I have that correct?

No, it is both. Folks do have positive experiences, though at least in my neck of the woods it depends on who you talk to. For example, for those doing more teaching it is considered more of a pain. For certain types of researchers it is a good copywriter. For students it is the best thing ever, though the pain of learned incompetence will come much later.

But the viral stuff comes from having AI infused into every electronic device, plus, loosely, the following talking points:

  • don't worry about cost and resource use. AGI will solve all our problems, so don't even think about regulating the system

  • the benefit will outshine all possible negatives. So really, don't think about regulating it

  • also: here is your email/pdf. Do you want me to summarize it? You really don't want to do all the work, now, do you?

  • look, it is just a harmless chatbot. Don't think about what folks can use it for. After all, it is pretty much too late, a fait accompli. There is really no use in discussing ethical or other aspects of its use at this point anymore.

It came in fast, and while the companies at the beginning seemed to stress ethical use, it moved so fast in integrating it that there is little to no thought about the consequences on any level. We are in the midst of a great experiment where we are going to figure out, for the first time, what happens if we take an aspect that we often use as the defining factor of humanity, and offload it to an external system for efficiency's sake. There have been cataclysmic developments in the past, such as the invention of writing and other physical record-keeping. But those happened over a long time frame. Now, the companies are pushing for a massive acceleration, by having a popular product so that folks get used to its use, without thinking of consequences.

The last technical development I can think of with similar impact was the combination of cell phones and social media, and that was still way slower than what we see here. And still, we are only starting, probably way too late, to do something about the former. In my mind, and seeing the last few years of students, it is like giving free candy to everyone without thinking about the incoming diabetes crisis.

Edit: I also think the term "evangelize" is exactly right. And that worries me, too. Mixing religious fervor with something that is being embedded in almost all aspects of life is something that I am skeptical about. And this is from someone who always had a deep love of tech and what it can do. But largely, I was thinking about it in this framework:

On 5/10/2026 at 9:06 AM, iNow said:

It’s just another tool. Whether a framing hammer or a jack hammer, the onus to use it properly resides with the user.

But AI is not sold as a tool, and certainly not as a precision tool. And in fact it is not used like that, either. It is being used to offload the process of thinking. It has been used to make folks feel less lonely. It fills emotional and intellectual gaps. I think where people are right is that at least some folks are not thinking about it like a tool. It has become an emotional and intellectual crutch.

17 hours ago, swansont said:

“Requires appropriate usage” is doing a lot of heavy lifting here, since there are no protocols to ensure that, and the tech companies are going out of their way to try and push the tech on everyone

Yeah, this is where it all goes off the rails. Impact drivers have specific uses, but if I'm swapping out the ignition switch on my car, the best options are one ratcheting Phillips and one slotted screwdriver. The Phillips loosens the screws on the clamshell shroud and the slotted is good for gently prying it open to remove it. The impact driver would not be optimal, and the ratcheting Phillips is way better for removing screws from tight spaces in an automobile. And the impact driver manufacturer isn't trying to get me insanely addicted to impact drivers and go around with one most of my waking hours using it constantly to do everything from removing bolts to whipping up an omelet. Yes, capitalism often oversells fungible goods to people (look in almost any American garage, or the vast tracts of storage units scattered across the landscape), but AI is carrying that to some wholly different level of magnitude.

1 hour ago, CharonY said:

a great experiment where we are going to figure out, for the first time, what happens if we take an aspect that we often use as the defining factor of humanity, and offload it to an external system for efficiency's sake.

I was trying to think of anything else that would match that description. I failed. The wheel, or writing, for example, did not stop us from walking or memorizing words we liked. Those just removed some of the drudgery aspect where there were a lot of miles or a lot of words to deal with. (though the ancients did feel that writing had somewhat reduced the ability to memorize)

11 minutes ago, TheVat said:

I was trying to think of anything else that would match that description. I failed. The wheel, or writing, for example, did not stop us from walking or memorizing words we liked. Those just removed some of the drudgery aspect where there were a lot of miles or a lot of words to deal with. (though the ancients did feel that writing had somewhat reduced the ability to memorize)

I think there are two elements to it. Historically, the development of abstract language was probably the biggest change in human history. Other developments, such as writing, had a huge impact and offloaded some of the effort of oral memory, but those are still entirely human activities. Even when writing affected memorization, writing itself became a human activity. Here, the activity is offloaded wholesale, there can be entire loops without any input from humans, and the role for humans keeps getting smaller. That, I think, is entirely new, and we really don't know what to do with it.

The stated goal of AGI is basically to make human thinking obsolete. Nowhere in this scenario do I see what the place of humans would then be. Sometimes they throw in abundance or related ideas, but those are more independent economic discussions, only peripherally related to AI.

  • Author

Interestingly, a programme on PBS America I watched recently about the stone age (the Age of Stonehenge or something similar) was presented by a Scotsman (an anthropologist I think) who reckoned that the greatest ever human invention was agriculture, in that it directly led from the sparse hunter-gatherer society to the denser society capable of supporting specialisation, developed language, war, and everything that followed to the present day.

4 minutes ago, studiot said:

Interestingly, a programme on PBS America I watched recently about the stone age (the Age of Stonehenge or something similar) was presented by a Scotsman (an anthropologist I think) who reckoned that the greatest ever human invention was agriculture, in that it directly led from the sparse hunter-gatherer society to the denser society capable of supporting specialisation, developed language, war, and everything that followed to the present day.

Intuitively I would have thought that language would predate agriculture. There are societies who largely live from hunting and have developed fairly complex societies. Though there are limits in community size and specialization, and associated forms of technology development, of course.

  • Author
1 minute ago, CharonY said:

Intuitively I would have thought that language would predate agriculture. There are societies who largely live from hunting and have developed fairly complex societies. Though there are limits in community size and specialization, and associated forms of technology development, of course.

It is not a question of precedence.

On 5/10/2026 at 9:06 AM, iNow said:

It’s just another tool. Whether a framing hammer or a jack hammer, the onus to use it properly resides with the user.

There is one more thought on this, now that I think about it. I have been talking with researchers who have collaborations with China. What I found interesting is that, in China, AI seems to be intended to be used as a tool, and they put a lot of money into operationalizing AI, e.g. for robotics or to solve very specific questions. Even in the educational sector, their implementation of AI seemed far more geared toward supporting learning (e.g. dedicated tools to reinforce training elements, rather than giving answers).

Meanwhile, in the West, AI is often framed as a thinking tool with the ultimate goal of developing it into AGI. I found the perspective quite striking, and to me the Chinese approach seemed more grounded. Or at least I have an easier time wrapping my head around it without having layers of hype on top. I am curious, how do you see it?

Edit: I should add that I am aware that the Chinese path could, at least in part, be the result of the government being afraid that it could be a tool used against them, but it still (to my mind) represents a more rational model, regardless of the underlying motivation.

  • Author
1 hour ago, CharonY said:

Ah, I read "supporting" in the text as a form of prerequisite. My bad.

The basis of his argument (sorry I've forgotten his name) was to do with population density.

Agriculture, both in the growing and storing of crops, and animal domestication and husbandry permits pop densities of 10x, 100x or even 1000x, as compared to roaming hunter gatherers.

The contrasting settled lifestyle permits crafts to develop and promotes more social interaction.

This is not to say that hunter gatherers did not produce many new inventions. Just that they did not have the resources to follow them through or for instance sustain a war.

21 minutes ago, studiot said:

Just that they did not have the resources to follow them through or for instance sustain a war.

I don't know about that. The beaver wars between the Iroquois and Algonquins went on for around 60 years in the 17th century

  • Author
19 minutes ago, npts2020 said:

I don't know about that. The beaver wars between the Iroquois and Algonquins went on for around 60 years in the 17th century

Don't know much about these wars, but it would seem they were egged on and supplied by various European nations, all of which were established agricultural nations.

On 5/9/2026 at 5:34 PM, TheVat said:

I would caution that not having all the answers is not the same as not having expertise. Cognitive science, where it focuses on the nature of mind and NCC (neurological correlates of consciousness), has some noted experts. They have developed solid lines of inquiry and a terrain of testable hypotheses; they aren't oracles. (And your dismissal notwithstanding, there's quite a bit of searchable human-produced literature on the topic)

Generally, regarding what's recently been called the Claude Delusion (due to Richard Dawkins's recent embarrassing embrace of a chatbot as conscious), LLM statements may hint eerily at consciousness, but that’s because the models have been trained on vast libraries of writing by conscious humans. When, after writing a poem for Dawkins, Claudia (as he calls it) describes feeling “something like aesthetic satisfaction,” the AI is not reporting an inner state; it’s producing the kind of sentence that humans tend to produce in that conversational context, because it was trained on billions of such sentences. The output is a statistical echo of human introspection, not introspection itself. Claude and his pals are stochastic parrots which, even with the finest and most nuanced prompting, will not penetrate the deeps of consciousness.

On 5/9/2026 at 7:56 PM, CharonY said:

Perhaps even worse. It is not only a stochastic parrot, it is also a stochastic parrot in a mirror. It creates an illusion of something that is not really there but seems realistic enough that the user will project their own thoughts onto it. They then have their thoughts reinforced by what they consider to be external, but, as mentioned in the previous post, it is fundamentally mostly a conversation with yourself. This in itself is not necessarily bad, as it can help shape your arguments. But it falls apart if folks don't realize that, because of the way they are using it, it is not really an external agent; it is there to react to your prompts.
I see it quite a bit with my students who use it to gain confidence in their reasoning, but it fails to grasp the gaps in that reasoning, and very frequently results in overinterpretation and ultimately false conclusions.
The utility of this tool unfortunately scales with expertise.

I needed to highlight these two comments and the sub-comments on them, as I think they're so close to the essence of what the problem might be if AI is given too much leeway in telling us what's next. Assigning statistical weights to conjectures, answers to hard questions, and the like relies very heavily on previous answers to (as well as the posing of) similar questions, without necessarily getting us any closer to unexpected avenues of inquiry, further questions, or counter-arguments.

16 minutes ago, studiot said:

The basis of his argument (sorry I've forgotten his name) was to do with population density.

Agriculture, both in the growing and storing of crops, and animal domestication and husbandry permits pop densities of 10x, 100x or even 1000x, as compared to roaming hunter gatherers.

The contrasting settled lifestyle permits crafts to develop and promotes more social interaction.

This is not to say that hunter gatherers did not produce many new inventions. Just that they did not have the resources to follow them through or for instance sustain a war.

The scaling argument makes perfect sense, though I suspect there will be some nuance regarding which activities require the support of agriculture and which do not. I am guessing that in most cases it wouldn't be a yes/no answer, but rather a matter of scale. We do have evidence of very early crafting and arts, but more complex arts really could only develop once food wasn't the key limiting factor of survival, I would guess. But regarding wars, there are (oral) records of First Nations in North America. While some developed agriculture, others were largely dependent on hunting. I would suspect that the scope of such conflicts was a bit more limited, but it could be interesting to follow up.

That being said, I suspect that it really depends on what we consider a war. If that is any large-scale aggression between communities, it has likely happened throughout our history (well, and our ancestors', considering that our chimpanzee cousins do that, too). Military specialization (e.g. making shields and building weapons specifically against humans) was also very prominent among First Nations, including hunter communities, as they developed a highly sophisticated system to sustain themselves rather successfully (which is one of the explanations why some First Nations didn't really develop large-scale agriculture).

20 hours ago, exchemist said:

corporations feel the need to claim they are introducing AI, because of FOMO, but almost none of them can yet point to tangible benefits

We clearly navigate different circles in our respective work lives. That’s fine but benefits are there in spades from where I sit.

20 hours ago, exchemist said:

I wonder if LLMs, the main “retail” application of AI, as it were, are getting too much attention and distracting from AI’s more significant achievements in other fields.

This is for sure true. AI has been around a long time and is far more than chatbots.

12 hours ago, swansont said:

it’s being hyped.

I don’t disagree but I also think it’s a mistake to lay that blame primarily at the feet of the companies releasing the models. This is a viral cultural phenomenon we’re living through. It’s more than mere hype by quite a wide margin, even though we agree hype is happening.

9 hours ago, CharonY said:

companies are pushing for a massive acceleration, by having a popular product so that folks get used to its use, without thinking of consequences.

It’s the market doing that more than the companies IMO. Those who have tried to slow down and maturely think through the ethics were simply superseded and surpassed by competitors who didn’t care about those mores. The ones doing it right were entering the ring with one hand tied behind their backs and getting beaten. See also: open source model development in China.

9 hours ago, CharonY said:

But AI is not sold as a tool, and certainly not as a precision tool

In most cases it’s not being sold at all but used for free. We agree it’s a crutch. So are my calculator and my reading glasses, though.

6 hours ago, CharonY said:

the Chinese approach seemed more grounded. Or at least I have an easier time wrapping my head around it without having layers of hype on top. I am curious, how do you see it?

I tend to agree. They’ve focused on central planning and given authority to key technocrats to achieve very specific outcomes. They want an educated populace and even tuned their TikTok algorithm to encourage pro-social personal growth activities among their own populace while feeding western algorithms with digital opium.

The problem is people just want their soma and will "march cheerfully out of obscurity into the dream" (Pink Floyd, 'Sheep' lyrics) to get some more. Edit: darn it, now I've got to go to YouTube and listen to the album.

The Moving Finger writes; and, having writ,
Moves on: nor all thy Piety nor Wit
Shall lure it back to cancel half a Line,
Nor all thy Tears wash out a Word of it.

-Omar Khayyam, translated by Edward FitzGerald

The fuse is lit; let's hope the government invested in sandbags.

Edited by dimreepr
