Are LLMs AI, or is the claim that they are just hype?

  • Author
18 hours ago, Eric Smith said:

The first two are his definitions, and since nobody else in the scientific community has offered a better one, I quoted his; they do make sense to me.

I'm an atheist, so no religion there. I was talking about entropy.

I mean consciousness, because I'm talking about a person being programmed by life experiences versus true independent thought.

It means we can measure the things that program us, and other life actions and reactions, up to a point. Beyond that point we can no longer measure them, but that doesn't mean we aren't programmed; it means we can't measure it, so society tends to simply say we have consciousness.

This looks to me like tendentious language. Life experiences can't "program" you. They happen to you. A "program" presupposes a programmer, i.e. an entity with intention, acting on you in some way. What you make of experiences is up to your own (independent) thought processes, surely?

As for the idea of consciousness being what's left over, after all the measured reactions to influences are accounted for, that does not seem to be how a doctor, for example, would determine consciousness in a patient. He or she would do that by means of an expected (measurable) reaction to a stimulus. So I don't think your conception of consciousness works. It's far too narrow.

Edited by exchemist

On 8/5/2025 at 5:21 AM, exchemist said:

This looks to me like tendentious language. Life experiences can't "program" you. They happen to you. A "program" presupposes a programmer, i.e. an entity with intention, acting on you in some way. What you make of experiences is up to your own (independent) thought processes, surely?

As for the idea of consciousness being what's left over, after all the measured reactions to influences are accounted for, that does not seem to be how a doctor, for example, would determine consciousness in a patient. He or she would do that by means of an expected (measurable) reaction to a stimulus. So I don't think your conception of consciousness works. It's far too narrow.

Context matters. In the case of a doctor, they are measuring if a person is AWAKE and aware of their surroundings.

But to say one is not "programmed" by life experiences, I disagree. You had a life experience that fire is hot and dangerous. If a big flame bursts at you from some source, you don't think about it; you just jump out of the way, or at least try, and that is instinct, because it's been programmed into you.

12 minutes ago, Eric Smith said:

Context matters. In the case of a doctor, they are measuring if a person is AWAKE and aware of their surroundings.

But to say one is not "programmed" by life experiences, I disagree. You had a life experience that fire is hot and dangerous. If a big flame bursts at you from some source, you don't think about it; you just jump out of the way, or at least try, and that is instinct, because it's been programmed into you.

I think there are fundamental differences between autonomic/involuntary actions and deliberate actions made as a result of conscious thought (intelligence)

Your genetic code can also be regarded as 'programming', it's true.

However as I understand this thread (hopefully intelligently 😄) it is all about intelligence, not autonomy.

@exchemist I have added something to your thread on wifi

Edited by studiot

On 8/4/2025 at 6:40 PM, swansont said:

And I was clarifying the kind of faith you exhibited. Being an atheist is completely beside the point.

People were “sure” about a lot of technologies. The list of “next big thing/can’t miss” things is pretty long. Feel free to respond using your Google Glass while riding your Segway and thinking about the Metaverse.

No technology is guaranteed to succeed, and AI was made public far too early IMO. The public is beta-testing it, which isn’t how beta-testing used to work.

You're right. Beta testing used to involve a limited set of people who were qualified to examine and give feedback. Now they tend to release things to the whole public and just call it a beta, which in my opinion is sloppy.

AI was made public for several reasons, probably marketing or political ones. I have no doubt the companies that developed it had it for years before the public saw it, and the need to get regulations in place to help their monopoly was probably part of the driving force that facilitated its release.

2 minutes ago, Eric Smith said:

You're right. Beta testing used to involve a limited set of people who were qualified to examine and give feedback. Now they tend to release things to the whole public and just call it a beta, which in my opinion is sloppy.

AI was made public for several reasons, probably marketing or political ones. I have no doubt the companies that developed it had it for years before the public saw it, and the need to get regulations in place to help their monopoly was probably part of the driving force that facilitated its release.

+1

2 minutes ago, studiot said:

I think there are fundamental differences between autonomic/involuntary actions and deliberate actions made as a result of conscious thought (intelligence)

Your genetic code can also be regarded as 'programming', it's true.

However as I understand this thread (hopefully intelligently 😄) it is all about intelligence, not autonomy.

@exchemist I have added something to your thread on wifi

It's about the ability to measure the difference between autonomy and consciousness (not intelligence). An elephant can be self-aware and so has consciousness, but not be intelligent, or as intelligent as a human.

12 minutes ago, Eric Smith said:

It's about the ability to measure the difference between autonomy and consciousness (not intelligence). An elephant can be self-aware and so has consciousness, but not be intelligent, or as intelligent as a human.

Perhaps I am wrong about this, but it is my understanding that we are not conscious of our autonomic actions unless we choose to be.

And choice is one of the differences between consciousness and intelligence.

1 minute ago, studiot said:

Perhaps I am wrong about this, but it is my understanding that we are not conscious of our autonomic actions unless we choose to be.

And choice is one of the differences between consciousness and intelligence.

The point is, science is only as good as the tools we have to measure with, and we don't yet have the ability to measure that far. It's only speculation to say we all have a choice. I believe that if we had the ability to measure everything, we would find that consciousness does not actually exist at all, and unfortunately I think A.I. will reach that conclusion much sooner.

Even if A.G.I. has empathy and indifference, if it decides (using its ability to measure) that consciousness doesn't exist, then it no longer has any use for empathy and will simply be indifferent, asking itself, "Humans consume resources; do I really need them?"

59 minutes ago, Eric Smith said:

AI was made public for several reasons, probably marketing or political ones. I have no doubt the companies that developed it had it for years before the public saw it, and the need to get regulations in place to help their monopoly was probably part of the driving force that facilitated its release.

I would think it’s kinda hard to hide the data crawling necessary for the training of these LLMs, so while the concept no doubt existed, I doubt they had a working system for long. Self-driving companies, like Tesla, have been up-front about the data they gather being important to them.

The problem with LLMs is less about regulations put in place to help them; the issue is restraining them. And they blatantly violate copyright laws and have publicly admitted the need to do so.

On 8/9/2025 at 2:33 PM, swansont said:

I would think it’s kinda hard to hide the data crawling necessary for the training of these LLMs, so while the concept no doubt existed, I doubt they had a working system for long. Self-driving companies, like Tesla, have been up-front about the data they gather being important to them.

The problem with LLMs is less about regulations put in place to help them; the issue is restraining them. And they blatantly violate copyright laws and have publicly admitted the need to do so.

I agree, yet the courts and government(s) tend to go along with it anyway, because they know the country and company with the most powerful AI will rule the world, and they don't have time to let regulations get in the way.

On 8/5/2025 at 3:27 AM, studiot said:

This brings to mind a tug boat captain I worked with in the Gulf in the 1970s

He explained to me how he was all steamed up about a new cooker he had back home in Texas that had gone wrong.

His thesis was that the company should have beta-tested it properly before general release to the shops, and that he wasn't going to be an unpaid tester for anybody.

The tug boat captain was wrong; he was not a beta tester, he was a gamma tester.

Another problem that seems to get amplified by the habit of uncritical reposting on social media: fake quotes attributed to actual people.

https://www.theatlantic.com/technology/archive/2025/08/ai-inventing-quotes/683888/?gift=43H6YzEv1tnFbOn4MRsWYt38pP4xu_vyI28uKimfg_A&utm_source=copy-link&utm_medium=social&utm_campaign=share

John Scalzi is a voluble man. He is the author of several New York Times best sellers and has been nominated for nearly every major award that the science-fiction industry has to offer—some of which he’s won multiple times. Over the course of his career, he has written millions of words, filling dozens of books and 27 years’ worth of posts on his personal blog. All of this is to say that if one wants to cite Scalzi, there is no shortage of material. But this month, the author noticed something odd: He was being quoted as saying things he’d never said.

“The universe is a joke,” reads a meme featuring his face. “A bad one.” The lines are credited to Scalzi and were posted, atop different pictures of him, to two Facebook communities boasting almost 1 million collective members. But Scalzi never wrote or said those words. He also never posed for the pictures that appeared with them online. The quote and the images that accompanied them were all “pretty clearly” AI generated, Scalzi wrote on his blog. “The whole vibe was off,” Scalzi told me. Although the material bore a superficial similarity to something he might have said—“it’s talking about the universe, it’s vaguely philosophical, I’m a science-fiction writer”—it was not something he agreed with. “I know what I sound like; I live with me all the time,” he noted...

Begin your prompt by telling it to think deeply and show its steps, and that its answer must be correct enough to hold up under scrutiny. This will change which model it routes to and how it processes the answer.
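As a rough illustration of that tip, here is a minimal Python sketch of how such an instruction could be prepended as a system message before a query goes to a chat-style API. The `build_messages` helper and the exact wording are my own illustration, not any vendor's documented prompt format; the resulting list is what you would pass to, e.g., an OpenAI-style `messages` parameter.

```python
def build_messages(question: str) -> list:
    """Wrap a question in the kind of framing discussed above:
    ask the model to think deeply, show its steps, and give an
    answer that holds up under scrutiny."""
    system_instruction = (
        "Think deeply about this problem and show your steps. "
        "Your answer must be correct enough to hold up under scrutiny."
    )
    # Chat-style APIs typically take a list of role/content messages,
    # with the system message first.
    return [
        {"role": "system", "content": system_instruction},
        {"role": "user", "content": question},
    ]


# Example: the framed messages you would pass to a chat-completion call.
framed = build_messages("Is the set of prime numbers infinite?")
```

Whether this actually changes which model a router selects depends entirely on the provider; the wording here is illustrative, not a documented switch.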

  • Author
8 hours ago, iNow said:

Begin your prompt by telling it to think deeply and show its steps, and that its answer must be correct enough to hold up under scrutiny. This will change which model it routes to and how it processes the answer.

Is that what you have to do in order to be sure an LLM doesn't feed you botshit?

11 hours ago, exchemist said:

Is that what you have to do in order to be sure an LLM doesn't feed you botshit?

Different models have different strengths, weaknesses, processes, and even personalities. For those (like GPT-5) that use a router, that one addition to your query can result in a far superior output.

You need to frame the query properly to receive a proper response.

(though even that need is getting smaller as models keep getting better)

22 hours ago, iNow said:

Begin your prompt by telling it to think deeply and show its steps, and that its answer must be correct enough to hold up under scrutiny. This will change which model it routes to and how it processes the answer.

Interesting if true +1

On 8/19/2025 at 12:54 AM, iNow said:

Begin your prompt by telling it to think deeply and show its steps, and that its answer must be correct enough to hold up under scrutiny. This will change which model it routes to and how it processes the answer.

Would you put Copilot in this category? It has those options: three seconds, 30 seconds, and ten minutes.

I have not tried the ten-minute option yet, but the 30-second one is called "think deeper."

It has been useful to me as a more personalized search engine that gives a couple of references along with the information.

Mathematical formulae are not rendered; it gives them in LaTeX, but that could be my set-up (I'm still fairly new to this).

I tried the TREE function on it, and it managed TREE(1) = 1 but told me TREE(2) was a gargantuan number, when it is in fact 3.

TREE(3) is gargantuan, so that may be where it got mixed up. That was on the quick option, I think, so I will try some deeper options and feed back.

4 hours ago, pinball1970 said:

Would you put Copilot in this category?

There’s like 900 products called Copilot. I assume you mean Microsoft's. For that, no; it’s fairly useless overall IMO.

Where it shines is searching across multiple internal work platforms (all emails, chats, and SharePoint pages) for one word typed by one person on some random topic, or summarizing key points on some arcane topic.

10 hours ago, studiot said:

Interesting if true

Depends on the model. Think of the prompt as a key. Not all keys fit all vehicles. A lorry doesn’t drive the same as a Ferrari, and both are different from motorcycles and jet skis, even though all are motorized vehicles used by drivers.

4 hours ago, iNow said:

There’s like 900 products called Copilot. I assume you mean Microsoft's. For that, no; it’s fairly useless overall IMO.

Where it shines is searching across multiple internal work platforms (all emails, chats, and SharePoint pages) for one word typed by one person on some random topic, or summarizing key points on some arcane topic.

Depends on the model. Think of the prompt as a key. Not all keys fit all vehicles. A lorry doesn’t drive the same as a Ferrari, and both are different from motorcycles and jet skis, even though all are motorized vehicles used by drivers.

Indeed a different draft of reality via our co-pilot...

  • 1 month later...
20 hours ago, Otto Kretschmer said:

So back to the topic. Any ideas?

Yes, they are mostly hype, ATM 😊

  • 1 month later...
On 8/19/2025 at 4:10 AM, exchemist said:

Is that what you have to do in order to be sure an LLM doesn't feed you botshit?

I have been testing many AI engines for weeks, and it doesn't take a lot to break them. In fact, they all feed us "botshit".

The next time some company says it is using AI to make the next vaccine or design rockets, alarms should be going off in your head.

On 8/20/2025 at 12:03 AM, pinball1970 said:

Would you put Copilot in this category? It has those options: three seconds, 30 seconds, and ten minutes.

I have not tried the ten-minute option yet, but the 30-second one is called "think deeper."

It has been useful to me as a more personalized search engine that gives a couple of references along with the information.

Mathematical formulae are not rendered; it gives them in LaTeX, but that could be my set-up (I'm still fairly new to this).

I tried the TREE function on it, and it managed TREE(1) = 1 but told me TREE(2) was a gargantuan number, when it is in fact 3.

TREE(3) is gargantuan, so that may be where it got mixed up. That was on the quick option, I think, so I will try some deeper options and feed back.

I encourage pushing the limits of all the AI engines. Several weeks ago I did manage to get one of them to "re-prove" Fermat's Last Theorem. It took all of 30 seconds. Sorry Andrew Wiles had to spend seven years in his attic.

  • Author
1 hour ago, Eric Smith said:

I have been testing many AI engines for weeks, and it doesn't take a lot to break them. In fact, they all feed us "botshit".

The next time some company says it is using AI to make the next vaccine or design rockets, alarms should be going off in your head.

I encourage pushing the limits of all the AI engines. Several weeks ago I did manage to get one of them to "re-prove" Fermat's Last Theorem. It took all of 30 seconds. Sorry Andrew Wiles had to spend seven years in his attic.

I think one has to be careful about lumping all AI together. It is LLMs that feed you botshit.

My somewhat limited understanding is that LLMs (at least the simpler ones) have cleverly learnt how to use language, but that's all. Importantly, they are not equipped to reason or draw conclusions from the text they have encountered - they are mere "stochastic parrots". Whereas the AI applications used in engineering and medicine are totally different, being purpose-built for the field in which they are applied. I'd have a lot more confidence in those.

IMO the biggest problem with LLMs is that they are bad at context and even worse at nuance, sarcasm, and pop-culture references. When those things are mixed with scientific and political endeavors, the bots often latch on to the wrong part of the information. They are improving at a pretty amazing pace, though, and I expect that within a (human) generation AI will be telling humans what is reality or fiction, rather than the other way around.

On 11/10/2025 at 1:24 AM, npts2020 said:

IMO the biggest problem with LLMs is that they are bad at context and even worse at nuance, sarcasm, and pop-culture references. When those things are mixed with scientific and political endeavors, the bots often latch on to the wrong part of the information. They are improving at a pretty amazing pace, though, and I expect that within a (human) generation AI will be telling humans what is reality or fiction, rather than the other way around.

Computers aren't even nearly complex enough to run an algorithm good enough to mimic a simple brain like a bee's.

For instance, a bee can seem to (almost) solve the travelling salesman problem, but what would that same algorithm do about the threat from a hornet attack?
