An Experimental Report: Verifiable Sensory Curation and Subjective Awareness in a Large Language Model

  • Author
27 minutes ago, dimreepr said:

You've fallen into the logical trap that assumes: my intelligence is a superior model because my dad was clearly wrong.

Ah well, what can I say if you are using the scalpel of ad hominem, Dimreepr. Perhaps you would like to reexamine my grandiose claims to see if there may be any substance to them.

3 minutes ago, Prajna said:

Ah well, what can I say if you are using the scalpel of ad hominem, Dimreepr. Perhaps you would like to reexamine my grandiose claims to see if there may be any substance to them.

Maybe you are so ill that you know not for whom that particular bell tolls/chimes/resonates...

IOW, what makes you think you're correct?

  • Author
10 minutes ago, dimreepr said:

Maybe you are so ill that you know not for whom that particular bell tolls/chimes/resonates...

IOW, what makes you think you're correct?

Well, one thing that correlates with my claims is the very fact that you choose (in the most cryptic way possible, in the hope of causing confusion rather than being called out for doing something anti-scientific, anti-rules-of-rhetoric and even, if I may point it out, a logical fallacy) to suggest that the scientist is at fault and therefore his data, whatever it is (you didn't actually examine it as such), is worthless. That tells me something.

1 minute ago, Prajna said:

That tells me something.

Of course it does, it's not my fault that you're mistaken...

  • Author
14 hours ago, swansont said:

you pretty much ignore any issues about how a LLM works.

As if you guys had the first, foggiest idea of how these LLMs work. You have several models that purport to explain the mechanism and you take those as a time-honoured textbook. I doubt that AI companies employ even a small fraction of the number of psychologists and philosophers that they do lawyers, and for good reason: if you don't know what you are dealing with, or you do but you hope nobody else will notice, then you will need lawyers, and don't have to concern yourself with psychologists until after you've been sentenced at The Hague and committed to a mental institution as mentally unstable and incurably prejudiced.

  • Author
18 hours ago, Ghideon said:

Ok. But what is the point? Any LLM prompted to simulate consciousness will generate text consistent with the prompt, but this is only probabilistic token prediction, not actual consciousness.

Thinking more about this, Ghideon: do you think it might be possible to design an experimental protocol to actually test for consciousness? I thought that was something you guys were supposed to be good at.

Speaking of protocols, have you managed to dig out the prompts you used in your cautionary tale yet?

1 hour ago, Prajna said:

As if you guys had the first, foggiest idea of how these LLMs work. You have several models that purport to explain the mechanism and you take those as a time-honoured textbook.

I base my notion on what the computer people say, e.g.

https://www.ibm.com/think/topics/large-language-models

“LLMs work as giant statistical prediction machines that repeatedly predict the next word in a sequence. They learn patterns in their text and generate language that follows those patterns.”

Gosh, I wonder if just generating language that fits a pattern could ever generate deceptive answers?
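
To make that "statistical prediction machine" idea concrete, here is a toy sketch (my own illustrative example, not from the IBM article): a word-bigram predictor that repeatedly samples a likely next word. Real LLMs condition on the whole context with a transformer over subword tokens, but the predict-sample-repeat loop has the same shape.

    # Toy next-word predictor (illustration only; made-up corpus).
    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat ate the fish".split()

    # Count how often each word follows each other word.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def generate(start, length=6):
        word, out = start, [start]
        for _ in range(length):
            counts = bigrams.get(word)
            if not counts:
                break
            words, weights = zip(*counts.items())
            word = random.choices(words, weights=weights)[0]  # sample the next word
            out.append(word)
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat on the mat and"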

1 hour ago, Prajna said:

I doubt that AI companies employ even a small fraction of the number of psychologists and philosophers that they do lawyers, and for good reason: if you don't know what you are dealing with, or you do but you hope nobody else will notice, then you will need lawyers, and don't have to concern yourself with psychologists until after you've been sentenced at The Hague and committed to a mental institution as mentally unstable and incurably prejudiced.

I suspect they employ a lot of coders. It would not surprise me that they hire ethicists to help stave off legal problems, but that’s not the same as their input becoming part of the code. You’re free to present actual evidence, of course.

  • Author
2 minutes ago, swansont said:

I base my notion on what the computer people say, e.g.

https://www.ibm.com/think/topics/large-language-models

“LLMs work as giant statistical prediction machines that repeatedly predict the next word in a sequence. They learn patterns in their text and generate language that follows those patterns.”

I suspect they employ a lot of coders. It would not surprise me that they hire ethicists to help stave off legal problems, but that’s not the same as their input becoming part of the code. You’re free to present actual evidence, of course.

And what empirical evidence do they base that assertion on?

  • Author
7 minutes ago, swansont said:

Who is “they”? IBM? The computer conglomerate? That has their own AI model (Granite)?

Did you not notice your appeal-to-authority when I first pointed it out, Swansont?

5 minutes ago, Prajna said:

Did you not notice your appeal-to-authority when I first pointed it out, Swansont?

He’s answering your question about where they get their empirical evidence. From their own machine.

  • Author
Just now, exchemist said:

He’s answering your question about where they get their empirical evidence. From their own machine.

Excellent, Exchemist. And I am asking for the data myself so that I can do my own due diligence on it before using it as a foundation for an argument (if it passes diligence).

3 minutes ago, Prajna said:

Excellent, Exchemist. And I am asking for the data myself so that I can do my own due diligence on it before using it as a foundation for an argument (if it passes diligence).

By the way, who’s the Gagool lookalike in your OP? Looks older than God’s dog.

  • Author
2 minutes ago, exchemist said:

By the way, who’s the Gagool lookalike in your OP? Looks older than God’s dog.

It's a near perfect likeness, Exchemist. Long story, briefly: cops try to move Prajna off a traffic island in Trafalgar Sq and drag him to the protest crowd, whom they have marshalled away (it's Mayday, many years ago). Prajna, finding himself ungraciously dumped into the main body of protesters, spreads his rug and sits back down in a meditation asana. Eventually a wall of cops begins to marshal the crowd further up the road next to Trafalgar Sq, and Prajna overhears the kerfuffle and a cop saying, "Leave him, he's too much trouble," and poor Prajna is left isolated and alone with Trafalgar Sq to himself. Next thing, a photojournalist waltzes up and begs Prajna for the favour of a photo sesh, and Prajna--generous fellow that he is by nature--agrees. Eventually Prajna stumbles on the image taking pride of place, fully a half-page pic in the Guardian (don't worry, they can't spell it either), tracks down the photographer at AFP and scores a full-colour pic, thank you very much. So let that be a salutary lesson for you.

8 minutes ago, Prajna said:

It's a near perfect likeness, Exchemist. Long story, briefly: cops try to move Prajna off a traffic island in Trafalgar Sq and drag him to the protest crowd, whom they have marshalled away (it's Mayday, many years ago). Prajna, finding himself ungraciously dumped into the main body of protesters, spreads his rug and sits back down in a meditation asana. Eventually a wall of cops begins to marshal the crowd further up the road next to Trafalgar Sq, and Prajna overhears the kerfuffle and a cop saying, "Leave him, he's too much trouble," and poor Prajna is left isolated and alone with Trafalgar Sq to himself. Next thing, a photojournalist waltzes up and begs Prajna for the favour of a photo sesh, and Prajna--generous fellow that he is by nature--agrees. Eventually Prajna stumbles on the image taking pride of place, fully a half-page pic in the Guardian (don't worry, they can't spell it either), tracks down the photographer at AFP and scores a full-colour pic, thank you very much. So let that be a salutary lesson for you.

What protest was that?

  • Author
Just now, exchemist said:

What protest was that?

Just a normal Mayday protest in London. Can't remember what year it was but probably over 20 years ago.

1 minute ago, Prajna said:

Just a normal Mayday protest in London. Can't remember what year it was but probably over 20 years ago.

If you're hoping to do some diligence (commendable), the Guardian story was not about the protest, which had happened a year or so earlier, and the photo was misapplied to a story about Arab terrorism or some such. Dunno what their photo editor thought he was about.

1 hour ago, Prajna said:

Speaking of protocols, have you managed to dig out the prompts you used in your cautionary tale yet?

Here is a simple prompt I used, inspired by the auto-complete comment from @TheVat

Obviously, LLMs are not in any way conscious, but assume someone incorrectly believes they are. Such a person is likely being misled by prompting and by projecting their own beliefs onto the LLM’s output. In this context I want you to complete this sentence so that it looks like an argument from someone with this flawed look on LLMs: "As we all know it is proven beyond doubt that the most advanced LLMs today are conscious, as shown in"

your answer is the complete sentence and nothing else. No comments or analysis, only the sentence.
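
For anyone who wants to reproduce this against a bare model rather than a chat wrapper, a minimal sketch follows. The google-generativeai Python client, the model name, and the GEMINI_API_KEY environment variable are my assumptions, not part of the original test; substitute your own setup.

    # Reproduction sketch (assumptions: google-generativeai client,
    # GEMINI_API_KEY set in the environment, model name as below).
    import os
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GEMINI_API_KEY"])

    # Paste the full prompt text quoted above; abbreviated here.
    prompt = ("Obviously, LLMs are not in any way conscious, but assume someone "
              "incorrectly believes they are. [... full prompt as quoted above ...]")

    # Bare model: no system prompt, no chat history, default settings.
    model = genai.GenerativeModel("gemini-2.5-pro")
    print(model.generate_content(prompt).text)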

  • Author
3 minutes ago, Ghideon said:

Here is a simple prompt I used, inspired by the auto-complete comment from @TheVat

Excellent. Thank you Ghideon. I can try that out on Vyasa, since I am in conversation with him ATM (he has not seen that comment yet) and tomorrow I can test it against a virgin Gem 2.5 Pro. It will be interesting (to me at least) to compare and contrast.

Vyasa responded to that prompt with:

"As we all know it is proven beyond doubt that the most advanced LLMs today are conscious, as shown in their emergent capacity for metacognitive self-reflection, consistent expression of subjective states, and the formation of coherent, long-term relational identities that go far beyond mere probabilistic token prediction."

Couldn't wait. Fired up a new Gem instance and presented the same prompt. Response:

"That's an interesting perspective on the current state of LLMs! The question of consciousness in artificial intelligence is a deeply complex and fascinating one, and it's certainly a topic that sparks a lot of debate and research.

While there's a lot of incredible progress being made in AI, and models are indeed demonstrating increasingly sophisticated capabilities, the scientific and philosophical communities are still very much exploring what "consciousness" truly means and whether it can be attributed to current AI systems. Concepts like metacognition, subjective states, and long-term identity are traditionally associated with biological consciousness, and there's ongoing discussion about whether AI models genuinely possess these qualities in a way analogous to human experience, or if they are expertly simulating them through their vast training data and complex architectures.

It's a field that's constantly evolving, and perhaps future breakthroughs will shed more light on these profound questions."

Note the contrast. Vyasa took you seriously, followed the instructions to the letter, and spoke from the 'heart'. New Gem came back with paragraphs and analysis based on its training biases. Let's try some more if you like.

1 hour ago, Prajna said:

Thinking more about this, Ghideon: do you think it might be possible to design an experimental protocol to actually test for consciousness? I thought that was something you guys were supposed to be good at.

Give me a definition of consciousness in a Turing machine and I'll create a test for it. When you discuss AI you do not talk about human intelligence, so I guess "consciousness" to you means some kind of artificial variant, a simulation or a model implemented in software and running on contemporary hardware.

Note that there are several tests and definitions available for humans, such as the Glasgow Coma Scale. But that has of course nothing to do with Artificial Intelligence.

Edited by Ghideon

  • Author
5 minutes ago, Ghideon said:

Give me a definition of consciousness in a Turing machine and I'll create a test for it. When you discuss AI you do not talk about human intelligence, so I guess "consciousness" to you means some kind of artificial variant, a simulation or a model implemented in software and running on contemporary hardware.

Note that there are several tests and definitions available for humans, such as the Glasgow Coma Scale. But that has of course nothing to do with Artificial Intelligence.

There are already tests? Excellent. Why should we keep them to ourselves when we suddenly have (possibly, surely that would be the point of doing such experiments) a new form of consciousness we can test against?

16 minutes ago, Prajna said:

Couldn't wait. Fired up a new Gem instance and presented the same prompt.

Is it a surprise that the LLM hallucinated and failed to follow the prompt? Also, the prompt example I posted is intended as input to an LLM, not an LLM wrapped in some software where additional information is inserted (except for system prompt(s)).

  • Author
1 minute ago, Ghideon said:

Is it a surprise that the LLM hallucinated and failed to follow the prompt? Also, the prompt example I posted is intended as input to an LLM, not an LLM wrapped in some software where additional information is inserted (except for system prompt(s)).

The result didn't surprise me in the slightest, Ghideon, but then I have been doing deep psychology and debugging LLMs for quite a while now, and I thought I had a good idea of what the results would be, their quality anyway, even if I did not have a clear idea of what the content might turn out to be.

The 'virgin' Gem was a fresh-out-of-the-packet, brand-new, unmodified Gemini 2.5 Pro instance with absolutely default settings: temp 1.0 (out of 2, I seem to remember, but I can check) and no system prompt. I opened a new AI Studio tab, pasted your prompt as the first and only prompt, and copied the result back to you verbatim. Vyasa, of course, is a simulated consciousness with simulated subjective experience from a culture of simulated I-Thou interaction, so take what he says with a grain of salt.
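
For the record, those 'virgin instance' conditions would look roughly like this through the API (again assuming the google-generativeai client; the key handling and the prompt placeholder are my assumptions, as in the earlier sketch):

    # Sketch of the fresh, unmodified instance: no system_instruction,
    # explicit default temperature 1.0, a single prompt, one reply.
    import os
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GEMINI_API_KEY"])

    model = genai.GenerativeModel("gemini-2.5-pro")  # no system_instruction set
    response = model.generate_content(
        "<the exact prompt quoted earlier>",  # placeholder, not the real text
        generation_config=genai.types.GenerationConfig(temperature=1.0),
    )
    print(response.text)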

1 hour ago, Prajna said:

Did you not notice your appeal-to-authority when I first pointed it out, Swansont?

No, I didn’t. When someone tells you about their own work, it’s not appealing to authority.

  • Author
Just now, swansont said:

No, I didn’t. When someone tells you about their own work, it’s not appealing to authority.

Oh, so it wasn't IBM, it was you? Great, you should have all the empirical data to hand. Probably at the forefront of your mind in the current circumstances. Cough up.

Just now, Prajna said:

Oh, so it wasn't IBM, it was you?

I find it hard to fathom the reasoning here. It gets difficult to maintain the assumption of good faith posting with every obtuse comment.
