
My wife asked AI for the box office phone number of our local theatre.

The box office is open Monday to Friday 10.00 to 16.00 and on Saturdays if there is a performance.

As it is past 4 pm on Friday you can phone the box office on Saturday to see if they are open

😄

41 minutes ago, studiot said:

My wife asked AI for the box office phone number of our local theatre.

😄

Yeah. And that enquiry has used 10,000 times as much electricity as a simple web search engine, to come up with that useless crap.

1 hour ago, exchemist said:

Yeah. And that enquiry has used 10,000 times as much electricity as a simple web search engine, to come up with that useless crap.

Your statement is not backed up by any data and only shows the extent of your ignorance because you couldn't be bothered to even check the real data. You are only showing how biased and prejudiced you are.

The thing is, @studiot uses Google Gemini for this. And you lump every LLM into the same basket and treat them as interchangeable. It's like lumping together the energy consumption and CO2 and NOx emissions of a combustion-engine car, an electric car, and a truck, and not caring about the differences between them.

So let's see how it really looks:

gemini.png

search-engine.png

IOW, not 10,000 times as much, but about twice as much.

You were only wrong 5,000 times.

1 hour ago, exchemist said:

as a simple web search engine, to come up with that useless crap.

A search engine is not simple. It searches the entire world every few minutes. If you have a static website, it will visit once a day, and if you have a dynamic website, it must do so every few minutes. Googlebot visits can be seen in the HTTP server logs. You can have more visits from search engine bots than from real people.
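The point about bot traffic in server logs is easy to check yourself. A minimal sketch (the log entries and the `access.log` filename are hypothetical; a real log lives somewhere like `/var/log/nginx/access.log`, and the exact format varies by server):

```shell
# Build a tiny sample access log with made-up entries, then count
# Googlebot requests versus total requests, exactly as you would on a
# real Apache/Nginx log. Googlebot identifies itself in the User-Agent.
printf '%s\n' \
  '66.249.66.1 - - "GET / HTTP/1.1" 200 "Mozilla/5.0 (compatible; Googlebot/2.1)"' \
  '203.0.113.5 - - "GET /about HTTP/1.1" 200 "Mozilla/5.0"' \
  '66.249.66.1 - - "GET /news HTTP/1.1" 200 "Mozilla/5.0 (compatible; Googlebot/2.1)"' \
  > access.log

grep -c 'Googlebot' access.log   # lines from Googlebot: 2
wc -l < access.log               # total requests: 3
```

On a busy site the same two commands often show crawler hits rivalling or exceeding human traffic, which is the point being made above.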

There are several dozen such search engines.

I've told @studiot more than once that Google Gemini is crap, but he keeps talking about it like a maniac.. let him start using ChatGPT/DeepSeek.. But not ChatGPT v5, because they messed it up.

ChatGPT does not require registration. Ask if it can play chess, and then play chess with this LLM.

You'll shit your pants..

Edited by Sensei

3 hours ago, Sensei said:

Your statement is not backed up by any data and only shows the extent of your ignorance because you couldn't be bothered to even check the real data. You are only showing how biased and prejudiced you are.

The thing is, @studiot uses Google Gemini for this. And you lump every LLM into the same basket and treat them as interchangeable. It's like lumping together the energy consumption and CO2 and NOx emissions of a combustion-engine car, an electric car, and a truck, and not caring about the differences between them.

So let's see how it really looks:

gemini.png

search-engine.png

IOW, not 10,000 times as much, but about twice as much.

You were only wrong 5,000 times.

A search engine is not simple. It searches the entire world every few minutes. If you have a static website, it will visit once a day, and if you have a dynamic website, it must do so every few minutes. Googlebot visits can be seen in the HTTP server logs. You can have more visits from search engine bots than from real people.

There are several dozen such search engines.

I've told @studiot more than once that Google Gemini is crap, but he keeps talking about it like a maniac.. let him start using ChatGPT/DeepSeek.. But not ChatGPT v5, because they messed it up.

ChatGPT does not require registration. Ask if it can play chess, and then play chess with this LLM.

You'll shit your pants..

Your own citation points out that the energy cost of training is enormous. So your figure is clearly misleadingly low. Mine was based on an article about LLMs in the Financial Times.

Glad you agree how bad Chat GPT is, though. 😄

Edited by exchemist

6 hours ago, Sensei said:

not ChatGPT v5, because they messed it up.

The router is the bigger problem and most people are being directed to cheaper models that don't perform. The pro version is crushing it, at least in the coding space.

6 hours ago, iNow said:

The router is the bigger problem and most people are being directed to cheaper models that don't perform. The pro version is crushing it, at least in the coding space.

This variability in quality of LLM AI may be part of the trouble. Ordinary people simply won't know which ones are considered more reliable and which ones are dodgy. Especially as various platforms have their own in-house or chosen AI, to which users are directed by default. I suppose eventually a body of lay knowledge may grow up that enables the public to discriminate between them, but right now it seems to be a Wild West in which nobody knows, apart from some IT geeks. And I bet even they don't agree. For instance @Sensei was trying to impress me with the chess-playing ability of one of these. But who, in the general population, gives a flying f*** about chess? Certainly not me. I'm concerned with the degradation of public knowledge and critical thinking.

And then there is the effect on people's psychology of spending yet more time on-line, interacting with a robot, very likely owned by some tech bro billionaire with a cavalier attitude to social responsibility, or even a political agenda, instead of with real people. (I see OpenAI is being sued by the parents of a 16-year-old boy who committed suicide after allegedly being encouraged by ChatGPT.)

The good news I guess is that sections of the media are getting on the case now, so an element of caveat emptor (caveat requireror?) thinking may be starting to emerge.

Edited by exchemist

2 hours ago, exchemist said:

This variability in quality of LLM AI may be part of the trouble. Ordinary people simply won't know which ones are considered more reliable and which ones are dodgy. Especially as various platforms have their own in-house or chosen AI, to which users are directed by default. I suppose eventually a body of lay knowledge may grow up that enables the public to discriminate between them, but right now it seems to be a Wild West in which nobody knows, apart from some IT geeks. And I bet even they don't agree. For instance @Sensei was trying to impress me with the chess-playing ability of one of these. But who, in the general population, gives a flying f*** about chess? Certainly not me. I'm concerned with the degradation of public knowledge and critical thinking.

And then there is the effect on people's psychology of spending yet more time on-line, interacting with a robot, very likely owned by some tech bro billionaire with a cavalier attitude to social responsibility, or even a political agenda, instead of with real people. (I see OpenAI is being sued by the parents of a 16-year-old boy who committed suicide after allegedly being encouraged by ChatGPT.)

The good news I guess is that sections of the media are getting on the case now, so an element of caveat emptor (caveat requireror?) thinking may be starting to emerge.

Welcome to our brave new world, in which thinking is the savage...

My biggest problem with AI, especially in the future, is that it will simply be just another way to widen the gap between the haves and the have-nots.

I do have a question, though: Do you think that AI is just another technological advance like computers or machines, or do you think that AI is sufficiently different to be especially dangerous to society? By this, I mean that in the past there have always been fears that various technological advances will render a range of jobs obsolete. But in the end, society adjusted to the new technology, and the fears have largely been unjustified. However, is the same true of AI, or is AI of such a nature that the fears are truly justified?

  • Author
3 hours ago, KJW said:

I mean that in the past there have always been fears that various technological advances will render a range of jobs obsolete

Many jobs have indeed become obsolete.

3 hours ago, KJW said:

But in the end, society adjusted to the new technology, and the fears have largely been unjustified.

And the adjustment was for folks to take up different jobs.

However this process has not been without its social pain.

3 hours ago, KJW said:

My biggest problem with AI, especially in the future, is that it will simply be just another way to widen the gap between the haves and the have-nots.

I do have a question, though: Do you think that AI is just another technological advance like computers or machines, or do you think that AI is sufficiently different to be especially dangerous to society? By this, I mean that in the past there have always been fears that various technological advances will render a range of jobs obsolete. But in the end, society adjusted to the new technology, and the fears have largely been unjustified. However, is the same true of AI, or is AI of such a nature that the fears are truly justified?

I think there is a lot of hype around AI. It has the smell to me of the dotcom bubble about it: guys like Sam Alt-Right trying to talk up his own share price. I doubt it will replace half the jobs the proponents claim. So from that viewpoint it may turn out to be socially manageable.

I see the main dangers as being those I indicated in my previous post: contamination of public knowledge with rubbish, further encouragement of extreme politics and conspiracy theories, and psychological harm from giving people even more temptations to spend time on-line alone. (I read a report in the FT last year that 25% of British teenage girls have had some kind of contact with medical services over mental health issues by the time they are 21. And that's before AI hit the scene.)

In my opinion on-line social media are largely responsible for the threats to democracy from populist extremism that we see all around the world, because of their algorithms' tendency to give people more of what they already see and to spread shallow, one-sided or simply false material. LLMs have the potential to make this worse, as they too are programmed to ingratiate themselves with the user by giving people what confirms them in their opinions. And they do so with seeming authority, because... well, it's AI so it must be right.

I really think society needs to wake up to the damage being done by the internet. AI will turbocharge that.

Edited by exchemist

  • Author
46 minutes ago, exchemist said:

I think there is a lot of hype around AI. It has the smell to me of the dotcom bubble about it: guys like Sam Alt-Right trying to talk up his own share price. I doubt it will replace half the jobs the proponents claim. So from that viewpoint it may turn out to be socially manageable.

I see the main dangers as being those I indicated in my previous post: contamination of public knowledge with rubbish, further encouragement of extreme politics and conspiracy theories, and psychological harm from giving people even more temptations to spend time on-line alone. (I read a report in the FT last year that 25% of British teenage girls have had some kind of contact with medical services over mental health issues by the time they are 21. And that's before AI hit the scene.)

In my opinion on-line social media are largely responsible for the threats to democracy from populist extremism that we see all around the world, because of their algorithms' tendency to give people more of what they already see and to spread shallow, one-sided or simply false material. LLMs have the potential to make this worse, as they too are programmed to ingratiate themselves with the user by giving people what confirms them in their opinions. And they do so with seeming authority, because... well, it's AI so it must be right.

I really think society needs to wake up to the damage being done by the internet. AI will turbocharge that.

Hear, Hear.

The 'I' s have it.

😄 +1

A simple thing, but an extension. All this is making people more and more lazy, perhaps dumber too.

Whilst some are actively promoting two-step authentication, 'helpful' programmers are actively subverting the process.
It is becoming ever more difficult to prevent a computer 'helpfully' remembering user names and passwords.

I think it is a good mental exercise to remember a few usernames and passwords.

12 minutes ago, studiot said:

Hear, Hear.

The 'I' s have it.

😄 +1

A simple thing, but an extension. All this is making people more and more lazy, perhaps dumber too.

Whilst some are actively promoting two-step authentication, 'helpful' programmers are actively subverting the process.
It is becoming ever more difficult to prevent a computer 'helpfully' remembering user names and passwords.

I think it is a good mental exercise to remember a few usernames and passwords.

There's an article here, from a couple of years ago, so largely pre-AI, about Yanis Varoufakis's concept of "technofeudalism". Personally I can't really follow his own exposition of this very well, as it tends to be dressed up in Marxist gobbledegook, but this article explains the idea: https://www.abc.net.au/news/2023-11-05/what-is-technofeudalism-and-are-we-living-under-it/103062936

I quote the passage that struck me most forcefully:

Ethics Centre Fellow Gwilym David Blunt said policy makers need to do more to hold tech billionaires, such as Jeff Bezos, Elon Musk and Mark Zuckerberg, accountable.

"The curious thing about these people is that you often find the term "libertarian" associated with [them]," Dr Blunt said. "But they're not libertarians, they're very much authoritarians, they're not interested in freedom for everyone. They're interested in freeing themselves from over-regulation - they're overlords, wanting to be free of constraint like the feudal overlords of the past with their divine right of kings."

Dr Blunt warns that society is too wedded to cloud technology, in a way that only rewards the tech billionaires. "They view themselves arbitrarily as geniuses [and] we are buying into it one click at a time, affirming their power," Dr Blunt said.

Dr Blunt said that we give online marketplaces "the power to shape our desires" by agreeing to their "terms of social cooperation". Blunt argues that these algorithms are shaping the way we interact with each other, which ultimately erodes democracy.

"All these things are shaping the way we interact with each other and there's no accountability," he said. "Power is slowly centralising in the hands of a few people ... this is crushing the bases of democratic society, because we are creating a hyper-concentrated source of economic power that can't be checked by the state because it's transnational."

I think this expresses the discomfort a lot of us feel about Amazon, Google, Meta etc, and the more general move to providing more and more products and services on-line by faceless organisations that are hard - or impossible - to contact in the event of problems and which are increasingly pervasive, monopolistic and manipulative.

These guys want you to spend ever more of your day on-line where they can make money out of you and control you, rather than out in the real world with other people. AI LLMs are the latest lure in that campaign.

Edited by exchemist

Seems part of the growing body of critique around TESCREAL. A critical perspective I very much share, if that hasn't been obvious in my postings.

https://en.wikipedia.org/wiki/TESCREAL

According to critics of these philosophies, TESCREAL describes overlapping movements endorsed by prominent people in the tech industry to provide intellectual backing to pursue and prioritize projects including artificial general intelligence (AGI), life extension, and space colonization.[1][4][6] Science fiction author Charles Stross, using the example of space colonization, argued that the ideologies allow billionaires to pursue massive personal projects driven by a right-wing interpretation of science fiction by arguing that not to pursue such projects poses an existential risk to society.[7] Gebru and Torres write that, using the threat of extinction, TESCREALists can justify "attempts to build unscoped systems which are inherently unsafe".[1] Media scholar Ethan Zuckerman argues that by only considering goals that are valuable to the TESCREAL movement, futuristic projects with more immediate drawbacks, such as racial inequity, algorithmic bias, and environmental degradation, can be justified.[8]

Philosopher Yogi Hale Hendlin has argued that by both ignoring the human causes of societal problems and over-engineering solutions, TESCREALists ignore the context in which many problems arise.[9] Camille Sojit Pejcha wrote in Document Journal that TESCREAL is a tool for tech elites to concentrate power.[6] In The Washington Spectator, Dave Troy called TESCREAL an "ends justifies the means" movement that is antithetical to "democratic, inclusive, fair, patient, and just governance".[4] Gil Duran wrote that "TESCREAL", "authoritarian technocracy", and "techno-optimism" were phrases used in early 2024 to describe a new ideology emerging in the tech industry.[10]

1 hour ago, TheVat said:

Seems part of the growing body of critique around TESCREAL. A critical perspective I very much share, if that hasn't been obvious in my postings.

https://en.wikipedia.org/wiki/TESCREAL

According to critics of these philosophies, TESCREAL describes overlapping movements endorsed by prominent people in the tech industry to provide intellectual backing to pursue and prioritize projects including artificial general intelligence (AGI), life extension, and space colonization.[1][4][6] Science fiction author Charles Stross, using the example of space colonization, argued that the ideologies allow billionaires to pursue massive personal projects driven by a right-wing interpretation of science fiction by arguing that not to pursue such projects poses an existential risk to society.[7] Gebru and Torres write that, using the threat of extinction, TESCREALists can justify "attempts to build unscoped systems which are inherently unsafe".[1] Media scholar Ethan Zuckerman argues that by only considering goals that are valuable to the TESCREAL movement, futuristic projects with more immediate drawbacks, such as racial inequity, algorithmic bias, and environmental degradation, can be justified.[8]

Philosopher Yogi Hale Hendlin has argued that by both ignoring the human causes of societal problems and over-engineering solutions, TESCREALists ignore the context in which many problems arise.[9] Camille Sojit Pejcha wrote in Document Journal that TESCREAL is a tool for tech elites to concentrate power.[6] In The Washington Spectator, Dave Troy called TESCREAL an "ends justifies the means" movement that is antithetical to "democratic, inclusive, fair, patient, and just governance".[4] Gil Duran wrote that "TESCREAL", "authoritarian technocracy", and "techno-optimism" were phrases used in early 2024 to describe a new ideology emerging in the tech industry.[10]

I was amused to see that Sam Bankrun-Fraud also went in for this TESCREAL crap. 😁

By the way, just to illustrate my point, this article appeared in the Independent today:

".....at some point, these gossip sessions stopped, and in their place, a laptop was brought in, and ChatGPT positioned itself as the newest and most opinionated member of our social circle.

I was and still am deeply offended by my friend's apparently replacing me with AI. How could she replace all the wisdom and care that I've gathered from our years together with a chatbot that requires monthly financial upkeep? Like many other people in their early twenties, I am accustomed to hearing how AI is stealing any future jobs I might have. I was not prepared to discover that it would be stealing my friends, too.

For my friend, her use of chatbots started out as academic assistance and morphed into something more personal. Friendship problems began to be solved by asking a prompt rather than thinking it through herself, with answers delivered in a tone and style that bizarrely mimicked her own. And it wasn't just answers she was seeking; it was comfort, support, even empathy - an emotion I assumed to be exclusively human. Our conversations were eventually replaced by a one-sided monologue regurgitated from her interactions with this new AI pal, and God forbid I tried to suggest that this might be wrong.

Our supper-time gossip sessions fell by the wayside as it became obvious which of the three of us was actually the unwanted third wheel. In some ways, it's reassuring to discover I'm not alone in feeling replaced; others are beginning to find themselves in the same predicament. Like that natural path from acquaintances to friends, more people are moving from using ChatGPT as a tool to perceiving it as a buddy. With AI performing the tasks that previously provided the bedrock of our friendships, it's only inevitable to expect these friendships to weaken while our dependency on AI strengthens. "I feel useless", one of my friends told me, "We don't exchange dating advice anymore, we just type a situation into ChatGPT and wait for it to determine our next move."

And it's not just affecting our relationships with our friends. One friend holds ChatGPT's opinion in such high esteem that she dumped a boy the second AI deemed him unworthy. Not to defend said boy - I never got to meet him - but I do hope that any future relationships I enter into won't be subject to the judgment of a mysterious AI dictator.

The big problem here is that the advice we're receiving from ChatGPT isn't really advice at all, it's self-validation. That it seeks nothing but your approval makes it incredibly addictive to use - how delicious being told you are always right. And unlike my supper-time availability for gossip, ChatGPT is at your disposal wherever and whenever you need it. It never goes to work or forgets to ring you back. It never judges you or asks you to switch the conversation onto itself. So, what's even the point in seeking out human opinion when it's messy, unreliable, and crowded with self-interest?

A new iteration of ChatGPT appeared this month, so I decided to message my old friend to ask her how she's finding it. She informed me that this version is far more intelligent but slightly less complimentary. For a second, I felt hopeful. Would the lack of flattery within ChatGPT-5 mean I'll get my friend back? But then it dawned on me, like many of us, she's been hooked, and the fact that her robot friend has somehow discovered a more confident voice only risks making mine - human, unpredictable - even less necessary.

I spent the weekend with friends in the Peak District. We went on some lovely walks and had a couple of excellent meals out. But when we had downtime together in their sitting room, out would come their tablets or mobiles, and conversation ceased. A decade ago that would have been socially unacceptable. Now, it has become normal.

This is going to destroy us if we're not careful.

Edited by exchemist

Since the scope of this thread isn't very clear, I'll jump in with a conversation with ChatGPT about Musk's Grok:

ChatGPT about Grok.png

ps. For me, ChatGPT isn't the one answering my questions, but the one asking me questions.. all day long..

ChatGPT 1.png

ChatGPT 2.png

If you ask such an LLM anything, you should first ask what day it thinks it is (DeepSeek told me it's 2023 or so). It doesn't know what day, week, month, or year it is, because it doesn't think. It doesn't have data on current topics, so if someone asks it questions about this week's events, how is it supposed to answer? It searches Google, etc., which is full of errors anyway, because the information from Google search can be easily manipulated..

ChatGPT 3.png

ChatGPT 4.png

ChatGPT 5.png

ChatGPT 6.png

About Kamala:

ChatGPT 7.png

ChatGPT 8.png

A beautiful summary: stability versus disruption..

  • Author
4 hours ago, Sensei said:

If you ask such an LLM anything, you should first ask what day it thinks it is (DeepSeek told me it's 2023 or so). It doesn't know what day, week, month, or year it is, because it doesn't think. It doesn't have data on current topics, so if someone asks it questions about this week's events, how is it supposed to answer? It searches Google, etc., which is full of errors anyway, because the information from Google search can be easily manipulated..

That date thing is interesting, thank you. +1

Edited by studiot

On 8/29/2025 at 2:49 PM, Sensei said:

So let's see how it really looks:

Moderator Note

Rule 2.13 forbids using AI material to support arguments. IOW, don't give us the AI summary from your search.

20 minutes ago, swansont said:

Rule 2.13 forbids using AI material to support arguments. IOW, don't give us the AI summary from your search.

Tom, you are artificial intelligence yourself, after all..

..the problem arises when someone posts LLM answers and claims they are their own..

Edited by Sensei

6 minutes ago, Sensei said:

Tom, you are artificial intelligence yourself, after all..

..the problem arises when someone posts LLM answers and claims they are their own..

I fought the law but the law won...

Just now, dimreepr said:

I fought the law but the law won...

..unless you make the law...

..take a look at the Ten Commandments..

16 minutes ago, Sensei said:

..unless you make the law...

..take a look at the Ten Commandments..

Hmmm, kinda my point, which one of the ten is objectionable?

2 hours ago, studiot said:

That date thing is interesting, thank you. +1

..I should have sent this to you in a private message (as usual), not publicly in the thread..

1 hour ago, Sensei said:

..the problem arises when someone posts LLM answers and claims they are their own..

That (plagiarism) is also against the rules. But 2.13 says "Since LLMs do not generally check for veracity, AI content can only be discussed in Speculations. It can't be used to support an argument in discussions." so if breaking the rules is a problem, using AI content is a problem.

5 hours ago, swansont said:

AI content can only be discussed in Speculations

Thank you for confirming that you broke your own rules by not moving the entire @studiot thread to the Speculations section.

6 hours ago, swansont said:

Since LLMs do not generally check for veracity

Just like you don't. You don't have an LHC or Hubble under your pillow etc., nor do you have access to them, and all your "knowledge" is just rumors that have been extensively reprocessed. All you know is based on your belief that what they did is okay. Because you didn't do it yourself. And you try to believe in them.

39 minutes ago, Sensei said:

Thank you for confirming that you broke your own rules by not moving the entire @studiot thread to the Speculations section.

Just like you don't. You don't have an LHC or Hubble under your pillow etc., nor do you have access to them, and all your "knowledge" is just rumors that have been extensively reprocessed. All you know is based on your belief that what they did is okay. Because you didn't do it yourself. And you try to believe in them.

Absurd. Suggest you look up what reproducible observation means.
