

2 minutes ago, exchemist said:

Absurd. Suggest you look up what reproducible observation means.

Knock yourself on the head. That's not what we're talking about. Can you read and understand the text? Someone is conducting these experiments, you (personally and literally you) believe in them, and you're just reading about them on the Internet.

All you know is that you believe they did their job well. You cannot obtain their data at home. Their experiments are too sophisticated, too complicated to repeat at home.

Edited by Sensei

4 hours ago, Sensei said:

Thank you for confirming that you broke your own rules by not moving the entire @studiot thread to the Speculation section.

Not sure what you hope to accomplish with this.

4 hours ago, Sensei said:

Just like you don't. You don't have an LHC or Hubble under your pillow etc., nor do you have access to them, and all your “knowledge” is just rumors that have been extensively reprocessed. All you know is based on your belief that what they did is okay. Because you didn't do it yourself. And you try to believe in them.

Yeah, you brought this same nonsense up when we discussed the policy.

https://scienceforums.net/topic/133849-aillm-policy-discussion/

It was bollocks then, and nothing has changed.

3 hours ago, Sensei said:

Knock yourself on the head. That's not what we're talking about. Can you read and understand the text? Someone is conducting these experiments, you (personally and literally you) believe in them, and you're just reading about them on the Internet.

All you know is that you believe they did their job well. You cannot obtain their data at home. Their experiments are too sophisticated, too complicated to repeat at home.

It may not be what you’re talking about, but it’s part of the discussion. The logical conclusion from this “belief” assertion is that science is a massive conspiracy and we’re making it all up. Because that’s where repeatability/reproducibility enters into it. If the experiment is made up, then so must the next one that reproduces the result, or builds on it. A huge house of cards. But technology is built on it, too, and it works.

The issue is not belief but trust - a matter of credibility - and the fact that the technology works, and the experiments you can do agree with other results, builds trust in the science you can’t personally check. e.g. GPS actually works. They aren’t faking it, and it’s not just some happy accident that it works.

With an AI result, we don’t know where it’s coming from, or if it’s a hallucination. If we did know the source, then you can cite that instead. Then everybody can see/decide if it’s a credible source.

10 hours ago, Sensei said:

Knock yourself on the head. That's not what we're talking about. Can you read and understand the text? Someone is conducting these experiments, you (personally and literally you) believe in them, and you're just reading about them on the Internet.

All you know is that you believe they did their job well. You cannot obtain their data at home. Their experiments are too sophisticated, too complicated to repeat at home.

Yes I can understand the text. I can also discern its implication.

Your argument here reminds me a bit of Dawkins's quip: "Show me a cultural relativist at 30,000ft and I'll show you a hypocrite." While you are not espousing cultural relativism, you are in effect claiming there is no such thing as a reliable body of human knowledge, only an individual's own direct experience. That is a (rather destructive) form of nihilism. It is also simply untenable, for the reasons @swansont has explained. Science has a methodology, by design, that accepts knowledge verified by more than one human being. We thus rely on one another's experiences and build models based on a consensus of what those experiences seem to be telling us. This is what makes science reliable - reliable enough for us to fly at 30,000ft, for instance.

Something similar, though not assessed so formally, is also what makes certain sources of information credible, i.e. they have been found to be so by many human beings, over a period of time. That is why on forums like these we often ask for sources to back up claims, so that we can judge the claims based on the quality of the sources.

Whereas, as we know, LLMs disturbingly often make shit up, or rely on bad sources.

Edited by exchemist

13 hours ago, Sensei said:

Just like you don't. You don't have an LHC or Hubble under your pillow etc., nor do you have access to them, and all your “knowledge” is just rumors that have been extensively reprocessed. All you know is based on your belief that what they did is okay. Because you didn't do it yourself. And you try to believe in them.

You seem to have an almost religious obsession with your AI chums; it's difficult to work out which side of the fence you're on wrt the topic.

8 hours ago, exchemist said:

Whereas, as we know, LLMs disturbingly often make shit up, or rely on bad sources.

And they deliver it with absolute confidence.

18 hours ago, Sensei said:

Knock yourself on the head. That's not what we're talking about. Can you read and understand the text? Someone is conducting these experiments, you (personally and literally you) believe in them, and you're just reading about them on the Internet.

All you know is that you believe they did their job well. You cannot obtain their data at home. Their experiments are too sophisticated, too complicated to repeat at home.

You reproduce the experiment and see if the results tally. The more people that perform the experiment with concurring results, the greater the communal confidence in the outcome that this is how nature behaves. If it reaches a sufficient level of expert consensus it becomes accepted as part of the scientific corpus and is called a 'Theory'. This is scientific method 101.

Edited by StringJunky

On 9/4/2025 at 6:59 PM, StringJunky said:

You reproduce the experiment and see if the results tally. The more people that perform the experiment with concurring results, the greater the communal confidence in the outcome that this is how nature behaves. If it reaches a sufficient level of expert consensus it becomes accepted as part of the scientific corpus and is called a 'Theory'. This is scientific method 101.

I think you haven't read what we were talking about. No one has an LHC or Hubble at home... or the ability to verify their data, etc. So you're talking about something different than what we're talking about...

I don't deny science, quantum physics, or anything like that. I simply stated the fact that some random person has no idea what he/she reads (if they can read at all :) )... and it's exactly the same as an LLM reading the same things.

Ordinary people don't have an LHC or a Hubble telescope at home.

LLM doesn't either.

So all the data that an LLM has, and that an ordinary person like Swanson has, is the same data...

ps. Why are you dumber than ChatGPT (not you, StringJunky)? I don't get it!

I didn't give you the example of “play chess with ChatGPT” to show you how pathetic you are.

exchemist answered, “I don't play chess”... What kind of idiot doesn't know how to play chess?

No wonder he's afraid to use an LLM.

In the 1990s, people were excited that “A.I.” would beat the chess champion, and now this ex-chemist is shitting his pants.

What a day! What an age!

How can people from such an ‘elite’ group not be afraid of all these 'a.i.'s?

If someone considers themselves to be elite and smart, they should be the first to play chess with such an 'a.i.'

Edited by Sensei

4 hours ago, Sensei said:

So all the data that an LLM has, and that an ordinary person like Swanson has, is the same data...

Nope, it's data without contextual understanding.

4 hours ago, Sensei said:

I think you haven't read what we were talking about. No one has an LHC or Hubble at home... or the ability to verify their data, etc. So you're talking about something different than what we're talking about...

I think you don't understand what you're talking about, cold fusion puts a stick in your spokes, for a variety of reasons, not least of which is the robustness of the scientific process against any thought of some sort of conspiracy.

6 hours ago, Sensei said:

I think you haven't read what we were talking about. No one has an LHC or Hubble at home... or the ability to verify their data, etc. So you're talking about something different than what we're talking about...

I don't deny science, quantum physics, or anything like that. I simply stated the fact that some random person has no idea what he/she reads (if they can read at all :) )... and it's exactly the same as an LLM reading the same things.

Ordinary people don't have an LHC or a Hubble telescope at home.

LLM doesn't either.

So all the data that an LLM has, and that an ordinary person like Swanson has, is the same data...

I don’t know what your point is, and you apparently don’t know what mine is. Not having the LHC or Hubble at our disposal is completely irrelevant. If you cite an AI as a response, nobody knows where the information came from, so it could have come from an unreliable source, the AI could have botched the inquiry, or it could have hallucinated the response. If you give a link that goes to a paper written by a research group doing work at the LHC, then people know it came from LHC scientists. If you link to a crackpot’s website, people will know that. Because crackpot information is also accessible to LLMs. I have access to crackpot sources, too, but I know not to use them.

Citations are about revealing the source of the information.

Not long ago you could ask Google if lawyers were human, and the AI summary would tell you they were not. Consumer-grade LLMs are not credible sources, so they are not allowed to be used as if they were.

6 hours ago, Sensei said:

ps. Why are you dumber than ChatGPT (not you, StringJunky)? I don't get it!

I didn't give you the example of “play chess with ChatGPT” to show you how pathetic you are.

exchemist answered, “I don't play chess”... What kind of idiot doesn't know how to play chess?

No wonder he's afraid to use an LLM.

Moderator Note

This seems like you want to be suspended or banned. Is this the case, is civility beyond your reach now? It's a weak argument that resorts to insult. I think you're better than this.

6 hours ago, Sensei said:

How can people from such an ‘elite’ group not be afraid of all these 'a.i.'s?

You've been posting contrary arguments in this thread, chastising others for not embracing AI in one post and then posting something like this, where you seem to be saying "You SHOULD be very afraid!"

You also switch from talking about scientists and members here to "some random person" as if the two were equivalent wrt scientific knowledge.

Perhaps this inconsistency has led to a lack of understanding and a lot of frustration. In a discussion, it's your job to clarify your position, and so far you've been leaning heavily on sarcasm and insult and skimping on persuasion and explanation.

Please stop this if you want to keep discussing this topic.

Back to the OP; yesterday I was searching to find a song used in a particular episode of a TV show.

Mostly all I could locate was other people also trying to find that same song.

Eventually the AI summary on the search results said it was "Song Name" from the band "Band Name".

Couldn't find that, or them, on Spotify, so tried searching for the song/band with those provided names.

The AI summary then said "There are no matches for that, it doesn't exist".

Sigh.

Edited by pzkpfw

1 hour ago, pzkpfw said:

I was searching to find a song used in a particular episode of a TV show.

If you’re still looking, the Internet Movie Database often has soundtrack info for TV episodes.

48 minutes ago, swansont said:

If you’re still looking, the Internet Movie Database often has soundtrack info for TV episodes.

Thank you, but did try there and no luck.

19 hours ago, pzkpfw said:

I was searching to find a song used in a particular episode of a TV show.

If you have a recording of the episode, you could try Shazam.

1 hour ago, KJW said:

If you have a recording of the episode, you could try Shazam.

Another good suggestion, however in one of the places I found people asking the same question, somebody mentioned they'd tried Shazam and had no luck.

(Maybe a fresh try would get a different result ...)

20 hours ago, pzkpfw said:

Eventually the AI summary on the search results said it was "Song Name" from the band "Band Name".

May I ask what you used as input in your searches?

(Curious; it may be a useful example in a work-related presentation)

47 minutes ago, Ghideon said:

May I ask what you used as input in your searches?

(Curious; it may be a useful example in a work-related presentation)

Sorry, I don't remember in enough detail for it to be useful, probably not able to be replicated exactly.

I was googling with various combinations of the name of the show, the episode, and snippets of the lyrics I remembered. Sometimes just the lyrics, to try a music search distinct from the show. Later attempts used more lyrics I got from pages I found where people were asking the same question.

(For the record, it was a song used in the middle of episode 20, season 5, of "The Rookie".)

I use AI daily in my work now (programmer, mostly C#, mostly back end). Sometimes I'm astounded by AI code suggestions, but often it's easy to see where they come from - e.g. I just added a property to a model, then I go to where some code is setting values in an object and code "appears" to set the new property. About half the time, though, the suggestions are just bunk. Most useful in general is being able to ask Copilot, in English, for the code to do something where I more or less already know the answer; getting a syntactically correct snippet to copy-paste simply saves time. I hardly ever go to Stack Overflow now.

What's your presentation on?

21 minutes ago, pzkpfw said:

For the record, it was a song used in the middle of episode 20, season 5, of "The Rookie".

But presumably not With the Wind by SUR or Sax by Fleur East

(I remember reading a suggestion that turning on subtitles while watching sometimes identifies a song, in case you watch again)

1 hour ago, pzkpfw said:

Another good suggestion, however in one of the places I found people asking the same question, somebody mentioned they'd tried Shazam and had no luck.

(Maybe a fresh try would get a different result ...)

There are examples of tunes written just for the TV show, which are hard to track down, so that could be the case here

15 minutes ago, swansont said:

But presumably not With the Wind by SUR or Sax by Fleur East

Yes (or, um, no). Those two songs are suggested in a few places, but are not the target.

15 minutes ago, swansont said:

(I remember reading a suggestion that turning on subtitles while watching sometimes identifies a song, in case you watch again)

Oh, might try that. The show is on two different streaming services I have access to, so worth a shot.

15 minutes ago, swansont said:

There are examples of tunes written just for the TV show, which are hard to track down, so that could be the case here

I did consider that, and it could turn out true, but with the lyrics and all it seemed a bit much to be a one-off thing.

6 hours ago, pzkpfw said:

What's your presentation on?

Introduction to AI usage for legal professionals.

Your example is interesting because it is easy to relate to and also opens up multiple lines of reasoning about generative AI.
1: We know an answer exists; the episode does exist and it has music. But it may or may not be included in the training data for the model. The music may be unreleased.
2: There are many different ways to search for the answer, depending on what one knows about the episode, the music, or other details that allow a model to infer an answer. Multimodality comes into play; does the model infer the answer from text only, or also audio and video?
3: If the model inference fails, what does it output? In the context of this thread (and in my presentation), is the response "intelligent" or at least useful?
4: Context; how does the level of detail provided to the model affect the answer?

Note: In this specific case I did a quick test and it failed to find the music even with web search enabled. But I got a possibly useful explanation of why it failed* and a suggestion**.

*) Short extract: custom/production-library needle-drop (or bespoke cue) cleared for the episode but not commercially released.
**) contact the musical supervisors. (The AI got the names from the credits of the episode)

Edited by Ghideon
grammar

On 9/6/2025 at 4:36 PM, swansont said:

If you cite an AI as a response, nobody knows where the information came from, so it could have come from an unreliable source,

If you use that crappy Google Gemini (as of September 2025), there is an icon on the right side of the answer that you can click to get a link to the source of the data used in the summary.

I don't think you understand that every source is unreliable. The question is just how much.

When someone uses an LLM, they expect 100% certainty. And when the answer comes from a human, you don't expect 100%. Or maybe you're just fooling yourself? I know you wouldn't stand a chance against that crappy ChatGPT on any topic. Unfortunately, they downgraded it, so now it will be worse than it was a few months ago.

That's why I asked you to play chess with it, as an example of your human frailty, to show you how mediocre you are compared to that shitty LLM.

If I get an answer from ChatGPT 4o, it will be 99.9% better than any human's answer on the same topic. It has read the entire Wikipedia and all those scientific PDFs. And it remembers everything word for word.

To detect an error in its results, you would have to be a genius and know a lot about the subject. It's damn difficult. Again, we're not talking about that piece of s**t called Google Gemini. When someone mentions “AI”/LLM, they shouldn't even mention that thing.

Every time I use ChatGPT, I have to criticize it for making mistakes.

It's not like you ask and it says “okay”. Every answer gets criticized. To criticize it, you have to know what you're talking about. “Where's the error handling?!” “Why didn't you do this and that in this and that line?!” To make a damn script in Bash on Linux using ChatGPT, I spent more time scolding it for generating bad code than I would have if I had written it myself. But it was fun! I felt almost like a slave overseer.

If someone doesn't know anything about programming, they would say “wow” after seeing the first (crappy) version... and wouldn't know that it's crappy.
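For readers wondering what "Where's the error handling?!" means in practice, here is a minimal sketch of the kind of defensive Bash being demanded. It is purely illustrative: the throwaway config file and entry-counting task are hypothetical, not taken from the actual exchange.

```shell
#!/usr/bin/env bash
# Fail fast: exit on any error, on use of an unset variable,
# and on a failure anywhere in a pipeline.
set -euo pipefail

# Hypothetical config file, created here so the sketch is self-contained.
config="$(mktemp)"
trap 'rm -f "$config"' EXIT   # clean up the temp file on exit
printf '%s\n' '# comment' 'zone "a"' '' 'zone "b"' > "$config"

# Guard: refuse to continue if the file is unreadable.
if [[ ! -r "$config" ]]; then
    echo "error: cannot read '$config'" >&2
    exit 1
fi

# Count non-comment, non-blank lines. grep exits 1 on zero matches,
# so '|| true' keeps set -e from aborting the whole script.
entries="$(grep -cv -e '^#' -e '^$' "$config" || true)"
echo "entries=$entries"
```

The point of the sketch is that each failure mode (missing file, zero matches, unset variable) is handled explicitly instead of being silently swallowed, which is exactly what a first-draft LLM-generated script tends to omit.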

On 9/13/2025 at 2:48 AM, pzkpfw said:

Back to the OP; yesterday I was searching to find a song used in a particular episode of a TV show.

Mostly all I could locate was other people also trying to find that same song.

Eventually the AI summary on the search results said it was "Song Name" from the band "Band Name".

Couldn't find that, or them, on Spotify, so tried searching for the song/band with those provided names.

The AI summary then said "There are no matches for that, it doesn't exist".

Sigh.

I don't understand why people here are even talking about this crap called Google Gemini. You don't understand how it works at all. It only "reads" what it finds in the search results that pop up just below the summary. It can't come up with anything, absolutely nothing, that isn't in those Google search engine results.

If the results of a regular Google search do not contain the right answer, it will not invent it out of thin air.

Use IMDB to check who created the soundtrack for the film, and search for their songs based on their full name.

Besides, why would anyone digitize some prehistoric pieces? You'd have to be some kind of hobbyist devoted to such an author.

See how many years must pass after the death of the heirs for something to become ‘public domain’.

The heirs may not want something to be in the public domain. Spotify is a bad place to look for such things ;)

Today, I had a DNS server installation. The local installation went smoothly, without any problems, with Google searches etc. I expected everything to go as smoothly on the remote server as it did locally. Except that it's a different Linux, a different architecture, and everything else is different. I searched with classic Google search. They wrote what to do. It didn't work. So I asked that shitty ChatGPT (asking Google Gemini anything makes no sense whatsoever). After 30-60 minutes of struggling with everything that needed to be tested, one by one, we figured out the options that work. In fact, it is unclear why these configuration options work on one system but not on another. The documentation advised not to use these options at all. But they helped.

@studiot

Bad news. They downgraded ChatGPT from 4o to 4, so it has even less knowledge and is less efficient than what we talked about earlier:

[Screenshot attachment: ChatGPT.png]

When we talked about this earlier, a month or two ago, I said it was trained on data up until 2023. That's no longer the case.

We are talking about what is free and does not require logging in, etc.

The most annoying change is that it can no longer read what is provided at a link, and does not load it with curl/wget etc.

ps. What a beautiful day - I found a beer in the kitchen that I had no idea was there.

9 hours ago, pzkpfw said:

(For the record, it was a song used in the middle of episode 20, season 5, of "The Rookie".)

At which second?

Daddy Cop "The Rookie" would be too easy.

Edited by Sensei

4 hours ago, Sensei said:

If you use that crappy Google Gemini (as of September 2025), there is an icon on the right side of the answer that you can click to get a link to the source of the data used in the summary.

So there’s no excuse for not doing what is required.

4 hours ago, Sensei said:

When someone uses an LLM, they expect 100% certainty. And when the answer comes from a human, you don't expect 100%. Or maybe you're just fooling yourself? I know you wouldn't stand a chance against that crappy ChatGPT on any topic. Unfortunately, they downgraded it, so now it will be worse than it was a few months ago.

And it’s these people who are using it and posting AI slop here that precipitated the rule.

4 hours ago, Sensei said:

That's why I asked you to play chess with it, as an example of your human frailty, to show you how mediocre you are compared to that shitty LLM.

You mentioned chess in response to exchemist, not me. And then got rude about it.

4 hours ago, Sensei said:

If I get an answer from ChatGPT 4o, it will be 99.9% better than any human's answer on the same topic. It has read the entire Wikipedia and all those scientific PDFs. And it remembers everything word for word.

That’s why you don’t ask a random person a question that requires expertise.

And one of the issues is people using it for things where the answer isn’t in Wikipedia or the scientific pdfs, because they’re trying to generate a new theory. Or they’re looking for support of an idea that has none. That’s when the LLM makes up an answer, and does what it’s programmed to do: produce a plausible-sounding answer. It doesn’t care that it’s not a correct answer (because that’s its programming, and it can’t care anyway).

4 hours ago, Sensei said:

To detect an error in its results, you would have to be a genius and know a lot about the subject. It's damn difficult. Again, we're not talking about that piece of s**t called Google Gemini. When someone mentions “AI”/LLM, they shouldn't even mention that thing.

We routinely do detect errors, and I’d thank you for the compliment but it’s not difficult to google a citation and see that there are no results for it, nor does it require extensive knowledge to know that lawyers are indeed human, or to do some simple math that has been botched.

4 hours ago, Sensei said:

Every time I use ChatGPT, I have to criticize it for making mistakes.

I thought you said it gave really good answers.

4 hours ago, Sensei said:

It's not like you ask and it says “okay”. Every answer gets criticized. To criticize it, you have to know what you're talking about. “Where's the error handling?!” “Why didn't you do this and that in this and that line?!” To make a damn script in Bash on Linux using ChatGPT, I spent more time scolding it for generating bad code than I would have if I had written it myself. But it was fun! I felt almost like a slave overseer.

That’s not a good thing.

Nobody can stop you from using LLMs in unhealthy ways but you can’t use them here to give answers. That’s not changing.
