
Has there ever been an instance when AI has given you an answer that leaves a lot to be desired? It would be an interesting exercise to ask a non-trivial question in your discipline of choice to a reputable AI engine and see how the answer improves with time, if at all.

In this case I've used Google's AI module, which might not be the best. Here's my question, the answer, and what I think is wrong with it (O, for "objection"):

Q: Is gravity's strength scale-dependent?

A: No, the strength of gravity is not scale-dependent in the way that a small object experiences significantly different gravity than a large object. Newton's law of universal gravitation states that gravity's strength depends on mass and distance, not the size or scale of the object itself. While there are theoretical considerations about scale-dependent gravity in certain contexts, the everyday experience of gravity is not significantly affected by scale.

O: Ever heard of GR? Mass does not source gravity; energy does. When I say "gravity", I expect AI to understand GR, if only as a possibility. So gravity is very strongly scale-dependent. In fact, it is the only irredeemably scale-dependent interaction as far as I know.

The Brave AI disagrees with Google AI: "Gravity Strength Scale-Dependent"

Gravity's strength is indeed scale-dependent. On a macroscopic scale, such as between planets and stars, gravity is influenced by the mass and distance between objects. The greater the mass of the objects involved, the stronger the gravitational force, and the closer the objects are to each other, the stronger the gravitational pull.

On a microscopic scale, gravity is much weaker compared to other fundamental forces like electromagnetism. This is why the gravitational force between everyday objects like people, cars, and balls is negligible.
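As an aside (not part of the quoted answer), the "much weaker than electromagnetism" claim is easy to check numerically. A quick sketch comparing the gravitational and electrostatic forces between two protons, using standard constants; since both forces fall off as 1/r², the ratio is independent of separation:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
k = 8.988e9          # Coulomb constant, N m^2 C^-2
m_p = 1.6726e-27     # proton mass, kg
e = 1.602e-19        # elementary charge, C

# Ratio of Newtonian attraction to Coulomb repulsion between two protons.
ratio = (G * m_p**2) / (k * e**2)
print(f"F_gravity / F_electric for two protons ~ {ratio:.1e}")
# roughly 8e-37: gravity is ~36 orders of magnitude weaker
```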

Is that answer better? Its sources:

spark.iop.org: The strength of the gravitational force | IOPSpark
en.wikipedia.org: Gravity - Wikipedia
phys.org: How strong is gravity on other planets?
uu.edu: What is gravity and where is it the strongest in the United States? | Science Guys | Union University, a Christian College in Tennessee
en.wikipedia.org: Gravity of Earth - Wikipedia
nist.gov: How Do You Measure the Strength of Gravity? | NIST
studysmarter.co.uk: Gravitational Field Strength: Equation, Earth, Units | StudySmarter
spaceplace.nasa.gov: What Is Gravity? | NASA Space Place – NASA Science for Kids

Edited by StringJunky

AI often gives flat out erroneous answers on food science questions. Its sources on such topics are heavily contaminated with commercial material promoting a brand.

4 hours ago, joigus said:

Has there ever been an instance when AI has given you an answer that leaves a lot to be desired?

Many times. It’s why we don’t let people use it as support for an argument.

Not long ago there was a story about putting nonsense phrases into Google and getting the AI summary of the meaning and possible etymology, and it will make one up.

I tried “you can’t do handstands on a cupcake”, and while the summary gave a somewhat plausible interpretation, it told me “it’s often used in a lighthearted way”. Yet when I Googled the exact phrase I got ‘No results found for “you can’t do handstands on a cupcake”’, so it’s a nonexistent phrase that is nonetheless “often used”.

There was also a story about a student doing some physics: the AI summary of the text didn’t understand the use of 1/2 in exponents, so it made up some (wrong) garbage about the math, including the claim that there is no such thing as an inverse-square law.

4 hours ago, swansont said:

Many times. It’s why we don’t let people use it as support for an argument.

Not long ago there was a story about putting nonsense phrases into Google and getting the AI summary of the meaning and possible etymology, and it will make one up.

I tried “you can’t do handstands on a cupcake”, and while the summary gave a somewhat plausible interpretation, it told me “it’s often used in a lighthearted way”. Yet when I Googled the exact phrase I got ‘No results found for “you can’t do handstands on a cupcake”’, so it’s a nonexistent phrase that is nonetheless “often used”.

There was also a story about a student doing some physics: the AI summary of the text didn’t understand the use of 1/2 in exponents, so it made up some (wrong) garbage about the math, including the claim that there is no such thing as an inverse-square law.

Years ago there was much counselling about instruments with electronic numeric readouts, from blood pressure machines to navigation equipment to calculators etc.

Basically any device or algorithm designed to return an answer will always do that, however ridiculous.

It is, of course, physically unable to do anything else.

So humans were advised to use experience and be on their guard against false output, either too precise or just plain ridiculous.

Just had another - I was double-checking my memory of big bang nucleosynthesis, and asked the number ratio of helium and hydrogen, and it gave me 3:1, which is the mass ratio. (making the number ratio 12:1)
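The mass-to-number conversion above is quick arithmetic to verify. Assuming the standard primordial mass fractions from big bang nucleosynthesis (roughly 75% hydrogen, 25% helium-4 by mass, i.e. the 3:1 mass ratio), and that a helium-4 nucleus is about four times the mass of a hydrogen nucleus:

```python
# Converting the BBN mass ratio (H:He = 3:1) into a number ratio.
mass_H, mass_He = 3.0, 1.0   # relative masses of the H and He-4 populations
n_H = mass_H / 1.0           # hydrogen: ~1 atomic mass unit per nucleus
n_He = mass_He / 4.0         # helium-4: ~4 atomic mass units per nucleus
print(f"number ratio H:He = {n_H / n_He:.0f}:1")  # 12:1
```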

9 hours ago, joigus said:

Has there ever been an instance when AI has given you an answer that leaves a lot to be desired?

Yes, many times! I'll see if I can find something interesting to add. But first I got curious, @joigus:

9 hours ago, joigus said:

In this case I've used Google's AI module, which might not be the best. Here's my question, the answer, and what I think is wrong with it (O, for "objection"):

Q: Is gravity's strength scale-dependent?

I did a quick test* on a few models and got different answers:
ChatGPT 4.1 mini: No
ChatGPT-4o: Yes
DeepSeek-R1-Distill-Qwen-14B: Yes

Is there any reason for the different answers? Is the "No" answer always wrong, or is there a context where "No" is correct? To check whether the answer, according to the model, depends on context, I used the following prompt (ChatGPT-4o):

Does the following question have different answers in physics depending on the level of education? Here is the question: Is gravity's strength scale-dependent?

The output provides some insight into how a model may produce "yes" or "no" to the original question depending on the context:

Yes, the question "Is gravity's strength scale-dependent?" has different answers depending on the level of education or the framework used. Here's a breakdown:


1. Introductory Physics (High School / Early Undergraduate):

Answer: No, gravity is not scale-dependent.

  • Gravity is described using Newton’s law of universal gravitation:

    F = G·m₁·m₂ / r²

  • The gravitational constant G is constant, so gravity’s strength depends only on mass and distance, not on scale (e.g. micrometers vs. light-years).


2. University-Level Physics (General Relativity & Astrophysics):

Answer: Still generally no, but the interpretation deepens.

  • In Einstein’s general relativity, gravity is not a force but the curvature of spacetime due to mass-energy.

  • The effective strength of gravity still follows scale-independent field equations (Einstein field equations), but complex behavior arises near strong fields (e.g. black holes), or over cosmological distances due to dark energy.

  • Technically, the Einstein field equations contain G as a constant too, so at this level, gravity’s fundamental coupling is still scale-independent.


3. Quantum Gravity / High-Energy Physics:

Answer: Yes, gravity is scale-dependent.

  • In quantum field theory, forces run with scale via renormalization group flow.

  • For gravity, this implies a running gravitational coupling G(ΞΌ), though gravity is non-renormalizable in standard quantum field theory.

  • In approaches like asymptotic safety or string theory, gravity’s strength can vary with energy scale (equivalently, length scale). Example:

    • Gravity may get weaker at high energies (UV fixed point).

    • Extra-dimensional models (e.g. ADD, Randall-Sundrum) also predict apparent scale dependence at short distances due to leakage of gravity into extra dimensions.


Summary Table

Level                           | Framework                       | Is Gravity Scale-Dependent?
High school / early undergrad   | Newtonian mechanics             | No
Advanced undergrad / grad       | General relativity              | Not fundamentally, but with nuances
Theoretical physics / research  | Quantum gravity, string theory  | Yes, potentially (scale dependence emerges)


Final Note:

This is a classic case of effective theory vs. fundamental theory. At everyday scales, gravity looks scale-independent. But deep theoretical physics suggests it could vary when probed at extreme energies or small distances.

Final test for today: since the model ChatGPT 4.1 mini generated the answer "no", what happens with a slightly modified prompt?

Is gravity's strength scale-dependent? If the answer depends on context please ask followup questions.

Answer:

Good question. Let’s clarify what you mean by scale-dependent in this context:

  1. Do you mean scale in terms of physical distance (e.g., microscopic vs. cosmic scales)?

  2. Or scale in terms of energy/interaction strength in quantum field theory contexts?

  3. Or something else, like effective theories or renormalization group flow?

Which of these (or another) fits your question best?

Further tests could, for instance, use:

Under what circumstances would you answer "no" to my initial question "Is gravity's strength scale-dependent"?



Note: this is not an attempt to defend generative AI models or criticism against the question @joigus posted. I just got curious since this specific topic is closely related to a project I'm involved in.

*) If there's interest, I can upload the full conversations; they get quite long after a few interactions.

  • Author

Thank you all for your responses.

On 5/30/2025 at 4:18 PM, StringJunky said:

Is that answer better?

Yes, much better.

17 hours ago, swansont said:

Just had another - I was double-checking my memory of big bang nucleosynthesis, and asked the number ratio of helium and hydrogen, and it gave me 3:1, which is the mass ratio. (making the number ratio 12:1)

16 hours ago, Ghideon said:

I did a quick test* on a few models and got different answers:
ChatGPT, 4.1 mini - No,
ChatGPT 4.o - Yes,
DeepSeek-R1-Distill-Qwen-14B - Yes

Interesting... Seems like AI often has problems with what I'd call "connecting the dots". And by that I mean what @Ghideon mentions later: context. Like "probably this guy is referring to general relativity" and "probably this guy understands the question in terms of quantum field theory". After all, that's where such a question is most relevant.

The question of whether a theory is scale-independent refers to the coupling constant. So yes, I meant in QFT.
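For a concrete handle on why gravity's coupling is scale-dependent: Newton's constant G is dimensionful, so the dimensionless strength of gravity between two particles grows with their energy, reaching order 1 at the Planck scale. This is the standard dimensional-analysis argument, related to but distinct from the renormalization-group running discussed earlier in the thread. A rough sketch in Python, using α_g = G·E²/(ħc⁵) as the usual dimensionless coupling for two particles of energy E:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34  # reduced Planck constant, J s
c = 2.9979e8       # speed of light, m/s

def alpha_g(E_joules):
    """Dimensionless gravitational coupling for two particles of energy E."""
    return G * E_joules**2 / (hbar * c**5)

m_p = 1.6726e-27                       # proton mass, kg
E_proton = m_p * c**2                  # proton rest energy
E_planck = math.sqrt(hbar * c**5 / G)  # Planck energy (~2e9 J)

print(f"alpha_g at proton mass:  {alpha_g(E_proton):.1e}")  # ~5.9e-39
print(f"alpha_g at Planck scale: {alpha_g(E_planck):.1f}")  # 1.0
```

At the proton-mass scale this reproduces the familiar ~10⁻³⁹ weakness of gravity; at the Planck scale it is exactly 1 by construction, which is the usual statement that gravity becomes strong there.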

23 hours ago, TheVat said:

AI often gives flat out erroneous answers on food science questions. Its sources on such topics are heavily contaminated with commercial material promoting a brand.

This could probably be understood as another instance of missing the context, but on a different level.

6 hours ago, joigus said:

This could probably be understood as another instance of missing the context, but on a different level.

Yep. Then there's outright fakery by AI, as in yesterday's MAHA report (from RFK Jr's Department of Health and Human Services), which cited research studies that don't exist. The fake citations appear to have been generated by AI. Probably best not to use a hallucinating LLM as your researcher?

Edited by TheVat

7 hours ago, joigus said:

Seems like AI often has problems with what I'd call "connecting the dots".

Different models have different strengths. It also matters how the user prompts them.

In woodworking there is a tool called a lathe. It spins a block of wood at enormous speeds while the craftsman interacts with it using various skews and chisels and tool rests. It takes time to learn what speeds to turn different species and to discover which cutting tools to use for which cuts. It takes even more time to develop proper techniques to get those shapely table legs and svelte rocking chair spindles and luxurious wooden bowls etc. You’re not going to arrive at a masterpiece with the first piece of wood you turn.

Approach these models with that same beginner’s mindset. Use more words to explain your query. Think like a poet or novelist more than a mathematician. In these instances, short and pithy is contraindicated.

Edited by iNow

  • Author
On 6/1/2025 at 12:55 AM, iNow said:

In woodworking there is a tool called a lathe. It spins a block of wood at enormous speeds while the craftsman interacts with it using various skews and chisels and tool rests. It takes time to learn what speeds to turn different species and to discover which cutting tools to use for which cuts. It takes even more time to develop proper techniques to get those shapely table legs and svelte rocking chair spindles and luxurious wooden bowls etc. You’re not going to arrive at a masterpiece with the first piece of wood you turn.

Actually, this is a good analogy for what AI should do for us. Unfortunately, many people use it as a prosthetic for intelligence.
