Refutation of AIs regarding gravity that is independent of mass.

After discussing with AIs over several weeks, they finally agreed with me that gravity is independent of mass, which eventually proves that the core isn't as dense as people think. I will post some parts of it when I am not busy later. 👌. Be prepared, because it's quite long.

26 minutes ago, Lan Todak said:

After discussing with AIs over several weeks, they finally agreed with me that gravity is independent of mass, which eventually proves that the core isn't as dense as people think. I will post some parts of it when I am not busy later. 👌. Be prepared, because it's quite long.

It will be moved to the trash, as such content is against the rules.

25 minutes ago, Lan Todak said:

After discussing with AIs over several weeks, they finally agreed with me that gravity is independent of mass, which eventually proves that the core isn't as dense as people think. I will post some parts of it when I am not busy later. 👌. Be prepared, because it's quite long.

Can you continue discussing this with AI by yourself? I'm more interested in what people who study science have to say about these things. We know the AI is more interested in pleasing you than it is in using actual science. Even if it were short, I wouldn't be interested. "Quite long" is probably a "framework" for obfuscation.

  • Author
14 hours ago, swansont said:

It will be moved to the trash, as such content is against the rules.

I've seen many people post AI content here. How is such content against the rules?

14 hours ago, Phi for All said:

Can you continue discussing this with AI by yourself? I'm more interested in what people who study science have to say about these things. We know the AI is more interested in pleasing you than it is in using actual science. Even if it were short, I wouldn't be interested. "Quite long" is probably a "framework" for obfuscation.

They will please you if you don't filter the content. You can ask them to be unbiased toward your content and preferences, and they will give you a direct, clean answer. Btw, I don't think I will continue 😁.

15 hours ago, Phi for All said:

Can you continue discussing this with AI by yourself? I'm more interested in what people who study science have to say about these things. We know the AI is more interested in pleasing you than it is in using actual science. Even if it were short, I wouldn't be interested. "Quite long" is probably a "framework" for obfuscation.

Yeah, it will be a "framework" for sure. 😁

I propose to remove the "work" bit from anything speculative coming from AI. It would be just a "frame". That is, the "framework" without the "work".

"I've been given a frame to talk about this" would be at least honest.

2 hours ago, Lan Todak said:

I've seen many people post AI content here. How is such content against the rules?

Like other sites, we're adapting to the sudden surge in AI-generated content. We think an LLM can be used as a writing aid by those who can't always put their thoughts down as eloquently as they'd like. Many people, however, use it to draw conclusions, search for evidence, and cite their sources, all of which we've proven the AI is incapable of doing honestly.

2 hours ago, Lan Todak said:

They will please you if you don't filter the content. You can ask them to be unbiased toward your content and preferences, and they will give you a direct, clean answer. Btw, I don't think I will continue 😁.

Even this small interaction is much more valuable to me than anything you might do with AI.

I have to admit that I'm heavily biased against AI in general. In the US, preemptive laws are in place that prohibit us from regulating AI companies. I've heard that AI in 2026 will use more water than all the bottled water companies in the US. And it just pisses me off in general since it's obvious the billionaire class hopes AI will eliminate the need to employ actual people. To me, AI is like a fascinating toy that kills people by making them useless.

13 minutes ago, Phi for All said:

AI is like a fascinating toy that kills people by making them useless.

More like 'group think'.
People will never be useless; we are needed to be exploited.
Billionaire owners of these sites are attempting to 'force-feed' us their world view
(which is: everyone is there to work for me and make me more money).

AI is simply the latest institution in a long line, designed so that a privileged few can take advantage of the many.

4 hours ago, Lan Todak said:

I've seen many people post AI content here.

You should also see lots of moderator notes telling them it's against the rules if it was used to make content, and lots of such posts in the trash.

4 hours ago, Lan Todak said:

How is such content against the rules?

Read the rules. 2.13, in particular

On 12/21/2025 at 1:57 AM, Lan Todak said:

they finally agreed with me that gravity is independent of mass, which eventually proves that the core isn't as dense as people think.

The problem with LLMs is that there are many models, even under the same name, with different computing power and different capabilities.

Let me give you an example from last week.

I launched ChatGPT and asked it to convert a piece of C/C++ code (brute-force calculation) into a mathematical function. It was the following piece of code:

#include <stdio.h>
#include <stdlib.h>

int calc_total( int digits ) {
	int result = 10;
	while( digits-- > 1 ) {
		result *= 10;
	}
	return( result );
}

int extract_digit( int value, int digit ) {
	int result = value;
	//printf( "value %d digit %d ", value, digit );
	while( digit-- > 0 ) {
		result /= 10;
	}
	//printf( "result %d\n", result % 10 );
	return( result % 10 );
}

int calc_checksum( int value, int digits ) {
	int result = 0;
	for( int i = 0; i < digits; i++ ) {
		int digit = extract_digit( value, i );
		if( digit == 0 ) return( -1 );
		result += digit;
	}
	return( result % 10 );
}

int main( int argc, const char *argv[] ) {
	if( argc == 2 ) {
		int digits = atoi( argv[ 1 ] );
		if( digits >= 2 ) {
			int total = calc_total( digits );
			printf( "total possible: %d\n", total );
			int count = 0;
			for( int i = 0; i < total; i++ ) {
				if( calc_checksum( i, digits ) == 0 ) {
					count++;
				}
			}
			printf( "checksum possible: %d\n", count );
		}
	}
	return( 0 );
}

If you (reader, whoever you are) are an expert in mathematics/physics, stop reading the text below right away and try to solve this problem yourself as a mathematical puzzle.

If you find this difficult, imagine an LLM doing this.

At first, ChatGPT told me f(n)=10^n/10.

I answered: NO! You did not take into account that zero is ignored!

It told me: my mistake, the first zero is ignored, and gave yet another function, with just the first zero skipped (sigh!).

I answered: NO! All zeroes are ignored, not just the first one!

It told me: my mistake, f(n)=9^n/10 is the right answer.

I answered: NO! For n=2, 9^2/10 = 81/10 = 8.1, which is a fraction! How can a fraction be the answer?! For n=2 the correct answer should be 9. For n=3, 72.

It agreed with me, and used... some rounding operator...

I lost patience and asked: what version are you?

It answered: I am ChatGPT-2.

What? WHAT?!

I had never seen ChatGPT-2. How on earth was it here, when the site normally runs ChatGPT v4 and v4-mini, and the oldest was v3.5...

This time it completely locked up, and the only answer I could get was f(n)=9^n/10.

It was impossible to get past it. Deadlock.

A day later, at night, when I expected the ChatGPT servers to be less busy, I asked again what version it was.

It told me: ChatGPT v4 with a v5.2 engine. Let's test it.

And it gave the correct mathematical answer for my C/C++ algorithm. Which was:

The correct answer

f(n)=(9^n+9*(-1)^n)/10

Shock, it did it!

ps. And did you manage to do it yourself? I doubt it.

The moral of this story is that you have to be an expert in a given field to detect an LLM's mistakes, because its answers are very credible yet often wrong, and it cannot admit its mistakes unless they are pointed out to it directly; it cannot catch them by itself. It is impossible to use it to come up with something completely new, such as new theories of physics.

How error-prone its answers will be depends on what you ask it: something trivial or something complicated. Asking it for help with the basics of computers carries a low risk of error (provided it's v4/v5).

You have to be very careful about which version is running. Different versions of an LLM have different context window sizes (ask it about its window size and it will tell you): v2-v3.x have 4k tokens, v4 has 16k, and v5 has 16-32k tokens. Once the window size is exceeded, it no longer remembers what was written earlier in the same session. The longer you talk, e.g. for hours, the less it knows about what you wrote at the beginning, and it loses context. The chance of critical mistakes then increases significantly.

Receiving and writing code consumes tokens very quickly, so it will soon start writing nonsense. A few hundred lines and you're already outside the window size.

  • Author
On 12/22/2025 at 3:20 AM, Phi for All said:

Like other sites, we're adapting to the sudden surge in AI-generated content. We think an LLM can be used as a writing aid by those who can't always put their thoughts down as eloquently as they'd like. Many people, however, use it to draw conclusions, search for evidence, and cite their sources, all of which we've proven the AI is incapable of doing honestly.

Even this small interaction is much more valuable to me than anything you might do with AI.

I have to admit that I'm heavily biased against AI in general. In the US, preemptive laws are in place that prohibit us from regulating AI companies. I've heard that AI in 2026 will use more water than all the bottled water companies in the US. And it just pisses me off in general since it's obvious the billionaire class hopes AI will eliminate the need to employ actual people. To me, AI is like a fascinating toy that kills people by making them useless.

This is an unavoidable situation. We can't deny the presence of AI companies. It's up to each country's regulations to protect citizens from them taking our jobs. I don't really know how to handle this situation, but I hope my country will do what's best for our people.

16 hours ago, Sensei said:

The problem with LLMs is that there are many models, even under the same name, with different computing power and different capabilities. [...]

That's not how I usually work with AI. I know they make frequent mistakes, but that doesn't mean we can't use their data. Ask them to define something, then ask them to analyze other things related to that definition. Do they fit? For example: you ask them to define a pulsar, then give them several celestial objects for verification. Can they recognize which one fits the definition? If they successfully execute the task, you're done. If they fail, you can start feeding them logical contradictions between their data and yours. It works. If it doesn't, that's your problem, not theirs. I often use logical fallacies and contradictions to counter an AI's reasoning by forcing it to generate a premise for each definition it has created. After that, I give it trick questions. This is where logical inconsistency shows up. AI can sometimes make mistakes, but not always.

On 12/22/2025 at 5:01 AM, swansont said:

You should also see lots of moderator notes telling them it’s against the rules, if it was used to make content, and lots of such posts in the trash.

Read the rules. 2.13, in particular

Can I just post the summary of my discussion? I don't think that's prohibited either.

47 minutes ago, Lan Todak said:

This is an unavoidable situation. We can't deny the presence of AI companies.

We’re not denying its presence, but we can opt not to use it in certain situations, like where it’s unreliable or unethical.

47 minutes ago, Lan Todak said:

Can I just post the summary of my discussion? I don't think this is prohibited too.

I don't see how that would comply with the rules. We want the thoughts of people. If we have questions or critiques, we want the person whose idea it is to engage with us.

