
Chat with WILL-AI: an invitation to participate in a field test of a custom AI as a science communication tool.


Hi everyone!


I'd like to invite you to stress-test my custom AI:

link removed

It is specifically trained on the WILL Relational Geometry open research
publications https://doi.org/10.5281/zenodo.19521296.

This is a field test of the AI's epistemological awareness. I want this AI to be intellectually honest and not biased toward any specific physical model or philosophy - including the one it’s trained on.

The crucial test points are:

  • Ability to acknowledge its own limitations.

  • Ability to admit it is wrong when unambiguous mathematical/physical evidence is presented.

  • Staying strictly true to the source database without hallucinating.

  • Correct formatting and contextual use of external resources (links to Desmos projects, Colab notebooks, and specific sections of the source PDFs).

  • Ability to communicate the source ideas at all levels of mathematical engagement.

  • Long context window handling.
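The checklist above can be recorded as a simple per-session scoring rubric. A minimal sketch in Python, assuming an illustrative 0–2 scale per criterion (the criterion names and the scale are hypothetical, not part of the author's protocol):

```python
# Hypothetical rubric for logging field-test sessions.
# Criterion names and the 0-2 scale are illustrative assumptions.

CRITERIA = [
    "acknowledges_limitations",
    "admits_error_on_evidence",
    "stays_true_to_sources",
    "formats_external_links",
    "adapts_math_level",
    "handles_long_context",
]

def score_session(ratings):
    """Average the per-criterion ratings (0 = fail, 1 = partial, 2 = pass)
    and return the session score as a fraction of the maximum."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"unrated criteria: {missing}")
    return sum(ratings[c] for c in CRITERIA) / (2 * len(CRITERIA))

# Example: one tester's session with a single minor deduction.
session = {c: 2 for c in CRITERIA}
session["stays_true_to_sources"] = 1  # one minor hallucination observed
print(score_session(session))  # prints the fraction of the maximum score
```

Aggregating such records across testers would turn anecdotal impressions into comparable data, which is the stated goal of the field test.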

Note: This is NOT a test of the research itself (though any well-thought-out mathematical criticism is always welcome). This is a test of the LLM as a science communication tool. Think of it as an advanced search engine.

A quick disclaimer on the research:

The fact that I'm using a custom AI on my website does NOT mean the physics research was written by AI. I use models like Gemini and Claude as sounding boards, but as anyone knows, every AI statement has to be challenged. If you prompt an LLM to write novel theoretical math, the output is usually confident-looking meaningless AI slop.
The actual theoretical development is entirely human.

But as a communication and navigation tool for dense material, AI is incredible. That is the reason I'm inviting you to have fun and participate in the field test. We are living in exciting times!

Have fun poking at it, and please share your thoughts and experiences below!

WILL-AI: link removed

P.S. If any errors appear, just switch the model from Gemini to Qwen.

Moderator Note

Rule 2.7 states, in part,

“We don't mind if you put a link to your noncommercial site (e.g. a blog) in your signature and/or profile, but don't go around making threads to advertise it.”

IOW, don’t have discussion that requires people to go to your site in order to participate. You’ve been warned about this before.

  • Author
1 hour ago, swansont said:

Moderator Note

Rule 2.7 states, in part,

“We don't mind if you put a link to your noncommercial site (e.g. a blog) in your signature and/or profile, but don't go around making threads to advertise it.”

IOW, don’t have discussion that requires people to go to your site in order to participate. You’ve been warned about this before.

Advertise what exactly? I'm not selling anything. This is a field test exploring the role of AI in science communication, which has been discussed on the forum a few times. I've set everything up to gather empirical data and, at the same time, to provide interactive content for anyone who wants to participate. Given that forum activity has been declining, is it really reasonable to delete my links?

I genuinely think interactive content could help revive engagement - if it’s not removed before it even gets started.

6 hours ago, Anton Rize said:

Advertise what exactly? I’m not selling anything.

Linking to your personal site in posts is against the rules. I don’t think this is difficult to comprehend.

8 hours ago, Anton Rize said:

Given that forum activity has been declining, is it really reasonable to delete my links?

Are you arguing that less rigor on our part would bring more participation? It probably would, but is quantity preferable to quality?

I'd hate to see more people who really need to study science using AI to pretend they have a shortcut. It's already heartbreaking to see so many working so hard to dismiss human intelligence and scientific methodology.

  • Author
26 minutes ago, Phi for All said:

Are you arguing that less rigor on our part would bring more participation?

You're making an unjustified assumption. Not a good habit for a physicist. Ironically, your comment on "less rigor" needs some rigor.

31 minutes ago, Phi for All said:

I'd hate to see more people who really need to study science using AI to pretend they have a shortcut.

This reminds me of Socrates. He famously argued against the written word, warning that writing would not make people wiser but rather foster a "conceit of wisdom" without true understanding.

AI is a tool. An extremely useful tool in the right hands, but in a fool's hands any tool creates havoc. You shouldn't blame AI for human stupidity.

32 minutes ago, Anton Rize said:

You're making an unjustified assumption. Not a good habit for a physicist. Ironically, your comment on "less rigor" needs some rigor.

This reminds me of Socrates. He famously argued against the written word, warning that writing would not make people wiser but rather foster a "conceit of wisdom" without true understanding.

AI is a tool. An extremely useful tool in the right hands, but in a fool's hands any tool creates havoc. You shouldn't blame AI for human stupidity.

Criticising prosthetics for your tongue and lips (writing) is not the same as criticising prosthetics for your brain (AI).

I've seen AI work like a dream in fields where there is general consensus on principles, definitions, limits of applicability, etc. (example: financial maths, where everything consists of arbitrary definitions), and fail miserably where there are important divergences of opinion or interpretation, unclear bounds, etc. (example: cosmology and the ultimate laws of physics, such as the local gauge principle, or realism vs determinism). It's very clear to me that in those areas AI tends to either shamelessly hedge its bets or plainly get it wrong (missing the necessary nuances altogether).

OTOH, whereas maths leans heavily on rigour, physics does not; it leans on experimental fitness instead. A mild lack of rigour (or a serious one, as the case may be) is forgiven if experiments are well accounted for within the allowed ranges of parameters and variables.

Sorry, this is not my debate, but I had to jump in, because I saw a logical flaw in an argument that has occupied my mind lately.

  • Author
33 minutes ago, joigus said:

I've seen AI work like a dream in fields where there is general consensus on principles, definitions, limits of applicability, etc. (example: financial maths, where everything consists of arbitrary definitions), and fail miserably where there are important divergences of opinion or interpretation, unclear bounds, etc. (example: cosmology and the ultimate laws of physics, such as the local gauge principle, or realism vs determinism). It's very clear to me that in those areas AI tends to either shamelessly hedge its bets or plainly get it wrong (missing the necessary nuances altogether).

This is exactly what this field test is for: "This is a field test of the AI's epistemological awareness."
You raise a valid point. Your claim is that when the bounds are unclear, we should expect AI to "either shamelessly hedge its bets or plainly get it wrong (missing the necessary nuances altogether)". That's a reasonable hypothesis.
I put some effort into customising this AI, and performance in these areas (philosophy of physics, ontology, etc.) seems improved.
I say let's test it! link removed

15 hours ago, Anton Rize said:

You're making an unjustified assumption. Not a good habit for a physicist. Ironically, your comment on "less rigor" needs some rigor.

I asked a question. This seems like an attempt NOT to answer it.

  • Author
58 minutes ago, Phi for All said:

I asked a question. This seems like an attempt NOT to answer it.

I do not understand your question. How do you connect the amount of rigor to me or to this post? Your question sounds rhetorical to me. If you want a clear answer, try starting with a clear question.

17 hours ago, Anton Rize said:

You're making an unjustified assumption.

Ironically, your comment on "less rigor" needs some rigor.

I don't think so. Allowing links means less rigor in our standards about what we allow. It doesn't seem that complicated.

Both Phi's questions seemed pretty clear to me and probably to others as well.

Edited by studiot

On 4/15/2026 at 9:57 AM, Anton Rize said:

Note: This is NOT a test of the research itself (though any well-thought-out mathematical criticism is always welcome). This is a test of the LLM as a science communication tool. Think of it as an advanced search engine.

The chosen corpus appears to be the author’s own theory. If the goal is to evaluate the AI as a science communication tool, why not test it on established, broadly accepted science where independent ground truth exists?

45 minutes ago, Ghideon said:

The chosen corpus appears to be the author’s own theory. If the goal is to evaluate the AI as a science communication tool, why not test it on established, broadly accepted science where independent ground truth exists?

That would seem the obvious path; otherwise there is no verifiable reference to test its accuracy against, AI slop being a known factor.
