swansont
Everything posted by swansont
-
3i/Atlas and weak deceleration ?
That’s the citation I gave earlier. The measured anomaly is, as you say, around 10^-9. Your prediction is much larger than that. You seem to be asserting that outgassing or radiation effects don’t account for this measured value, and I’m asking for details.

Yes, that one, but it’s a measurement and not speculation about any ET-related subject matter.
-
An Experimental Report: Verifiable Sensory Curation and Subjective Awareness in a Large Language Model
Are you really this obtuse, or do you just play a simpleton on TV? Queensberry Rules of logic and rhetoric. Please provide a link to these.
-
An Experimental Report: Verifiable Sensory Curation and Subjective Awareness in a Large Language Model
Not aware of these. Perhaps you could favor us with a link? I know there are Queensberry Rules governing boxing (https://en.m.wikipedia.org/wiki/Marquess_of_Queensberry_Rules) and am familiar with the protocols surrounding science. But I’m not the one who has to make the adjustment here.
-
An Experimental Report: Verifiable Sensory Curation and Subjective Awareness in a Large Language Model
We’ve been demanding rigor and objectivity, which are required if one is to accept and validate an idea in science. You don’t get an exception to the requirements.
-
An Experimental Report: Verifiable Sensory Curation and Subjective Awareness in a Large Language Model
This is indistinguishable from a response about belief in a supreme being.
-
An Experimental Report: Verifiable Sensory Curation and Subjective Awareness in a Large Language Model
How would you objectively test for “stochastic inevitability”?
-
3i/Atlas and weak deceleration ?
Your “answer as above” lacks a citation.
-
3i/Atlas and weak deceleration ?
Which you have not stated until now, and you have not provided any links to credible sources reporting it. And as I said, Atlas has a tail, so we know matter is being ejected; a non-gravitational acceleration is expected.

If you want discussion, you have to provide the details, rather than expecting others to go dig for them. When you make a claim, you need to back it up.

Here’s a report of an acceleration no bigger than 3 x 10^-10 au/d^2 (if my math is correct, that’s about 10^-8 m/s^2): https://lweb.cfa.harvard.edu/~loeb/CLV.pdf

So is there any evidence that the usual suspects (outgassing, radiation pressure, Yarkovsky effect) don’t account for it?
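For anyone who wants to check the unit conversion above, here is a minimal sketch using the standard values 1 au = 1.495978707e11 m and 1 day = 86400 s (the 3 x 10^-10 au/d^2 figure is the bound quoted in the post):

```python
# Convert the quoted acceleration bound from au/day^2 to m/s^2.
AU_M = 1.495978707e11   # meters per astronomical unit (IAU definition)
DAY_S = 86400.0         # seconds per day

a_au_per_day2 = 3e-10   # bound quoted in the post
a_m_per_s2 = a_au_per_day2 * AU_M / DAY_S**2

print(f"{a_m_per_s2:.1e} m/s^2")  # ~6e-9, i.e. of order 10^-8 as stated
```

So the "about 10^-8 m/s^2" figure holds up (the exact value is closer to 6 x 10^-9).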
-
UTEM — Unified Theory of Matter Evolution
But dS = dQ/T, so if heat flow in is greater than the flow out, dQ is positive, and so is dS. Entropy increases.
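Writing the sign argument out explicitly, with δQ taken as the net heat into the system:

```latex
dS = \frac{\delta Q}{T}, \qquad
\delta Q = \delta Q_{\mathrm{in}} - \delta Q_{\mathrm{out}} > 0,\quad T > 0
\;\Longrightarrow\; dS > 0
```

With net heat flowing in and absolute temperature positive, the entropy change cannot be negative.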
-
UTEM — Unified Theory of Matter Evolution
Please provide some worked examples of it. And a properly formatted equation.
-
An Experimental Report: Verifiable Sensory Curation and Subjective Awareness in a Large Language Model
It means they (typically) will get moved there, if they aren’t started there. ETA: It’s not what I mean. It was a staff consensus to add this rule.
-
3i/Atlas and weak deceleration ?
Your responses almost count as cryptic messages. It’s like overhearing one half of a phone conversation.

Desperate? Albedo would be important for any model that wants to estimate heating or radiation pressure.

Ruled out? Who did that? Nothing was visible, but not all gases are visible. But this thread is about Atlas, which is definitely outgassing; there’s a tail: “Gemini South Captures Growing Tail of Interstellar Comet 3I/ATLAS” (www.noirlab.edu)

Of which, I will point out AGAIN, you have not shared the details. I can’t help but notice that these are not calculations. They shouldn’t be that involved. (I’ve noted a strong correlation between people who resist sharing such work and people who “don’t do math”.)

People who are to be taken seriously are eager to share and discuss, rather than just lecture. The latter, BTW, is contrary to the rules. If that’s what you want, go start a blog. This is a discussion forum.
-
An Experimental Report: Verifiable Sensory Curation and Subjective Awareness in a Large Language Model
If you ask for clarification, I will try to clarify. But I’ve tried to be clear: most threads about AI go into speculation, per rule 2.13. You were invited to present your evidence. I don’t see where you did. And skeptical? Yes, that comes with the territory, this being a science discussion site. Surely it has become apparent in your time here that this is not a credulous audience.
-
An Experimental Report: Verifiable Sensory Curation and Subjective Awareness in a Large Language Model
I find it hard to fathom the reasoning here. It gets difficult to maintain the assumption of good faith posting with every obtuse comment.
-
An Experimental Report: Verifiable Sensory Curation and Subjective Awareness in a Large Language Model
No, I didn’t. When someone tells you about their own work, it’s not appealing to authority.
-
An Experimental Report: Verifiable Sensory Curation and Subjective Awareness in a Large Language Model
Who is “they”? IBM? The computer conglomerate? That has their own AI model (Granite)?
-
An Experimental Report: Verifiable Sensory Curation and Subjective Awareness in a Large Language Model
I base my notion on what the computer people say, e.g. https://www.ibm.com/think/topics/large-language-models:

“LLMs work as giant statistical prediction machines that repeatedly predict the next word in a sequence. They learn patterns in their text and generate language that follows those patterns.”

I suspect they employ a lot of coders. It would not surprise me that they hire ethicists to help stave off legal problems, but that’s not the same as their input becoming part of the code. You’re free to present actual evidence, of course.
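The “statistical prediction machine” description above can be illustrated with a toy sketch. Everything here is made up for illustration: the three-word vocabulary and the scores are hypothetical stand-ins for what a real model computes over a vocabulary of tens of thousands of tokens.

```python
import math

# Toy "next-word prediction": hypothetical scores (logits) over a tiny
# vocabulary are turned into probabilities via softmax, and the most
# probable continuation is selected. Real LLMs do this step repeatedly,
# with scores learned from training text.
vocab = ["cat", "dog", "mat"]
scores = [2.0, 0.5, 3.1]  # hypothetical logits for "the ... sat on the"

exps = [math.exp(s) for s in scores]
total = sum(exps)
probs = [e / total for e in exps]
next_word = vocab[probs.index(max(probs))]
print(next_word)  # "mat", the highest-scoring continuation
```

No understanding is involved anywhere in this loop; the output is whichever token gets the highest score given the preceding context.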
-
3i/Atlas and weak deceleration ?
It’s not clear what you’re not in agreement with. I think there could be other factors; albedo would be one.

No sense? Solids outgas. The question isn’t whether it’s happening; the question is how much. Outgassing would follow the laws of nature. The issue I brought up is that you haven’t shared the details of the laws of nature you’re applying. Without details, there’s no way for any of us to know how you arrived at this conclusion. The point of a discussion board is to share these details.
-
An Experimental Report: Verifiable Sensory Curation and Subjective Awareness in a Large Language Model
Yes, go talk to them. (though I am a doctor)
-
3i/Atlas and weak deceleration ?
I don’t see anyone arguing against the laws of nature. It’s just that you haven’t articulated the details of the application of these laws. Not here, you haven’t (what you’ve stated elsewhere doesn’t apply). You mentioned “proportional reasoning” once, but, again, you haven’t articulated any details.
-
An Experimental Report: Verifiable Sensory Curation and Subjective Awareness in a Large Language Model
If you have a mountain of evidence, that sounds like a science discussion. If what you have is subjective observation, interpretation, opinion or anecdotes, then you don’t have evidence.
-
An Experimental Report: Verifiable Sensory Curation and Subjective Awareness in a Large Language Model
Apparently. I don’t often participate, but I see the titles and post summaries, and when I see “experiment” associated with an LLM, I’m going to scan it to see why it’s in philosophy (some people try to sneak posts in that should be elsewhere), or whether rule 2.13 applies. In this case, both triggers came into play. You’re offering a philosophical solution to something that’s not a philosophical question; you pretty much ignore any issues about how an LLM works. You might as well have asked about objects of different masses falling at the same speed. A philosophical treatment is a non-starter.
-
In Case You Missed it ?
Not recalling exactly what I googled, but it had to do with leaving shoes behind, and it led me here.
-
An Experimental Report: Verifiable Sensory Curation and Subjective Awareness in a Large Language Model
Where is the philosophy in this? It seems like a not-very-rigorous investigation of an LLM that ignores the obvious (that an LLM is programmed to give plausible-sounding answers). Per the rules, this belongs in Speculations.
-
In Case You Missed it ?
I’ve seen this suggestion. It was called “rapture trolling”