
swansont

Moderators

Everything posted by swansont

  1. Glad you get a laugh out of it, but when you throw around numbers and ratios without some kind of model to say why they should be meaningful, that’s what it is. CMBR is a thermal spectrum, i.e. a continuum, so I’m not sure what significance individual values would have. Especially without a model (based in physics) behind them.
  2. Why view time as a comparison with energy and mass? Time is coupled with space, so you might account for time in the same way you account for length. We see that we need three spatial and one temporal dimension to describe a lot of the behavior we observe. Time does not transform into energy. Neither does length.
  3. Whose post? You didn’t quote anybody. Who are Americans? So they gained "access to sensitive data of Americans." DOGE is neither. Security does not necessarily mean computer security. One of the elements of safeguarding systems is physical security. But giving access credentials to people who shouldn’t have them is a breach of computer security. DOGE is neither a department nor an agency. That’s part of the problem. Which is a separate issue that assumes that Musk et al. are operating in good faith. Indeed. HR departments already exist. If it’s just like HR, then no new group is needed. It also means legitimate functions will be sabotaged, which has already happened. Who decides what is foolishness and/or fraud? 100% of what? Tell that to the ~150 countries that have deficits: https://countryeconomy.com/deficit All what crap? Profits? Countries aren’t businesses. And a dip in tax receipts is one reason why you might run a deficit.
  4. What happens in 2 years’ time will have a large impact
  5. ! Moderator Note This is a discussion forum, not a blog site. Is there something you wish to discuss here?
  6. ! Moderator Note This is a discussion forum, not a blog site. Is there something you wish to discuss here?
  7. You originally posted this in religion. Is this a discussion of religion, or science?
  8. Trump can’t be prosecuted for illegal things done as part of official actions. That doesn’t extend to Elon, and the statute of limitations for some of these crimes will be longer than four years. Of course, Trump will probably pardon him, and others but that’s not foolproof, since state laws could be involved, and shoddy lawyering could leave loopholes. Trump’s immunity also doesn’t extend to international law. (Consider a scenario where the next president orders his delivery to the world court. Doesn’t matter if it’s illegal. Unlikely, but still possible) But Trump is pissing a lot of people off, including Republicans*, and that number is going to get bigger as the impact of these actions spreads. Not all of them are going to delude themselves into thinking everything is fine. Cutting funding will cause layoffs, and we’ll echo the economic trajectory of five years ago, without having a disaster/alleged hoax to blame it on. Also won’t be at the end of the presidency, so no good times to color the memory. *farmers whose water he wasted in California and others who used to sell to USAID, big pharma who use research from NIH, distillers and brewers who can’t sell to Canada anymore, just in a few weeks. Anyone who benefits from medical research (e.g. cancer). Who’s next?
  9. So it’s pretty much irrelevant to what I was discussing. The LLMs we have access to are crap. Sorry if it wasn’t clear that this was what I was discussing. It doesn’t matter a whole lot if there are really good ones that we don’t have access to. That wasn’t the issue. You said, “Do you think AI can’t teach?” and were given objections to the implication that it can. When there’s an AI available that can do the things listed, you can make a case for it being able to teach.
  10. No, those are not my statements, those are yours. My statement was “true statements are true” which is a tautology. You asked for proof, and I gave it. You can’t rebut it by considering some other statement. As for your statement, give an example of truth contradicting itself.
  11. Where are these perfect LLMs? Not ChatGPT, not whatever Google is using for its summaries. Not Apple. And whatever performance this mystery LLM has (why isn’t it being adopted everywhere?), it doesn’t erase the bad performance of what’s widely used, because what I’ve seen is crap, and I won’t trust them until they’ve demonstrated they aren’t. As I said.
  12. A tautology is a statement that is true by definition (in rhetoric, it says the same thing twice). “True statements are true” seems to fit that. By inspection. If you want a syllogism: a tautology is a statement that is true by definition; ‘True statements are true’ is true by definition; therefore it is a tautology. Do we need this proof? I wouldn’t think so, but YMMV. Can you refute it?
  13. Just the AI part was moved there, because it’s not allowed in mainstream discussions, owing to these veracity issues.
  14. Given the demonstrated performance thus far, LLMs are crap until proven otherwise, IMO. It’s being presented as a solution now, not that it might become a viable solution some day. Until it passes a Turing test, I don’t like deeming it AI anyway. To me, Faux Intelligence is more apt. Some of the examples above are machine learning, which is indeed a different beast (or set of beasts) than LLMs, and I agree it probably would be best to specify the implementation being referenced, much like we specify biology, geology, chemistry, astronomy or physics instead of just saying “science” since there are distinct differences between how they are conducted. Agree. Computers do certain things more quickly than humans, and that’s the advantage being exploited for e.g. pattern recognition using ML or in sorting through piles of data to attempt to summarize something.
  15. ! Moderator Note Not even a theory, but if it’s outdated, is there a point in discussing it? There isn’t enough here to support keeping it open in speculations
  16. If the jokes persist for longer than four hours, consult a physician
  17. Which is not teaching, nor LLMs, AFAIK. Again, this is not teaching. Still not teaching. There are different forms of AI. When you say “It can summarize the notes and provide reading” you are referring to LLMs which is not necessarily the same set of algorithms as a program that has some other task. And being good at one task is not a valid argument that it will be good at a different task (just like not all humans would make good teachers) Why not just use Khan Academy then? Why throw a layer of crap into the mix? Self-study comes from some source material. Why not just use that? These two things are contradictory. If it can’t do the job, it’s not good enough.
  18. You posted this in classical physics. If it were in speculations, I’d ask you for some evidence or a model. Absent that, it’s a WAG and we don’t have a WAG forum. If one thing pushes on another, that other thing pushes back. Momentum changes. You’d have things slowing down because of spacetime pushing on them. That throws Newton’s first law out the window.
  19. How would that work? Space-time is geometry. And pushing has implications for e.g. momentum, which we don’t observe.
  20. It can do a really poor job of summarizing. I don’t see how that means it can teach. Can it explain concepts? Can it figure out why an explanation doesn’t work for some students, figure out what the misconception is, and come up with alternate explanations? Give examples, because I think you’re overestimating the capabilities of LLMs. Can it answer questions that aren’t part of its “training”? Google tried to excuse the poor performance of its AI on novel questions. Can it figure out a poorly-phrased question, which you will get from students who don’t understand enough to explain what they don’t know?
  21. Concentrating the rays can work if the lens or mirror array has a larger area than the collector. Mirrors are used in thermal solar, but lensing has the problem of what happens with off-axis rays - unless the system tracks the sun, you might miss the collector. (A rough numeric sketch of this geometry follows after this list.)
  22. ! Moderator Note From Rule 2.13: “Since LLMs do not generally check for veracity, AI content can only be discussed in Speculations. It can’t be used to support an argument in discussions.”
  23. I’ve seen several treatments of this framed as if cutting corners was a brand-new phenomenon, but yes, the corners are now bigger and easier to cut.
  24. ! Moderator Note This all seems true but we’re a discussion site. What do you want to discuss?
  25. ! Moderator Note Merged and locked because you posted this before and didn’t learn a damn thing from the feedback, such as tagging Musk with dishonesty when he’s not the one making the claims.
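
A minimal Python sketch of the concentration geometry mentioned in item 21: the geometric concentration ratio is just aperture area over collector area, and without sun tracking the focused spot walks off the collector once the sun is sufficiently off-axis. All numbers here are made-up illustrative values, not from the post, and the off-axis estimate ignores the finite size of the focal spot.

```python
import math

# Illustrative values only (assumptions, not from the post).
aperture_area = 2.0      # m^2, lens or mirror array
collector_area = 0.05    # m^2, receiver
focal_length = 1.5       # m

# Geometric concentration ratio: how much the flux on the collector
# can exceed direct sunlight, set by the ratio of areas.
concentration = aperture_area / collector_area
print(f"Geometric concentration ratio: {concentration:.0f}x")

# Rough pointing tolerance: an off-axis angle theta shifts the focal
# spot sideways by about focal_length * tan(theta); the spot misses
# the collector once that shift exceeds the collector radius.
collector_radius = math.sqrt(collector_area / math.pi)
max_offset = math.atan(collector_radius / focal_length)
print(f"Spot misses the collector beyond ~{math.degrees(max_offset):.1f} deg off-axis")
```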
