
All Activity


  1. Past hour
  2. This is not at all clear to me (and I’m thinking you meant latitude, since we don’t have time zones based on altitude). If the sun is overhead at one location over a flat surface, it would not be overhead at some other location some distance away. This is the reason we have time zones. Why wouldn’t the sun rise and set on a flat earth?
  3. But the underlying issue is the datasets. The algorithm can’t discern the veracity of information; it relies on what it’s fed, and those choices are made by humans. The “AI” isn’t intelligent. It’s not thinking. It’s just a fancy search engine.
  4. I'm sure that's what confused Hitler when he read Nietzsche... A biased mindset, as you've admitted. Science and philosophy are two sides of the same coin: philosophy identifies the bias and science drives around the chicane. Now, will you please start a new topic on firmer ground, because this race has run its course...
  5. Do you sleep with your rope?
  6. 1 - Your observation is correct. While stating that I wanted to reduce bias, especially materialist bias, I indicated that removing all forms of subjectivity was the way forward in dealing with the bias issue. This was an incorrect statement on my part. I should have said “controlling” this form of subjective thinking instead of “eliminating” all forms of subjective thinking. I also incorrectly stated that philosophy should play a limited role in science, but I am backing away from this statement as well, as it may very well be the discipline that helps us navigate through subjective thinking with rigour and rationality. 2 - I agree also that it is mostly generalities, but it nonetheless reflects my mindset on the subject matter. 3 - If we find it important to study the subjective nature of reality, then science, through philosophy, will have to find a way of doing so, and an improved way compared with the tools in use in other sciences.
  7. I think perhaps the best use for this tool is in understanding the languages of other species, and perhaps our best chance of meaningful dialogue if the aliens do get in touch. People trying to look smarter than they are always trip themselves up, because it's only a tool if they know how to use it. Both will evolve... 😉
  8. Today
  9. Most of the current LLMs are not researching anything they write.
  10. So there could never be a random event and this would apply to all thought processes too? Is it possible to apply logic to that hypothesis? Could logic also be defined by randomness? Don't they say you can prove nothing by logic but that you can (according to its rules) disprove a hypothesis?
  11. On a flat Earth, there would be no day and night cycles and no time zones with different times according to altitude and longitude, nor would there be different seasons. There would be no polar night and polar day, and no aurora borealis. The poles only make sense on spherical objects anyway. On a flat Earth, the distance from lat1,long1 to lat2,long1 would be the same as from lat1,long2 to lat2,long2, and the four points would form a perfect rectangle (or possibly a square); on a globe they can't, because the east-west distance between two meridians shrinks towards the poles (a quick numerical check of this is sketched after this list).
  12. Yes, as @KJW says, it is obvious you can't do this on land, because, durrh, the ground is bumpy! That's the sort of typically stupid answer you can get from ChatGPT if you don't apply your own critical faculties. Even on water it will be hard to do, due to waves, currents, the effect of gusts of wind on whatever floating objects you use, etc. But the longer the distances you choose, the clearer the result will be. You may note that most of the suggestions people have made, including my own about Dover and France, rely on much larger distances than 1 km, to make the effect more obvious. But the whole flat Earth thing is unbelievably silly. Sailors in the ancient world were aware the Earth was not flat. Eratosthenes (the Greeks were a seafaring nation) measured its circumference - and got it more or less right - around 200 BC, for God's sake! (A back-of-the-envelope version of his calculation is sketched after this list.)
  13. With ChatGPT v3.5, you can ask the same question in two different ways or in two different languages and get completely different answers. The result is completely unreliable, and even dangerous if one is not aware of how it works and believes everything without any doubt (like the typical people using it).
  14. Actually, references should ALWAYS be quoted. It's no different a requirement for LLMs or a theoretical proper AI than for a human researcher. Evidence, evidence, evidence.
  15. Yes, I think they either programmed GPT for sheer speed or don't have the training database set up in such a way that it can find actual references. You can't really store links and similar data easily using the default token system. Beyond a few pieces they're all too random. Did see where it Rickrolled one guy though, so who knows.
  16. There are free AIs out there that do this, for example Perplexity.
  17. It kind of goes beyond a mere interpretation though - SD implies that there’s no measurement independence, i.e. the experimenter isn’t actually free to choose his setup as he wishes. There will always be a prior correlation, no matter how you set up your experiment.
  18. Yeah, honestly getting the actual source and not a probabilistic hallucination would not be that much additional code/memory. It's like the math issue. It's not hard for a computer to do math correctly, but someone still needs to be arsed to program in the ability.
  19. Assuming that is iron and it is insoluble, it is going to be some kind of iron oxide or hydroxide. Most methods to quantify them (that I know of) are not really suitable for DIY testing. You could open up your (used) filter and/or get a sample upstream of the filter (if you have a bypass valve) to check whether you have got visible turbidity (use a clear glass and a white background to check for discoloration). Typically, municipalities also provide water testing (for a fee). A way to deal with that (other than replacing lines) is to use a backwash filter, I believe.
  20. The Brave browser version puts the references at the bottom, like Wiki.
  21. Which model one uses and how one prompts it are extremely relevant. Not all LLMs are GPTs, nor are all GPTs at the same version, nor trained on the same dataset(s). So are approximately half the voting populace.
  22. Our water is not well water but comes from the Pasco County, Florida municipal water supply, which is alleged to meet the EPA requirement of 0.3 mg/L, which in Florida is the same as 300 mcg/L. Like the quote, one of my favorites: “Man’s most valuable trait is a judicious sense of what not to believe.” - Euripides. I have hundreds more, but I’ll spare you.
  23. Interesting use of “only”. The “past few centuries” encompasses post-Newtonian physics, cosmology, a fair amount of geology, most of chemistry and all of modern biology; IOW, the bulk of science. The author takes the same approach as you: “mind” as a proxy for all of science, and no concrete examples of how these alternate approaches would lead to success or how this “proactive role for human consciousness” would have any impact on any other fields of study. Since this is just a repetition, it does nothing to illuminate the issue or answer any questions.
  24. Yesterday
  25. Yet the article seems to address entirely different issues from those you have brought forth so far. For starters, you have argued against bias in science, yet this article suggests that subjectivity, i.e. a major source of bias, needs to be included, the reasoning being that it is an integral part that allows... It is mostly a philosophical treatise and there is unfortunately not a lot on the practicalities of how it can or should be implemented. Also, it deals with a high-level idea of information and, from what I see, tries to include thoughts that are closer to social science methodologies. Unfortunately, it does not seem that this approach has been demonstrated to provide good applications in natural sciences (perhaps aside from more abstract areas such as information theory?).
  26. Well, we've all seen on this forum ample evidence of that. This claim not only seems true, but is also both very funny and a timely puncturing of the bubble of hype surrounding these verbose and fundamentally unintelligent programs. I realise that AI encompasses a far wider scope than LLMs but, as they stand today, LLMs look to me pretty meretricious. It may be that their chief legitimate use is in collating references for the user to determine, for himself, which ones are good and which ones are not, i.e. just a superior kind of search engine.
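
A quick numerical check of the flat-Earth geometry point made above (the post about the lat/long "rectangle"). This is a minimal Python sketch of my own, not anything posted in the thread; it uses the haversine great-circle formula and an assumed mean Earth radius of 6371 km to show that, on a globe, the east-west distance between two fixed meridians shrinks as you move towards a pole, so four points sharing two latitudes and two longitudes cannot form a rectangle:

    import math

    EARTH_RADIUS_KM = 6371.0  # mean Earth radius (assumed value)

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two (latitude, longitude) points given in degrees.
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dlat = math.radians(lat2 - lat1)
        dlon = math.radians(lon2 - lon1)
        a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
        return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

    # East-west span between the meridians at 0 and 10 degrees longitude:
    print(round(haversine_km(10, 0, 10, 10)))  # ~1095 km near the equator
    print(round(haversine_km(60, 0, 60, 10)))  # ~556 km at 60 N: the meridians have converged

    # On a flat map with straight, parallel meridians the two spans would be equal,
    # and the four corner points would form the perfect rectangle described above.

On a flat Earth those two printed numbers would have to match; the fact that surveyed east-west distances really do shrink with latitude is exactly the kind of observation that post is pointing at.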
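
And a back-of-the-envelope version of Eratosthenes' measurement mentioned above. The 7.2 degree shadow angle, the 5,000-stadia Syene-Alexandria distance and the 157.5 m stadion are the figures conventionally quoted in modern reconstructions (the true length of his stadion is uncertain), so treat this as an illustration rather than an exact historical calculation:

    # Conventional figures for Eratosthenes' measurement (assumed, not exact):
    sun_angle_deg = 7.2          # noon shadow angle at Alexandria when the Sun is
                                 # directly overhead at Syene (about 1/50 of a circle)
    syene_to_alexandria = 5000   # distance in stadia, as usually cited
    stadion_m = 157.5            # one common modern estimate of the stadion, in metres

    circumference_stadia = (360.0 / sun_angle_deg) * syene_to_alexandria
    print(circumference_stadia)                       # 250000.0 stadia
    print(circumference_stadia * stadion_m / 1000.0)  # ~39375 km, vs ~40075 km today

Even with the uncertainty in the stadion, the answer lands within a few percent of the modern equatorial circumference, which is why the post can say he got it "more or less right".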