Everything posted by iNow
-
Thalamic Nuclei Observed Driving Conscious Perception
I’m far more comfortable with this claim than the one asserting a cortex is prerequisite to consciousness, but I believe it too is mistaken. Jellyfish, for example, also don’t have a CNS but very much seem conscious. Slime molds are another potential example of conscious behavior in the absence of a CNS, though this one is admittedly easier to argue against despite the way they solve mazes and respond “intelligently” to multiple complex stimuli.
-
Thalamic Nuclei Observed Driving Conscious Perception
Thanks for confirming. According to this view, even an octopus cannot be conscious, so I reject it from the start. They were mistaken IMO. People born without it may have deficiencies of various severities, but they can still function, lead healthy lives, and are very much conscious when they do.
-
Thalamic Nuclei Observed Driving Conscious Perception
As always, your points are clear, consistent, and coherent and it's hard for me to challenge them given their strength. The one item which stands out to me right now, however (and it's possible I'm misinterpreting), is that you seem to be suggesting cortex is required for conscious experience (as do the authors you cited). I am not ready to accept that conclusion myself.
-
Thalamic Nuclei Observed Driving Conscious Perception
I believe my final comment on the post immediately preceding yours touched on a similar theme.
-
Thalamic Nuclei Observed Driving Conscious Perception
It’s a far more reptilian part of our brain. I was surprised too when I first learned it, but it makes intuitive sense. My understanding is sharks can smell a single drop of blood almost half a kilometer away. I was with you until you said solitary. Is that maybe the case? Sure, but sense of self strikes me as one of those things where the whole is greater than the sum of its parts.
-
Summoning the Genie of Consciousness from the AI Bottle
I said nothing about our brains being computers. I suggested they are prediction machines generating certain outputs
-
Summoning the Genie of Consciousness from the AI Bottle
Are our organic minds really meaningfully different in this regard?
-
Summoning the Genie of Consciousness from the AI Bottle
This model hasn’t yet been released, yet you claim you have tested it. An already suspect credibility only further erodes the more you post.
-
Thalamic Nuclei Observed Driving Conscious Perception
I believe the point is that, while the thalamus is critical in organizing all incoming stimuli, olfaction involves even more archaic neural structures and doesn’t flow straight through the thalamus from the start as inputs from essentially every other source do.
-
Good symbolic math AI (split from “Vibe physics” aka why we won’t tolerate AI use)
Implicit in this request is a suggestion that AI has not been involved for years in medical and pharmaceutical research, which is laughably absurd. My point was self-evident, but I do like and respect you so maybe these primers are a helpful place for you to start: https://hai-production.s3.amazonaws.com/files/hai_ai_index_report_2025.pdf https://www.weforum.org/stories/2025/03/ai-transforming-global-health/
-
Is it possible to tell who is DVing?
Hubris is excessive pride and self-confidence. How does lacking it prevent learning? Perhaps you meant its opposite, humility?
-
Thalamic Nuclei Observed Driving Conscious Perception
I’m convinced it plays a key, critical, core, important, one might even say central role. Same, and decades ago one of my neuroscience professors always said something which stuck with me: All roads lead through thalamus.
-
Thalamic Nuclei Observed Driving Conscious Perception
I read it as core role. Important role. Key role. Critical role. Central role isn’t vastly different unless we’re actively looking hard for criticisms to levy… especially since the thread title uses the term drive. “The mother played a central role in driving the family in their minivan.” “Nuh uh! She sits up front, on the left side actually, not in the center!” 🙄
-
Good symbolic math AI (split from “Vibe physics” aka why we won’t tolerate AI use)
I take your point, but AI has been used to discover things like that already for years in the field of medicine.
-
Thalamic Nuclei Observed Driving Conscious Perception
While I’m largely convinced of the conclusion, I wouldn’t lean too heavily on this particular point. Because the test group involved people with specific brain implants, it’s IMO unreasonable to expect a huge sample, but the sample size (N) noted is still pretty small overall, and we should thus limit our desire to generalize.
-
Good symbolic math AI (split from “Vibe physics” aka why we won’t tolerate AI use)
Even those of us who have access to funds don’t generally want to spend $250/month to use models like Gemini 2.5 Deep Think (which is being released to select mathematicians and researchers). See TechCrunch: “Google rolls out Gemini Deep Think AI, a reasoning model…” — Google released its first publicly available “multi-agent” AI system, which uses more computational resources but produces better answers.
-
“Vibe physics” aka why we won’t tolerate AI use
Apologies for taking away from the original intent of the thread and heading off topic. I should’ve kept with my “this conversation is no longer interesting” mindset and left it alone.
-
Good symbolic math AI (split from “Vibe physics” aka why we won’t tolerate AI use)
Think of a pickup truck getting stuck in the mud, maybe in a field on a rural farm somewhere after an overnight rain. Pulling or pushing the truck by hand won’t get you very far, so you amplify your strength using a tool. Here, maybe our tool is a hand winch or a come-along. We get the chains, wrap them around a century-old oak tree 20 yards away, then hook the winch to our truck, which is still just sitting there stuck in the mud. We ratchet it click by click by click for about an hour until, after several breaks wiping away sweat and recovering from exhaustion, we finally get the truck pulled free and extracted from the mud. The winch here maybe represents classic Google.

Perhaps instead, though, we get the big John Deere tractor we happen to have sitting back in the barn. You know… the one we use to seed a thousand acres of corn every season and to till the field before planting soybeans. The tractor is big and beefy and pulls the truck out of the mud far more easily than our old hand winch did or could. That tractor here might represent some of the earlier AI models and GPTs that have seized the imaginations of so many of our brethren.

Now, imagine instead for a moment that your neighbors are also farmers and they too have big tractors and heavy-duty chains. They’re different brands of tractors with different accessories and modifications made for achieving different tasks, but they’re all essentially tractors. You text or call 7 of those farmer neighbors and ask for their help getting your truck out of the mud. Being good salt-of-the-earth farmers, they of course agree to lend a helping hand, and they all come over with their 7 different modded and customized tractors and help pull your truck from the mud. That group of neighbor farmers working alongside you might, in AI terms, reasonably be called a Mixture of Experts model (whatever “reasoning” means, apropos of earlier in this thread, but I digress…).

Same basic query. Same problem to solve. Same general approach of using models as tools to help solve it, but this time you invited more capable and more qualified participants to the party to help solve it all together… sort of like Oppenheimer did on the Manhattan Project. That’s an MoE model. MoEs are a great way to “reason” through more complex issues and questions, and here’s the kicker… We (humans) have now already reached a point where a SINGLE frontier model is so capable and so powerful all by itself that it no longer needs to request help from those 7 farmer friends in order to achieve the same overwhelming output or performance. And they’re only going to keep getting stronger and more capable in a Moore’s Law type fashion. Happy weekend, fellas. 🦾
-
Thalamic Nuclei Observed Driving Conscious Perception
That’s a great write-up, and I agree it absolutely overlaps with the thoughts you’ve been exploring here more recently than in years past. I’m going to take as given that the thalamus seems highly likely to be the initiation point for consciousness. My thoughts then lead to… How far down the evolutionary tree of life can we go before finding organisms that no longer have basic thalamus-like structures, and before finding something that can no longer be considered, in even some rudimentary way, mildly conscious itself?
-
Good symbolic math AI (split from “Vibe physics” aka why we won’t tolerate AI use)
This process is commonly known as mixture of experts. One query calls several different models, and the answer coming back is the best from each. “Best” is of course relative, but the outputs from MoE approaches tend to be significantly better than single-model queries (at least they were until recent models got so massive and capable, with parameter counts growing into the billions upon billions). Depends on the human in my experience.
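The “one query, several experts, keep the best reply” idea above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in: the expert functions and the toy scoring rule are illustrations, not any real model API.

```python
# Sketch of routing one query to several "experts" and keeping the
# highest-scoring answer. All names here are hypothetical stand-ins.

def math_expert(query: str) -> str:
    # Pretend specialist: only confident about arithmetic-style queries.
    return "42" if "sum" in query else "unsure"

def trivia_expert(query: str) -> str:
    # Pretend specialist: only confident about geography trivia.
    return "Paris" if "capital" in query else "unsure"

EXPERTS = [math_expert, trivia_expert]

def score(answer: str) -> int:
    # Toy confidence score: any definite answer beats "unsure".
    return 0 if answer == "unsure" else 1

def best_answer(query: str) -> str:
    # Every expert sees the same query; the best-scoring reply wins.
    return max((expert(query) for expert in EXPERTS), key=score)
```

For what it’s worth, in modern transformer MoEs the “experts” are subnetworks inside a single model, selected per token by a learned router rather than by comparing finished answers, but the intuition of dispatching work to specialists is similar.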
-
Good symbolic math AI (split from “Vibe physics” aka why we won’t tolerate AI use)
SymPy and SageMath are probably worth trying as top-rated open source options.
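To give a taste of the first option: SymPy is a plain Python library, and a symbolic session looks like this (the specific equation is just an illustration):

```python
import sympy as sp

x = sp.symbols("x")

# Solve x**2 - 4 = 0 symbolically.
roots = sp.solve(sp.Eq(x**2 - 4, 0), x)  # [-2, 2]

# Differentiate and integrate symbolically.
deriv = sp.diff(sp.sin(x) * x, x)   # x*cos(x) + sin(x)
antideriv = sp.integrate(2 * x, x)  # x**2
```

SageMath bundles SymPy alongside many other computer algebra systems behind one interface, so it’s the heavier-weight choice of the two.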
-
“Vibe physics” aka why we won’t tolerate AI use
Yes. They’re getting more capable every day, especially when evaluated specifically across the top math and reasoning benchmarks. Even the Chinese models are crushing it:
-
“Vibe physics” aka why we won’t tolerate AI use
I'm rapidly losing my interest in continuing this conversation, but one more potential example of "reasoning" (depending on how one defines it) just came from Microsoft. They've released an autonomous agent under their Project Ire. Paraphrased summary from the articles that hit my feed in the last 24 hours: Their tool automates an extremely difficult task around malware classification and does so by fully reverse engineering software files without any clues about their origin or purpose. It uses decompilers and other tools, reviews their output, and determines whether the software is malicious or benign. They report 98% precision on this task. The system’s architecture allows for reasoning at multiple levels, from low-level binary analysis to control flow reconstruction and high-level interpretation of code behavior. The AI must make judgment calls without definitive validation beyond expert review. Maybe that's not reasoning, though? I guess it depends on one's definition. Cheers.
-
“Vibe physics” aka why we won’t tolerate AI use
So the definition of reasoning is using a pencil to do long division on graph paper? I think that may not cover the term in all its glory, but you're welcome to define it however you want. :)
-
“Vibe physics” aka why we won’t tolerate AI use
Here we do disagree. How are you defining reasoning?