Everything posted by iNow
-
Thalamic Nuclei Observed Driving Conscious Perception
I’m convinced it plays a key, critical, core, important, one might even say central role. Same, and decades ago one of my neuroscience professors always said something that stuck with me: All roads lead through thalamus.
-
Thalamic Nuclei Observed Driving Conscious Perception
I read it as core role. Important role. Key role. Critical role. Central role isn’t vastly different unless we’re actively looking hard for criticisms to levy… especially since the thread title uses the term drive. “The mother played a central role in driving the family in their minivan.” “Nuh uh! She sits up front, on the left side actually, not in the center!” 🙄
-
Good symbolic math AI (split from “Vibe physics” aka why we won’t tolerate AI use)
I take your point, but AI has been used to discover things like that already for years in the field of medicine.
-
Thalamic Nuclei Observed Driving Conscious Perception
While I’m largely convinced of the conclusion, I wouldn’t lean too heavily on this particular point. Because the test group involved people with specific brain implants, it’s IMO unreasonable to expect a huge sample, but the sample size (N) noted is still pretty small overall, and we should thus limit our desire to generalize.
-
Good symbolic math AI (split from “Vibe physics” aka why we won’t tolerate AI use)
Even those of us who have access to funds don’t generally want to spend $250/month to use models like Gemini 2.5 Deep Think (which is being released to select mathematicians and researchers). See TechCrunch, “Google rolls out Gemini Deep Think AI, a reasoning model”: Google released its first publicly available “multi-agent” AI system, which uses more computational resources but produces better answers.
-
“Vibe physics” aka why we won’t tolerate AI use
Apologies for taking away from the original intent of the thread and heading off topic. I should’ve kept with my “this conversation is no longer interesting” mindset and left it alone.
-
Good symbolic math AI (split from “Vibe physics” aka why we won’t tolerate AI use)
Think of a pickup truck getting stuck in the mud, maybe in a field on a rural farm somewhere after an overnight rain. Pulling or pushing the truck by hand won’t get you very far, so you amplify your strength using a tool. Here, maybe our tool is a hand winch or a come-along. We get the chains, wrap them around a century-old oak tree 20 yards away, then hook the winch to our truck, which is still just sitting there stuck in the mud. We start ratcheting it click by click by click for about an hour until, after several breaks wiping away sweat and recovering from exhaustion, we finally get the truck pulled free of the mud. The winch here maybe represents classic Google.

Perhaps instead, though, we get the big John Deere tractor we happen to have sitting back in the barn. You know… the one we use to seed a thousand acres of corn every season and to till the field before planting soybeans. The tractor is big and beefy and pulls the truck out of the mud far more easily than our old hand winch did or could. That tractor here might represent some of the earlier AI models and GPTs that have seized the imaginations of so many of our brethren.

Now, imagine instead for a moment that your neighbors are also farmers and they too have big tractors and heavy-duty chains. They’re different brands of tractors with different accessories and modifications made for achieving different tasks, but they’re all essentially tractors. You text or call 7 of those farmer neighbors and ask for their help getting your truck out of the mud. Being good salt-of-the-earth farmers, they of course agree to lend a helping hand, and they all come over with their 7 different modded and customized tractors and help pull your truck from the mud. That group of neighbor farmers working alongside you might, in AI terms, reasonably be called a Mixture of Experts model (whatever “reason” means, apropos of earlier in this thread, but I digress…). Same basic query. Same problem to solve. Same general approach of using models as tools to help solve it, but this time you invited more capable and more qualified participants to the party to help solve it all together… sort of like Oppenheimer did on the Manhattan Project. That’s an MoE model.

MoEs are a great way to “reason” through more complex issues and questions, and here’s the kicker… We (humans) have now already reached a point where a SINGLE frontier model is so capable and so powerful all by itself that it no longer needs to request help from those 7 farmer friends in order to achieve the same overwhelming output or performance. And they’re only going to keep getting stronger and more capable in a Moore’s Law type fashion. Happy weekend, fellas. 🦾
-
Thalamic Nuclei Observed Driving Conscious Perception
That’s a great write-up, and I agree it absolutely overlaps with the thoughts you’ve been exploring here more recently than in years past. I’m going to take as given that the thalamus seems highly likely to be the initiation point for consciousness. My thoughts then lead to… How far down the evolutionary tree of life can we go before finding organisms that no longer have basic thalamus-like structures, and before finding something that can no longer be considered, in even some rudimentary way, mildly conscious itself?
-
Good symbolic math AI (split from “Vibe physics” aka why we won’t tolerate AI use)
This process is commonly known as mixture of experts. One query calls several different models, and the answer coming back is the best from each. “Best” is of course relative, but the outputs from MoE approaches tend to be significantly better than single-model queries (at least they were until recent models got so massive and capable, with training corpora that kept growing and parameter counts climbing into the billions upon billions). Depends on the human, in my experience.
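As a minimal Python sketch of that fan-out-and-pick idea as described above (query_model and score_answer are hypothetical stand-ins here, not any real API):

from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for real model endpoints; the names are illustrative only.
MODELS = ["model_a", "model_b", "model_c"]

def query_model(name: str, prompt: str) -> str:
    # Placeholder: a real system would call that model's API here.
    return f"{name}: answer to '{prompt}'"

def score_answer(answer: str) -> float:
    # Toy judge; real systems use a learned router or a grader model instead.
    return float(len(answer))

def mixture_query(prompt: str) -> str:
    # Send the same query to every model in parallel...
    with ThreadPoolExecutor() as pool:
        answers = list(pool.map(lambda m: query_model(m, prompt), MODELS))
    # ...then keep whichever answer the judge scores highest.
    return max(answers, key=score_answer)

print(mixture_query("Pull the truck out of the mud"))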
-
Good symbolic math AI (split from “Vibe physics” aka why we won’t tolerate AI use)
SymPy and SageMath are probably worth trying as top-rated open-source options.
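If anyone wants a quick taste before installing anything bigger, a few lines of SymPy show the kind of exact symbolic work these tools handle:

import sympy as sp

x = sp.symbols('x')

# Simplify a trig identity exactly rather than numerically.
print(sp.simplify(sp.sin(x)**2 + sp.cos(x)**2))          # 1

# Differentiate symbolically.
print(sp.diff(sp.sin(x) * sp.exp(x), x))                 # exp(x)*sin(x) + exp(x)*cos(x)

# Solve an equation while keeping radicals exact.
print(sp.solve(x**2 - 2, x))                             # [-sqrt(2), sqrt(2)]

# A definite integral with an exact closed form.
print(sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo)))   # sqrt(pi)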
-
“Vibe physics” aka why we won’t tolerate AI use
Yes. They’re getting more capable every day, especially when evaluated specifically across the top math and reasoning benchmarks. Even the Chinese models are crushing it.
-
“Vibe physics” aka why we won’t tolerate AI use
I'm rapidly losing my interest in continuing this conversation, but one more potential example of "reasoning" (depending on how one defines it) just came from Microsoft. They've released an autonomous agent under their Project Ire. Paraphrased summary from the articles that hit my feed in the last 24 hours: Their tool automates an extremely difficult task around malware classification and does so by fully reverse-engineering software files without any clues about their origin or purpose. It uses decompilers and other tools, reviews their output, and determines whether the software is malicious or benign. They've published a 98% precision figure for this task. The system's architecture allows for reasoning at multiple levels, from low-level binary analysis to control-flow reconstruction and high-level interpretation of code behavior. The AI must make judgment calls without definitive validation beyond expert review. Maybe that's not reasoning, though? I guess it depends on one's definition. Cheers.
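For illustration only, here's a toy Python outline of the decompile-review-classify loop those articles describe. This is NOT Project Ire's actual code; every name and rule here is hypothetical:

# Toy outline of the decompile -> review -> classify pipeline described above.
# NOT Project Ire's actual code; all names and logic are hypothetical.

def decompile(binary_path: str) -> str:
    # Stand-in for invoking a real decompiler on the file.
    return f"pseudo-code recovered from {binary_path}"

def review_behavior(pseudo_code: str) -> list[str]:
    # Stand-in for the model reading decompiler output and flagging behaviors.
    return ["contacts remote host", "modifies system files"]

def classify(behaviors: list[str]) -> str:
    # Stand-in for the final judgment call: malicious or benign.
    suspicious = {"contacts remote host", "modifies system files"}
    return "malicious" if suspicious & set(behaviors) else "benign"

print(classify(review_behavior(decompile("sample.exe"))))  # "malicious" for this toy input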
-
“Vibe physics” aka why we won’t tolerate AI use
So the definition of reasoning is using a pencil to do long division on graph paper? I think that may not cover the term in all its glory, but you're welcome to define it however you want. :)
-
“Vibe physics” aka why we won’t tolerate AI use
Here we do disagree. How are you defining reasoning?
-
“Vibe physics” aka why we won’t tolerate AI use
Fair. I used rhetorical flair instead of peer-reviewed precision. To correct myself from earlier, the core idea is this: There are multiple frontier models that anyone following this space uses daily. There are experimental models, and teams training models, that are available but slightly more difficult for the layperson to access. There are then the people, like we see here on SFN, who are slow to catch up (late adopters vs. early adopters) and who continue using some very old versions of some deeply flawed and outdated models because they're slightly easier to access (or, more likely, due to behavioral friction: they simply go with what they know).

Are we really as far apart on this as it feels? No worries if we are, but I don't feel I'm being in any way extreme or unreasonable with my points. YMMV.

Again... it's unclear to me why you think I disagree with this
-
“Vibe physics” aka why we won’t tolerate AI use
I’m unclear which part you’re disagreeing with. Sometimes answers are wrong, and I’ve repeatedly acknowledged that, along with supplemental detail outlining some of the most common reasons why.

There’s tons of overlap in their Venn diagrams, but they are distinct. LLMs are great at processing and generating natural human language, but tend to suffer when engaged for problem solving. Reasoning models, however, explicitly focus on logical deduction and step-by-step problem solving. Some might argue that reasoning models are just a specialized type of LLM, but I see it as a distinction similar to the one we face in biology when trying to differentiate species: the lines between any two are subjective and arbitrary.

Of note... OpenAI, for example, announced 2 weeks ago that their new experimental reasoning LLM solved 5 of 6 problems (scoring 35/42) on the International Math Olympiad (IMO). This earned gold-medal status, and the test was done under the same rules humans face (4.5 hours, no tools or internet), producing natural-language proofs. https://x.com/alexwei_/status/1946477742855532918

Right on their heels, Google announced that an advanced version of their Gemini Deep Think also achieved an equivalent gold-medal score of 35/42 on the same International Math Olympiad test. https://deepmind.google/discover/blog/advanced-version-of-gemini-with-deep-think-officially-achieves-gold-medal-standard-at-the-international-mathematical-olympiad/
-
“Vibe physics” aka why we won’t tolerate AI use
It may be helpful to realize that LLMs are just one type of model. They have largely evolved into reasoning models. You’ll notice this more easily when ChatGPT-5 releases in the next few weeks, but several models like Grok4 and others are already displaying those properties. In the end, the answer is only as good as the question. Prompt engineering is becoming far less relevant now that the models are getting so much better, but it’s still a useful art to practice.
-
“Vibe physics” aka why we won’t tolerate AI use
I support recognizing the limitations of models and recognizing where they're likely to go wrong, but encourage caution here about over-generalizing these results. Yes, it's correct that a "language" model struggles more with math and physics. No doubt there, but we're no longer really using pure language models; we're rapidly moving into reasoning models and mixture-of-experts applications where multiple models get queried at once to refine the answer.

I dug into the arXiv paper and found two concerns with the methods in this study that give me pause. One is that they trained the models themselves. Who knows how good they are at properly training and tuning a model; that is very much an art where some people are more skilled than others. Two is that they trained models that are not SOTA and are relatively low-ranking in terms of capability and performance. It's a cool paper that reinforces some of our preconceptions, but they're basically saying the Model T is a bad car because the air conditioning system, which was built by a poet, doesn't cool a steak down to 38 degrees in 15 minutes... or something like that.

Know the models' limitations, sure. Know that math and physics don't lend themselves to quality answers based on predictive text alone. But also know those problems were largely solved months ago and only get better every day. The models we're using today are, in fact, the worst they will ever be at these tasks, since they get better by the minute. /AIfanboi
-
The Official JOKES SECTION :)
I was tired when I read this and my first reaction was, how do I secure a card at the MILF Library?
-
A number of people say Trump is not listening to the courts?
Maybe, and then republicans put 19 justices on the SCOTUS when they retake power and our banana republic continues its slide.
-
A number of people say Trump is not listening to the courts?
Technically, parties didn’t really exist yet when the founders wrote our governing docs, and Washington warned against them
-
A number of people say Trump is not listening to the courts?
Like someone from the justice department?
-
A number of people say Trump is not listening to the courts?
Let’s say the court rules against the president. Does the court have its own police to enforce its ruling?
-
OT from An appeal to help advance the research on gut microbiome/fecal microbiota transplantation in the US.
Michael - Why do you keep neg repping me? My point was valid. The internet is full of people being insincere and pretending to be something they’re not. Creation of fake accounts to bolster a position is very common. Is that happening here? I guess not, but that doesn’t mean I’m wrong nor deserving of repeated downvotes.
-
An appeal to help advance the research on gut microbiome/fecal microbiota transplantation in the US.
Thanks. My point remains valid. Appreciate the neg rep tho