I know there are many of us waiting for the LLM hype cycle to blow over so the AI field can finally move forward again. Yes, LLMs are "empty calories" as far as the substance of their mechanisms is concerned.
I see AGI and AI consciousness as two completely separate topics, because the "AI" in "AGI" is a technical denotation with a performative meaning. While conscious machines are impossible, AGI seems inevitable to me; it's just a matter of getting all the behaviors in. If behaviors are all people look for, then sooner or later they're going to get those behaviors. Some people equate behaviors with consciousness, and that's bad.
It also bothers me that basically all the literature out there frames the problem as one of understanding minds, instead of understanding what a machine fundamentally is and does (...hello? We're building a machine, right? These things don't build themselves, you know...)
p.s. Someone please do something about Anthis and his crackpot "Institute" https://thehill.com/opinion/cybersecurity/3914567-we-need-an-ai-rights-movement/