Everything posted by TheVat

  1. Failure to wash a toad? I will be contacting the relevant animal welfare agency in your country. Thanks to @Ken Fabian for pointing out the alkalinity of wet wood ashes. I had not realized the pH could go up to 12. I wish I could edit or delete my misinformation in the earlier post. I was somehow thinking of the ashes as dry and the toad also, not fully registering it went into wet foliage on a wet night.
  2. That's a helpful short essay. He has a strong point that we will be misusing the term "explanation" when we try to have it encompass experience. As he says, I can in theory offer a complete explanation of how I see the color red, but that cannot include the phenomenal experience I have in so doing. Subjective experience, or qualia as the cognitive philosophers call it, is simply a different category from objective functional explanations. His concluding comment is spot on (which, to my brain, looks like an arrow hitting a bullseye)... Consciousness as we have been discussing it is a biological process, explained by neurobiological and other cognitive mechanisms, and whose raison d'être can in principle be accounted for on evolutionary grounds. To be sure, it is still largely mysterious, but (contra Dennett and Churchland) it is no mere illusion (it’s too metabolically expensive, and it clearly does a lot of important cognitive work), and (contra Chalmers, Nagel, etc.) it does not represent a problem of principle for scientific naturalism.
  3. Don't know. Dogs chew a lot of things, sometimes just for something to do, or maybe it's also dental care. My neighbor's dog chews chunks of wood, and sometimes will eat poop or clods of dirt. Hard to say. If you put out a mineral lick (like ranchers use) and they start licking that, and stop nibbling ash, that might indicate something. Sometimes with horses it's just about salt. Out here in the winter you will sometimes see the bison in the State Park licking road salt. They will gather in groups in the middle of the road and ignore drivers trying to get through.
  4. The ash is harmless. Donkeys and horses eat it when they crave minerals, like potassium. I gather being covered with ash does decrease the odds of a toad being kissed by a beautiful princess.
  5. The whole "what's it like" discussion in philosophy of mind addresses what's called the Knowledge Argument, against physicalism. https://en.wikipedia.org/wiki/Knowledge_argument This summarizes the argument and the famous thought experiment called Mary's Room, developed by Frank Jackson. Dennett, predictably, argues that there's no need for qualia.
  6. I think a snapshot is static, but its meaning is dynamic. So your later analogy might be somewhat valid. We don't really get the dynamics of a mental state from just seeing the connectome map. In your particle analogy, we don't know the momentum of the particles. In the connectome map, we don't have the kind of dynamic picture that would allow us to say, "When Alice's C4905 transmedial fiber fires, plus (etc.), then she is seeing a soft red glow." We don't know rising or falling activation potentials or numerous other dynamic conditions back through time to when baby Alice was figuring out the soft red glow and configuring those transmedial fibers and synaptic sensitivities and so on. Maybe it's like penetrating a lot of noise and we just can't. But who knows? Maybe someday.
  7. I think the stakeholders on complexity emergentism are going with the assumption that type of complexity matters. I would agree a rigid structure of transistors pushed together like Legos is highly unlikely to be the sort of complexity one might find in an animal connectome. We are very distant from understanding the connectome or its idiosyncrasies, as Epstein points out, so I could agree that invoking its complexity is, at this time in history, hand waving.
  8. I think it relates to the idea that two people can have the same thought but each brain will show a different set of pathways for that same thought. Each brain develops connections and signal paths in an idiosyncratic way (that's the price of plasticity). So the converse of that is that you could have two identical connectomes which would be having entirely different thoughts and memories. A connectome map does not consistently correspond to a particular pattern of thought. Human neural architecture is not built from off-the-shelf standardized components. So you would need my entire developmental and experiential history to know what my current connectome snapshot means. And maybe not even then?
  9. I will be happy to take the cat outside if there is less wind. Maybe it will not matter.
  10. Is this similar to holographic memory theory? But even if not, I think it illustrates how a brain could recreate an experience without retrieving it from any kind of register. It would make reflection a better term than processing. IIRC, Penrose has also speculated that human brains are not algorithmic. I will try to find a link if the chat goes that way.
  11. That may depend on where one wants to land on the question of AI and consciousness. I have found his paper quite thoughtful and it is nudging me to review my notions of the popular analogies between human brains and digital processors as we know them. The Epstein paper he linked also dashed some cold water in my face, especially regarding how little we know about the causal operations of brains. I want to marinate for a few days on that one. That said, I am disappointed when anyone uses terms like "hysterics" against anyone. That's a putdown rooted in misogyny and myths about the psyche, but maybe its roots are being forgotten. Hope we can move past that. Talking of self-diagnosing tires seems a little off the topic, but maybe not. Whatever consciousness refers to, it seems to be something emergent in highly complex and multilayered systems, so that seems like the place to turn the light and try to discern causal efficacy.
  12. Then why not save typing and just use "god"? If you have no belief, then it's a handy term for a deity. Again, choosing a term with specific implications is what people do when they harbor specific ideas. Intelligent designer implies a being with intelligence and engineering skills that are applied. You can't really avoid implications when you use language.
  13. There appears to be more to it, when anyone adds "designer." By adding that specific role, it appears you are positing that a hypothetical deity designed the universe. How does that square with the claim to have no beliefs, and the claim that ultimate reality is unknowable? Just trying to sharpen the thinking on how we label things.
  14. Selection of terms often indicates how one leans in one's beliefs. Intelligent designer has specific implications about what one believes a god would be if there were such a being. Some beliefs don't see a god as an engineer, but rather as some vast mind that just passively exists and watches. Or is just a summation of all conscious life in the universe. Just saying, while you may state you have no beliefs on the matter, you have selected a preferred term that suggests what you would believe.
  15. I've started reading this - one of the most fascinating papers in cognitive sciences I've seen. The anti-representational view of brains interacting with the world certainly deserves consideration. I thank you for sharing that. I was already aware that information processing was an imperfect analogy for what biological brains do, so I'm curious how the author will steer away from it. Don't know yet if I can agree with abandoning that model completely but will try to finish, check some related sources and get back here tomorrow. I will confess I always enjoy watching a paradigm get shaken up, even if it's one I subscribe to. 😀 Sorry to hear that. Your perspective is valuable IMO.
  16. I just mentioned Ted Chiang in the "Artificial Consciousness is Impossible" thread and cannot restrain myself from praising this brilliant sci-fi and speculative fiction writer. His short story, Exhalation, is a bit pertinent to that thread but I would recommend it to anyone who likes the genre. https://en.wikipedia.org/wiki/Exhalation_(short_story) After you're done reading, make sure you maintain proper air pressure!
  17. (my previous post continued) For one thing, we know the fundamental component of a human brain is a neuron. Neurons use symbol systems, mindlessly, through activation thresholds and firing rates and so on. You wrote (in the article in Towards Data Science; handsome fellow in the author picture): But we also have no experiential connection to the symbols that neurons send each other or the DNA strings that developed them. Those little blobs of jelly are, from my conscious perspective, all syntax and no semantics. They just go click-click-click at each other. They are unsentient electrochemical machines which know nothing of meaning. The meaning lies in the domain of that emergent process I casually call "me" or "Paul" or "my wife's unpaid handyman." In emergent processes, meaning doesn't travel all the way down the various operational levels. Lacking semantics at one functional level does not prevent its emergence at another. You also wrote: To the machine, codes and inputs are nothing more than items and sequences to execute. There’s no meaning to this sequencing or execution activity to the machine... So this machine is akin to a neuron. The neuron mindlessly handles inputs, sequences to execute. But when we put 85 billion of them working together, we get Paul or David. If we have 40 billion we get Trump. Meaning and understanding emerge gradually as one goes from one electrochemical machine to billions. There is no fundamental reason this could not happen with billions of virtual machines or billions of processors made of gold leaf and compressed air (IIRC, that's a Ted Chiang story). Unless there is something magical about biology, some remainder of Bergson's Vitalism that turned out to be real.
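The "activation thresholds and firing rates" picture of a neuron in the post above can be made concrete with a toy leaky integrate-and-fire unit. This is a deliberately crude sketch, not a claim about real neurons; the threshold, leak rate, and input values are all invented for illustration.

```python
# Toy leaky integrate-and-fire unit: all syntax, no semantics.
# Threshold and leak values are arbitrary illustration choices.

def simulate(inputs, threshold=1.0, leak=0.9):
    """Accumulate leaky input; emit a spike (1) when the
    potential crosses threshold, then reset to zero."""
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = potential * leak + x   # leaky integration
        if potential >= threshold:
            spikes.append(1)               # a mindless "click"
            potential = 0.0                # reset after firing
        else:
            spikes.append(0)
    return spikes

print(simulate([0.5, 0.5, 0.5, 0.2, 0.9]))  # → [0, 0, 1, 0, 1]
```

The unit manipulates numbers with no notion of meaning, which is the point being made: semantics is nowhere at this level, yet nothing in the sketch rules out its emergence when billions of such units interact.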
  18. I think there's a basic problem I alas can't seem to get at with your leveraging underdetermination into impossibility. I will reexamine your paper and try to revisit this later. I respect the work you are doing even if I'm uncertain about your conclusions. Well, some algorithms are evolutionary, such as those found in metaheuristics. I think it's worthwhile to be acquainted with genetic algorithms. Not all machine states, even at our present primitive level of tech, are simple execution of a line of code. IOW they do not originate from what IT folks call expert rule systems. (I think Searle was quite right to dismiss such ERS coding as incapable of sentience.) https://www.turing.com/kb/genetic-algorithm-applications-in-ml Also, the general structure of an argument against algorithmic paths to conscious cognition seems, again, susceptible to the reductio of: it ultimately disallows the coded signals between living neurons to ever emerge as a conscious process. I.e., an absurd position. I keep pointing to this vulnerability in "AI consciousness is impossible" arguments because I think it's a serious one.
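For readers not acquainted with the genetic algorithms mentioned above, a minimal sketch follows. The bit-string "fitness" function, mutation rate, and population size here are invented purely for illustration; real metaheuristics are far more elaborate. The point is that the final behavior is evolved, not written line by line by the programmer.

```python
import random

# Minimal genetic algorithm sketch: evolve bit-strings toward all ones.
# All parameters and the fitness target are arbitrary illustration choices.

def fitness(bits):
    return sum(bits)  # count of 1s; maximum fitness == len(bits)

def evolve(length=20, pop_size=30, generations=100, mutation=0.02, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)       # single-point crossover
            child = a[:cut] + b[cut:]
            # flip each bit with small probability (mutation)
            child = [bit ^ (rng.random() < mutation) for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

No line of this code says "produce a string of twenty ones"; that outcome emerges from variation and selection, which is why such machine states are not simple executions of an expert rule system.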
  19. In the old days, ETs were made of Mars-ipan. @Moontanman Good one. Boebert apparently has a different version of events. I sense that her basic life goal is to get as much Bad Girl attention as possible. Probably all started with a distant father whose attention she craved. Or maybe she's just an idiot. One shouldn't underestimate the power of innate stupidity, especially if it likes prancing around with an AK-47.
  20. In an attempt to understand a British definition of unbearable, I looked up average July daily high temperatures for Sheffield UK (it seemed sort of in the middle) and then the US plains town I lived in for most of my childhood. The temps were 70 °F and 92 °F, respectively. As you may imagine, pool halls or snooker halls without AC were as common as unicorns in Kansas. 😀
  21. The Hard Problem seems more about epistemic limits. Many scientific theories are underdetermined but we still accept that they work. Conscious experience, however, can only be directly known from the "inside" (qualia, subjectivity), so a skeptical stance may always be taken as regards any other being's consciousness - you, the King of England, a sophisticated android that claims to be conscious. There is no scientific determination that a being engineered by natural selection (I'm using design broadly, in the sense that a design, a functional pattern, doesn't have to have a conscious designer but may arise by chance) is conscious, so we won't get that with an artificial consciousness either. Bernoulli's principle is NOT underdetermined, because when we design a wing using it we can witness that the plane actually flies (to use @mistermack's example). Any principle of the causal nature of a conscious mind, its volitional states, its intentionality, is likely to be underdetermined. But that isn't equivalent to saying it is impossible for such states to develop in an artificial being. I can't really see that we can reject volition developing in a machine because the designers solely possess volition. We humans, after all, as children develop the ability to form conscious intentions to do things by learning from parents and adults around us. Our wetware is loaded with a software package. We don't then dismiss all our later decisions in life as just software programs installed by them. We don't say, "I am just executing parental programs and have no agency myself. All volition rests with Mom and Dad." This presupposes that machines can never be developed with cortical architecture, plasticity and heuristics modeled on natural systems and thus be able to innovate and possibly improve their own design. The designed becomes the designer - wasn't this argued earlier in the thread and sort of passed over?
Second, again you still seem to deny volition by fiat, as if it is a process that simply cannot be transferred. I think you aren't proving this. Why can't an android "baby" be made, which interacts with its environment and develops desires and volitions regarding its world? IOW, not every state in an advanced AI must be assumed to be programmed. That assumption just creates a strawman version of AI, resting on the thin ice of our present level of technology.
  22. Off top of head...
      • Earth tubes
      • Reflective coatings on roofs
      • IR-reflecting films on window glass
      • Shifting home activities and sleeping to basements (I know basements are less common in SoCal and the Southwest due to caliche and similar soil issues)
      • Geothermal heat pumps
      • Ceiling fans set to run counterclockwise
      • Preparing cold meals to avoid stove usage
      • Etc.
  23. And, being a process, is emergent rather than intrinsic. If so, then there is no complete model of a conscious process that would decide against it happening in a sophisticated artificial neural network. When you assert the incomplete nature of models, you reject your own OP thesis. It seems that way to me too.
  24. Supplemental reading on how we are straining our biosphere. https://www.science.org/doi/10.1126/sciadv.adh2458 The planetary boundaries framework (1, 2) draws upon Earth system science (3). It identifies nine processes that are critical for maintaining the stability and resilience of Earth system as a whole. All are presently heavily perturbed by human activities. The framework aims to delineate and quantify levels of anthropogenic perturbation that, if respected, would allow Earth to remain in a “Holocene-like” interglacial state. In such a state, global environmental functions and life-support systems remain similar to those experienced over the past ~10,000 years rather than changing into a state without analog in human history. This Holocene period, which began with the end of the last ice age and during which agriculture and modern civilizations evolved, was characterized by relatively stable and warm planetary conditions. Human activities have now brought Earth outside of the Holocene’s window of environmental variability, giving rise to the proposed Anthropocene epoch (4, 5). Planetary-scale environmental forcing by humans continues and individual Earth system components are, to an increasing extent, in disequilibrium in relation to the changing conditions. As a consequence, the post-Holocene Earth is still evolving, and ultimate global environmental conditions remain uncertain. ... A summary, with less technical terminology: https://apnews.com/article/earth-climate-change-biodiversity-environment-pollution-c8582c3ae0344b5a88cc38cd8e725702
  25. A rather strange turn this thread has taken. I am hoping that toilet seats cannot ever be conscious. Anyway, it seems like an argument against emergentism is being made... and its unintended consequence is that humans cannot possess agency, intentionality, or consciousness.
      1. Consciousness cannot be accounted for by physical particles obeying mindless equations in accordance with natural laws. (Such particle interactions are just machinery, like toilet seats or carburetors or thermostats.)
      2. Human beings seem to be made up of physical particles.
      3. To the best of our knowledge, those particles obey mindless equations, without exception, and without a causal role for higher-order operations. (No downward causation.)
      4. Therefore, consciousness does not exist. We are all zombies.