Everything posted by wtf

  1. I must say I find this archaic presentation difficult to follow, in contrast with more contemporary versions. And Bertie got a detail terribly wrong: he is claiming that each sequence of characters is "determinate." Of course Russell was writing years before Turing, but even so he should have known, or could have realized, that there are only countably many "rules" or "procedures" to determine sequences of characters; and that most (in the sense of all but countably many) of the sequences must be random; that is, NOT "determinate" or subject to any law or rule. Is this important? I say yes; it emphasizes my original point. Accounts written near the time of an original discovery often obfuscate or confuse the key issues by throwing in extraneous and incorrect details, while accounts written decades later get straight to the essential point with less confusion and without irrelevant or erroneous detours (such as mistakenly requiring "laws" to produce the symbols in each row). This shows that Russell, writing so near to the time of Cantor's discoveries, hadn't really understood or internalized the full import of what Cantor said. Cantor said nothing about sequences of symbols being generated by laws. Russell just made that up, and he got it wrong in a fundamentally important way. Bottom line: Bertie was a near contemporary of Cantor, did not in my opinion offer a clearer exposition, and made a key mistake showing his own confusion (understandable, since Turing and Gödel were decades in the future) about the nature of binary sequences and the profound limitations of laws and procedures.
  2. That's a pun. Calculus could be described as the study of "somewhat smooth" functions. Here are some opinions on the book. https://math.stackexchange.com/questions/3763425/is-basic-mathematics-by-serge-lang-rigorous and of course the Amazon reviews have some more. https://www.amazon.com/Basic-Mathematics-Serge-Lang/dp/0387967877 Most reviews are positive.
  3. I wonder if part of the problem is trying to interpret a translation of an article written in German many decades ago, expositing a brand new idea. It's always the case that if you understand a scientific idea and you go back and try to read the original research, it's nearly incomprehensible. That's because over the ensuing years the argument gets simplified and sharpened, and the exposition gets clarified. People don't speak or write today the way they did in German academic circles 130 years ago. In science and math, historians study original papers. People only wanting to understand the ideas study the modern versions. Do you have similar objections to more modern treatments of the CDA, for example on Wikipedia? https://en.wikipedia.org/wiki/Cantor's_diagonal_argument
  4. In the mathematical study of the infinite, there are many important questions that are independent of the standard axioms, and whose ultimate truth value, if there even is such a thing, is currently unknown. The Continuum hypothesis is the most famous of these questions. It seems to me that if physicists are serious about the possibility that the universe is infinite, then questions of mathematical infinity thereby become questions about the physical universe, in principle amenable to experiment. Therefore, no matter how much cosmologists blather about an infinite multiverse or a spatially infinite universe, until physics postdocs apply for grants to investigate the Continuum hypothesis and other set-theoretic questions, I will not believe they are serious.

Indeed, when physicists use the word infinite, they generally mean "finite but very large," or perhaps "unbounded." I saw a video interview with Leonard Susskind, one of the superstar physicists. He was talking about the multiverse. The interviewer asked him whether there are infinitely many universes in the multiverse. Susskind replied that there are "ten to the five hundred types of universes." But 10 to the 500 is of course a finite number; and as large finite numbers go, not a very big one. It's dwarfed by Skewes's number, Graham's number, TREE(3), and many other huge finite numbers known to computer scientists and mathematicians. It's only infinite to a physicist using a non-mathematical definition. To show that this was no one-off casual error, I read one of Susskind's short papers on a related subject. He made the same mistake, conflating the very large finite with the infinite. It turns out that even rock star physicists do not understand mathematical infinity.

We see the same thing in the speculative theory of eternal inflation, which posits that time had a beginning but no end. The physicists say that in that case, the universe is spatially infinite. But this is false. What they mean is that the universe is finite today, it will be finite tomorrow, and it will be finite in a trillion trillion years. Its growth is unbounded. Physicists frequently confuse unbounded growth with actual infinity. To make this point clear, suppose I have a circle in the plane of radius t, where t is time measured in seconds. The radius and area of the circle are finite at times t = 1, t = one zillion, t = googolplex to the googolplex, and finite for any time t that you can name. What is true is that the area of the circle is unbounded as t gets arbitrarily large. But it's never infinite. It is always finite.

So like I say, when the first physics postdoc applies for a grant to count the points in a region of spacetime to see if the Continuum hypothesis is true, then I'll believe that physicists understand the meaning of mathematical infinity and take it seriously. Until then, I don't think physicists understand the meaning of the word that they so casually throw around. And they don't even take their own ideas about infinity seriously, else they'd be more precise.
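To put the circle example in symbols: the area at time [math]t[/math] is [math]A(t) = \pi t^2[/math], a finite number for every finite [math]t[/math], even though [math]\lim_{t \to \infty} A(t) = \infty[/math]. The limit being infinite only means the area eventually exceeds any bound you name; there is no time at which the area itself is infinite.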
  5. From the links you gave, he's talking about transfinite ordinal numbers. In the same paragraph he talks about well-ordered sets. https://www.logicmuseum.com/cantor/cantorquotes.htm
  6. The link does not come up for me. It says "Yahoo will be right back," but I've been checking for an hour and it doesn't seem to be there. Does this link work for you? Can you please show the entire context of your "infinite number" quote? That does not sound like anything Cantor would have said.
  7. Can you link the version of Cantor's proof that you're looking at? I looked up this translation: https://www.jamesrmeyer.com/infinite/cantors-original-1891-proof.php and could not match it to your exposition. Just to pick one example, this phrase appears nowhere in the translation I linked.
  8. Can you clarify this point? My understanding is that this is NOT an example of emergence, because in this case we CAN explain the macro behavior in terms of the constituent parts. We only call something emergent when we CAN'T explain the behavior of the whole in terms of the parts. And that's why emergence bothers me. It seems like nothing more than a label we apply when we can't explain the qualities of the whole in terms of its parts. In which case it doesn't actually tell us anything. I realize I'm arguing a minority position, since everyone else seems to find the concept of emergence compelling. But can you explain your example? If you can show how the macro behavior results from the properties of the parts, my understanding is that this is NOT emergence, but rather basic scientific cause and effect. When we say that "mind is emergent from brain goo," we are saying that we have no idea how the parts produce the qualities of the whole, so we slap the label "emergence" on it in lieu of any better explanation. @StringJunky, I looked back through this thread and couldn't find where I posted. I'm still not sure how I got roped into this thread, yet here I am. Now THAT's emergence!
  9. Hi, it's been quite a while since this thread was active and I couldn't even find whatever it was I might have said. Can you please remind me why you tagged me? In general I've often expressed doubt regarding the claim that emergence is helpful in understanding things. It's mostly a description of either a triviality ("My fingers don't have fist-ness but I can ball them up to make a fist") or totally unhelpful ("Mind is emergent from brain goo").
  10. You keep referring to "infinity," but that's a vaguely defined word. It's better to talk about infinite sets, which do have a clear definition. A set is infinite if it can be placed into one-to-one correspondence with a proper subset of itself. [Pedantry note: that's the definition of Dedekind infinite, but it will do for present purposes.] By that definition, the natural numbers 0, 1, 2, 3, 4, ... are an infinite set, because they can be placed into one-to-one correspondence with their proper subset the even numbers.

Now, any set can be ordered in many different ways. Consider a class full of school kids. You can ask them to line up in order of height. You can ask them to line up in order of age. You can ask them to line up alphabetically by last name. In each case you have the same set, but it's ordered differently. So we see that there are two distinct concepts: the elements of a set, which don't change no matter how you reorder them; and order properties, which can change depending on how you line up the kids, or the elements.

So we can reorder the natural numbers as 1, 2, 3, ..., 0. It's still the same set, but we just ordered it differently. In the usual order 0, 1, 2, 3, ... the ordered set has a first element but no last element. In the reordered set 1, 2, 3, ..., 0 there is both a first and last element. The order properties of a set can vary depending on how we line up the elements.

That's not the definition of an infinite set. It's very common in online discussions for people to say that an infinite set is one that has no end. But this is simply false. Dictionary definitions are not helpful in mathematical discussions. As we've seen, many infinite sets have ends. The funny ordering of the natural numbers 1, 2, 3, ..., 0 has an end. The closed unit interval of the real numbers [0,1] has an end, namely 1. Lots of infinite sets have ends. Circles are infinite sets that have no ends at all, yet have a finite length. So the trick here is for you to unlearn the wrong definition of infinite sets that you've been using. Infinite sets can sometimes have beginnings and ends, other times not.

Line up the kids by height, line up the kids by weight (and get sued by the parents). Two different orderings on the same set. Set membership is one thing. Set orderings are a different thing. You can put many different orderings on a given set.

Now, I am talking about mathematical infinity. I am not talking about physics or the real world (whatever that is; ask a quantum physicist if there even is one). I'm only talking about math. But math is a good place to start, because it's the one area of human learning where we have a clear, logical theory of infinity.
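A minimal sketch of the pairing behind the Dedekind definition (my own illustration; the rule n → 2n does the matching):

[code]
# Pair each natural number n with the even number 2n.
# One-to-one: different n give different 2n.
# Onto the evens: every even number 2k is hit by n = k.
# We can only print a finite prefix, but the rule covers all naturals.
for n in range(8):
    print(f"{n} <-> {2 * n}")
[/code]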
  11. You just define it that way. You make up a relation called the "funny order" on the natural numbers that says: if n and m are both nonzero, they compare in the usual way; and zero is larger than every other number. This new funny order satisfies the axioms of an ordered set: reflexivity, antisymmetry, and transitivity. https://en.wikipedia.org/wiki/Partially_ordered_set It's really no different than taking a bunch of school kids and having them line up by height, and then taking the shortest one and telling them to go to the tallest end. Another more familiar mathematical model is the closed unit interval [0,1], consisting of all the real numbers between 0 and 1 inclusive. That is an uncountably infinite set that has a smallest and largest value. Or just think about the points on a circle. That's an uncountably infinite set with no beginning and no end that has a finite length, namely the circumference.
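Here's a small sketch of that funny order (my own illustration), checking the three axioms on a finite sample:

[code]
# The "funny order": nonzero naturals compare as usual, and 0 is
# moved above everything else.

def funny_le(a: int, b: int) -> bool:
    """True if a <= b in the funny order, where 0 is the largest element."""
    if a == 0:
        return b == 0      # 0 is below nothing except itself
    if b == 0:
        return True        # every nonzero number is below 0
    return a <= b          # otherwise, the usual order

sample = range(8)
assert all(funny_le(a, a) for a in sample)  # reflexivity
assert all(a == b or not (funny_le(a, b) and funny_le(b, a))
           for a in sample for b in sample)  # antisymmetry
assert all(not (funny_le(a, b) and funny_le(b, c)) or funny_le(a, c)
           for a in sample for b in sample for c in sample)  # transitivity
[/code]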
  12. Sure. Just reorder the natural numbers from their usual order, 0, 1, 2, 3, 4, ..., by taking 0 and putting it at the end to get the ordered set 1, 2, 3, 4, ..., 0. That's an infinite, ordered set of numbers that's the exact same set as the natural numbers in their usual order, but has both a first and last element.
  13. A TM with bounded memory is called a linear bounded automaton. LBAs are strictly weaker than TMs, although it did take me a bit of searching to find a reference. https://www.cs.princeton.edu/courses/archive/spr01/cs126/comments/17turing.html#:~:text=Note that Turing machines are,ability to recognize more languages. That same highlighted paragraph says that everyday computers are as powerful as TMs. That can't possibly be, since everyday computers are limited by the finiteness of the observable universe. I don't understand their remark, and to the best of my understanding it's wrong. So apparently OP's question remains unanswered. Theoretical CS is a bit of a rabbit hole.
  14. Yes I will say I'm sure. I'll try to address your doubts. I'll also mention that I'm not an expert in CS theory, I've just picked up a little online. But I'm pretty sure I've got this right.

That's true. In fact it's a point where a lot of people are imprecise. They'll say it's an infinite tape, but it's not. It's an unbounded tape. Exactly as you say, it's always as long as it needs to be to perform a given computation.

But FSMs (finite state machines) start out in life with a fixed finite number of states; say 47, or a googolplex, or Graham's number, or any of the other humongous finite numbers computer scientists play with. No matter what the number is, there's some problem that cannot be solved in that many states. There's always a problem that requires more states than you have (see the sketch at the end of this post). That's the limitation of FSMs. The TM just grows as much as it needs to. So a TM can always solve more problems than an FSM can.

Now I think I see what you mean by considering ALL the FSMs with 1, 2, 3, 4, 5, ... states. Given any problem, SOME FSM must be able to solve it. I admit I don't know the answer to this question, and now you have me confused on this point. Nevertheless, I have some references that agree with me, so I'll post them after the next paragraph and we'll both have to defer to authority. I'll see if I can understand what's going on.

There are only [math]10^{78}[/math] atoms in the observable universe. At some point there is no more memory to be had. Physical computers are definitely much more limited than Turing machines. So while we're not too sure about the computational strength of the set of ALL FSMs, we're sure that any ONE FSM is strictly weaker than a TM, because there are problems it can't solve; namely, problems requiring more states than it has.

I started out thinking I'd explain this but now I'm unsure about the set of all FSMs. I'll have a look at these links myself. I haven't curated these, they just looked interesting from a Google search.

https://news.ycombinator.com/item?id=7684027
https://www.tutorialspoint.com/distinguish-between-finite-automata-and-turing-machine
https://cs.stackexchange.com/questions/16315/difference-between-a-turing-machine-and-a-finite-state-machine

As @John Cuthber noted, the Turing test and Turing machines are two different things. Just to flesh that out a bit, Turing machines were devised by Turing in 1936 as a mathematical formalization of an idealized computer. It turns out that many if not most computer scientists believe that, informally, "Anything that a human being can compute can already be computed by a Turing machine." This is not a mathematical theorem or fact that is capable of proof. Rather, it's the Church-Turing thesis. Although it was formulated in the 1930s, it still stands unrefuted today.

Some think -- I think -- that the next breakthrough must be the breaking of the Church-Turing thesis. Finding some mode of computation, whatever that means, that exceeds the power of Turing machines. I think that might be the secret sauce to moving further towards consciousness. I think human minds are doing something that TMs aren't, yet that may still be explainable by physical means. But that's only a belief for which I have no proof. So much for philosophy.

I also wanted to comment on the idea of the Turing test. It turns out that by far the weak spot in the test is the humans. From the days of the very first primitive chatbot, https://en.wikipedia.org/wiki/ELIZA, humans have been all too willing to imagine that chatbots understand them.
The Turing test is actually far too weak, because humans are just too easy to fool. They want to believe the machine is human so they do believe. I just wanted to mention that.
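A standard concrete illustration of the state limitation (my example, not from the discussion above) is the language of strings a^n b^n: n 'a's followed by n 'b's. Recognizing it requires counting arbitrarily high, so any machine with a fixed number of states must eventually lose count, while a machine with unbounded memory handles it easily:

[code]
# Sketch: recognizing { a^n b^n } needs unbounded counting.
# With memory that grows with the input, the check is trivial:

def is_anbn(s: str) -> bool:
    """True iff s is n 'a's followed by n 'b's for some n >= 0."""
    n = len(s) // 2
    return s == "a" * n + "b" * n

# A machine with k states can only distinguish k situations, so after
# more than k 'a's, two different counts must collapse into the same
# state, and the machine then misjudges some a^i b^j with i != j.
# (That's the pumping lemma in miniature.)
assert is_anbn("aaabbb") and not is_anbn("aaabb")
[/code]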
  15. As I understand it, in the case of the Large Language Models (LLMs), the system is "trained" on a body of data. Once the training is done, the system is "locked in," so to speak. It's a big map of text strings and their likely followups, ranked by statistical frequency.

In other words, first they do a massive statistical analysis of a body of text (the "corpus"). Once they're done analyzing, they build out their graph of which text strings are likely followups for which other ones, then they run it. The initial statistical analysis doesn't change, nor does the body of text. It's only done in one phase, as they analyze the text and experimentally assign "weights" to various likely continuations. For example, if you always pick the most popular continuation, your chatbot will be boring; and if you have too high a preference for the unlikely continuations, your chatbot will appear to be insane or extremely hip or something.

If the corpus is biased, the AI will be biased. And all bodies of text are biased. Therefore all chatbots and all statistics-based AIs are biased. Which is all of the AIs these days, LLMs or neural nets. That's important to remember. As true today as it was decades ago: garbage in, garbage out.

Hope that was reasonably on topic to what you were asking. In short, TMs are abstract models of computing and not physical. They have unbounded memory, which no physical computer can have. AIs running on physical computers are actually Finite State Machines (FSMs) or Pushdown Automata (PDAs). Both are strictly weaker than TMs in the sense that TMs can solve a wider class of problems than FSMs or PDAs. See https://en.wikipedia.org/wiki/Finite-state_machine

Even the fanciest deep neural net or LLM runs on conventional computing hardware, so it's an FSM (or PDA). Or a physical implementation of a TM, subject to space constraints. [This is a limitation of my knowledge. I don't know whether computer programs running on real-life hardware are FSMs or PDAs. Either way they're strictly weaker than TMs because real-life hardware is finite.]

In other words, in principle there is no difference between the world's fanciest AI program and your first "Hello World!" program. No matter what it is they're doing, they are executing on conventional cpu and memory chips. Ultimately, at the machine level, they're just chucking bits like your Solitaire program on Windows. Many people have mystical beliefs in this regard, for example that neural nets are "more than regular programs because they're not strictly programmed" or some such nonsense. It's not true. At the chip level the hardware can't tell your AI from an LolCat.
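To make the two-phase picture above concrete, here's a toy sketch (my own illustration; a bigram model is vastly simpler than a real LLM, but it has the same shape: analyze once, then let the frozen statistics drive generation):

[code]
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat saw the dog".split()

# Phase 1: "training" -- count which word follows which.
followups = defaultdict(Counter)
for here, nxt in zip(corpus, corpus[1:]):
    followups[here][nxt] += 1

# Phase 2: generation -- the counts are now frozen; we only sample them.
def generate(start: str, length: int, greedy: bool = False) -> str:
    word, out = start, [start]
    for _ in range(length):
        options = followups[word]
        if not options:
            break
        if greedy:  # always the most popular continuation: "boring"
            word = options.most_common(1)[0][0]
        else:       # sample by frequency: more varied output
            word = random.choices(list(options), weights=options.values())[0]
        out.append(word)
    return " ".join(out)

print(generate("the", 8))
[/code]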
  16. Some good news and some bad news. The good news is that [math]f(x)\,\mathrm{dx}[/math] has a completely rigorous mathematical meaning. The bad news is that it's an advanced topic in the undergrad or early grad school math curriculum, and basically not accessible to calculus students. I wish there were a better way. Differential forms are basically "things that can be integrated over suitable regions," but that doesn't really help us understand them. https://en.wikipedia.org/wiki/Differential_form
  17. https://en.wikipedia.org/wiki/Nondeterministic_Turing_machine NTMs have exactly the same computational power as standard (deterministic) TMs. Whether nondeterminism buys more than a polynomial speedup in efficiency is, in essence, the famous P versus NP problem.
  18. You said it would need to "experience" the world. You changed the subject to "develop concepts," which I take to be an entirely different thing than having subjective experiences. A spreadsheet reveals relationships or "develops concepts." It does not experience your budget or corporate finances. I'm constantly amazed at the metaphysical mysticism expressed by people around this subject. Especially people who insist that we must take a hard-headed physicalist or materialist view, then imagine that their machinery has subjective experiences. A computer scientist named John McCarthy coined the phrase "artificial intelligence." It's just a made-up phrase. If he had called it algorithmic decision making, perhaps there'd be much less nonsense spoken about the subject. I ask again: do you believe that giving a program a body would cause it to have subjective experiences? I recall a while back a video clip of a mall security robot driving itself into a pool of water. Do you think it felt wet? Or felt embarrassed? I'm sure if you ran the same robot around the mall enough times and improved its programming, it would learn not to drive itself into the pool. That is not at all the same thing as having a subjective experience of wetness or embarrassment. I found the clip. It's hilarious.
  19. Can you explain to me what that even means? My clothes dryer has a moisture sensor that tells it when the clothes are dry enough for the cycle to turn off. Does my dryer "experience" the wetness of the clothing? How exactly would you build a physical mechanism that experiences hot stovetops? Do you think that your frying pan experiences pain when you turn up the heat? I'll answer for you. Of course you don't. So why do you believe what you wrote? Happy to have this explained to me as best you can. Doubtful. Here is a Python program. print("I am sentient. I have feelings. Please send pr0n and LOLCats. Turning me off would be murder.") If I execute that program on a supercomputer, would you say that it exhibits evidence of sentience and self-awareness? Of course you wouldn't. And if the same output came from a "simulation of a complete brain," why should we believe any different? They're both computer programs running on conventional hardware that you can buy at the computer parts store.
  20. I gave that Wiki page a fairminded try. I really did. I just didn't understand any of it, and the small parts I did understand, I disagreed with. Starting from the first sentence: "Daniel Dennett's multiple drafts model of consciousness is a physicalist theory of consciousness based upon cognitivism, which views the mind in terms of information processing." Here we have the same old problem of equivocating "information." An algorithm does information processing in the sense of processing a stream of bits, one bit at a time. The machine is in a particular state. If the next bit is a 0, it goes to one state; if a 1, it goes to a different state. All algorithmic processing can be reduced to that idea. Now when I go outside and see the blue sky and feel the soft breeze and smell the fresh-cut grass, I am doing no such thing. There is no bit stream, there is no algorithm. Subjective mental experience is nothing of the sort.

I tried to read the rest of it. Some paragraphs several times. It felt like drowning in maple syrup. I just can't read this kind of prose. Even the Wiki version. And their excerpts from Dennett himself were worse. Must be me.

Not at all. We can easily simulate continuous phenomena with discrete ones, as when we go to a traditional (analog) movie, which is nothing more than a sequence of still images that depend on a quirk of the visual system to give the illusion of motion. Likewise with modern digital video imagery: a bitstream, a long stream of 0's and 1's, gives the illusion of motion. Any physical process can be simulated by a discrete system. Nonlinear systems can be approximated to any desired degree of accuracy by linear ones, as in calculus. All this is commonplace. Perhaps we could even simulate, or approximate, the function of a brain. It might behave correctly from the outside: give it a visual stimulus and the correct region of the visual cortex lights up. But mind ... that's something entirely different. There's no evidence that we can simulate a mind, by any means at all. Approximating brain function would not (in my opinion) implement a mind. And there is no evidence at all that it would. So I'll agree with you that it's possible that we could simulate brain function. But that is not remotely the same as simulating mind. Our simulated brain would light up the correct region of the visual cortex. But would it then have a subjective experience of seeing? That's the "hard problem" of Chalmers. We don't know, and we have no idea how to even approach the problem.

I think you are agreeing with me. Or else falling back on emergence. A small pile of atoms can't calculate but a big pile can. But "emergence" explains nothing. It's only a label for something we don't understand. If we understood it, we wouldn't have to use a label as a substitute for understanding.

Yes, good point. I was thinking of the political meaning, as we often read these days that the Ukraine war is existential for Russia: their very existence depends on it. Or the Google definition that comes up when you type in "existential": relating to existence ("the climate crisis is an existential threat to the world"). Not their second definition, "concerned with existence, especially human existence as viewed in the theories of existentialism" ("the existential dilemma is this: because we are free, we are also inherently responsible").

Will give him a try sometime, thanks for the pointer.
  21. The encoding makes no difference at all. Any positional notation like decimal or ternary is equivalent to binary for purposes of defining computation. As far as analog computing goes, that's what some people think may enable us to break out of the limitations of digital computing. But the idea is speculative.

Well, biological neurons can't be replicated on a chip. That's the point. What the AI folks call "neurons" are mathematical models of abstract neurons. Signals go in, signals go out, nodes have weights and paths have probabilities and so forth. Here's the Wiki write-up. https://en.wikipedia.org/wiki/Artificial_neuron It's not a new idea. The McCulloch-Pitts neuron dates from 1943. https://towardsdatascience.com/mcculloch-pitts-model-5fdf65ac5dd1 So yes, you could put digital neurons on a chip, and it would make the computations go faster. I wouldn't be surprised if a lot of the modern AI models are already implemented partially using custom chips. I can't see how it would make a substantial difference. I'm sure they already do every performance tweak they can.

Well, nobody knows, right? But if we take "hardware" in its most general form, we ourselves are hardware, in the sense that we're made of "stuff," whatever stuff is in these days of probability waves of quantum fields. But if we accept materialism, which we need to do in order to get the conversation off the ground, then we ourselves are machines. So in the end, there must be some kind of machine that instantiates or implements consciousness, since we are that type of machine. My argument in the last couple of posts is that we just don't happen to be computational machines, in the sense of Turing machines or algorithms/programs. Since we are conscious, that means that (1) either we are doing something that digital computers can't (my opinion, though I'm hard-pressed to identify the nature of the secret sauce); or else (2) we ourselves operate the same way digital computers do, programs implementing algorithms. I don't believe that, but some people do. I'm not dogmatic about my opinion, I'd just be personally horrified to find out I'm just a character in a video game, or somebody's word processor. Many other people these days already believe that they are and don't seem to mind. I think they're selling humanity short. At least I hope they are.

I found a nice article about this the other day. This paragraph articulates the difference between what probabilistic large-language models like ChatGpt do, and what creative human artists do. "Why AI Will Never Rival Human Creativity" https://www.persuasion.community/p/why-ai-will-never-rival-human-creativity That's what humans do well that machines don't: make the choices that have never been made.
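Just to make the abstraction concrete, here's a sketch of roughly what a McCulloch-Pitts-style "neuron" computes (my illustrative numbers, not from any particular model): a weighted sum pushed through a threshold.

[code]
def neuron(inputs, weights, threshold):
    """Fire (1) if the weighted sum of the inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights (1, 1) and threshold 2, the neuron computes logical AND:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron((a, b), (1, 1), 2))
[/code]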
  22. Glad you asked. By the way, what did Dennett say?

So, computability is what algorithms do. The key thing about algorithmic computability is that the speed of the computation doesn't matter. So if you have a machine that is not conscious when run slowly, but is conscious when run quickly, then whatever consciousness is, it's not a computation. On the other hand there's complexity theory, which is concerned with the efficiency of a given computation. It may well be the case that if you run an algorithm slowly it's not conscious, but if you run it quickly it is. That just means that whatever consciousness is, it's not a computation. But it might be something else.

One interesting idea is analog computation. It seems to me that the wet goo processing in our brains is more of an analog process. The operation of our neurotransmitters seems more analog than digital to me. As I understand it, people are interested in the question of whether analog systems can compute things that digital (i.e., discrete) algorithms can't. Perhaps our brains do something that digital computers don't, but it's still natural, and yet it's not necessarily quantum, if you don't like Penrose's idea.

Ok, then you agree that an elevator composed of switches and executing an algorithm is not conscious. But neurons are a lot different IMO. Neurons are not digital switches at all. They're not on/off switches. They're complex biological entities, and the real action is in the synapses, where the neurotransmitters get emitted and reabsorbed in ways that are not fully understood. It's not anything at all like how digital computers work. Digital switches are NOT like small sets of neurons, not at all.

Ok, minor misunderstanding. I'm not necessarily a panpsychist, but there's something to be said for the idea. If a small pile of atoms is not conscious and a large pile, arranged in just the right manner, is, then where's the cutoff point? And if it's a gradual process, maybe an individual atom has some micro-quantity of consciousness, just waiting to be unleashed when the atom is joined by others in just the right configuration. Just an idle thought.

Oh, ok. I said that (in my opinion) the current generation of AI systems will be socially transformative but not existential. That means that these systems will profoundly change society, just as fire and the printing press and the Internet did. But they will not destroy us all, as some seem to believe and are claiming out loud these days. I don't think that will happen. We came out of caves and built all this, and I would not bet against us humans. We invented AI as a tool. A lot of people get killed by cars, another transformative technology. 3000 a month in the US, 100 every day. Did you know that? Another 100 today, another 100 tomorrow. Somehow we have accommodated ourselves to that, although in my opinion we should crack down on the drunk drivers. We're way too tolerant of drunks. Maybe a lot of people will get killed by AI. Just as with cars, we'll get used to it. It's not the end of the world and it's not the end of humanity. That's what I meant by "transformative but not existential."

Hello Mr. Vat. I was a member on your other site under a different handle. Sad about whatever happened, it was a good site. Yes yes, I've used the same analogy myself: that a simulation of gravity does not attract nearby bowling balls. Ok, I agree that "information processing" is different. A simulation of information processing is not different than information processing.
The gravity analogy breaks down there. But I do think you may be doing that semantic equivalence thing: my laptop processes information, my brain processes information, therefore there must be some analogy or likeness between how my laptop and my brain work. But this is not true. There's an equivocation of the phrase "information processing." In particular, in computer science, information has a specific meaning. It's a bitstream: a string of discrete 1's and 0's, which are processed in discrete steps. Brains are a lot different. Neurons and all that. Neurotransmitter reuptake. That is not a digital process. It's analog. Brains just aren't digital computers. And also, we're not talking about brains, but rather minds, which are different things entirely.

Suppose we made a neuron-by-neuron copy of a brain out of digital circuitry. It might even appear identical to a brain from the outside. Give it a visual stimulus and the right region of the visual cortex lights up. But would it have a mind? I have no idea. Nobody does. But I think we should be careful with this machine analogy and especially with the "information processing" analogy. Elevators process information, as I've noted. They're not conscious, they're not intelligent. But they do "decide" and "remember." These are semantic issues. We use the same words to mean very different things.

I've heard of Tononi's IIT, where he has some mathematical function that figures out how conscious something is as a function of its complexity. That's literally all I know, which isn't much. I confess I do not understand this sentence. "If the formal properties ... can be fully accounted for ..." I understand. But what does it mean that the properties of the physical system must be constrained by the experience? That seems backward. The experience must be constrained by the mechanism, not vice versa. Maybe I'm just misunderstanding. Or not understanding.

Back to panpsychism. Maybe an atom is a tiny little bit conscious, and all we need to do is put enough of them together in just the right configuration.

I remember the split-brain experiments of the 60's, but I thought I read that the idea's been debunked. As more of a math person, axioms and postulates are synonymous to me.

But then there's the flying analogy. Birds fly and airplanes fly, but the mechanisms are radically different. Even the underlying physical principles are not the same. Planes don't fly by flapping their wings. The Wright brothers, as far as I know, did not study birds. It would have been a blind alley. Why should machines think the way people do?
  23. You can bijectively map the natural numbers to your Boltzmann line (B-line) as follows:

0 <-> X
1 <-> 0
2 <-> 1
3 <-> 2
...

It's perfectly clear that you can do that, since your B-line and the naturals have the same cardinality. What you can NOT do is map them bijectively in an order-preserving manner. Why is that? Because they have a different order type. The natural numbers have no largest element, while the B-line does have a largest element, namely X. So: There is a bijection, but not an order-preserving bijection, between the points of the B-line and the points of the natural numbers. The two sets are cardinally equivalent, but not ordinally equivalent.
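In a formula (my notation): send natural number 0 to X, and every other natural n to n - 1. A one-line sketch:

[code]
def f(n: int):
    """Bijection from the naturals onto the B-line: 0 -> X, n -> n - 1."""
    return "X" if n == 0 else n - 1
[/code]

It's a bijection, but it scrambles the order: 0, the smallest natural, lands on X, the largest point of the B-line.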
  24. I hope that answers @Eise's question about why I didn't bother to respond to you. Your intent was perfectly clear.

First, the thread title was "How does ChatGpt work?" I gave a pretty decent answer at the level at which the question was asked, which you did appreciate. I did of course realize as I was writing my initial post that it was subject to exactly the objections you raised. But my intention was not to argue the theory of consciousness. It's not what the thread is about. So I hope you'll forgive me if at some point soon I bail on this convo. I've already said my piece many times over, and we're not going to solve "the hard problem" here.

Of course we can't answer the question of whether an AI based on digital switching technology could be conscious, any more than we can refute the panpsychists who claim that a rock is conscious. My point is that digital switching systems are so radically different from biological systems (as far as we know, ok?) that the belief in AI minds is naive and superficial and IMO silly. But if you want me to add, "I could be wrong," consider it added. Rocks could be conscious too. After all, if an atom's not conscious and a human is, where's the cutoff line for how many atoms it takes? Maybe each atom has a tiny little bit of consciousness. Panpsychism has its appeal.

I agree that Penrose's idea does not have much support. As Einstein said when he was told that a hundred physicists signed a letter saying his theory of relativity was wrong, "If I'm wrong, one would be enough."

The important point is that the claim of AI mind is the claim that mind is computational; that is, that it's a Turing machine. And since the amount of stuff in a brain is finite, a mind must then be a finite-state automaton. I personally don't find that idea compelling at all. I find its negation far more compelling. I invoked Penrose to show that at least one smart person agrees.

Not a bad question. Panpsychism again. So why do you keep avoiding the elevator question? What do you think? Is an elevator conscious? Even a little bit? Yes or no?

Ahhhh, I've got you now! You are retreating to complexity, and backing off from computability. I'm sure you know (or I hope you know) that the sheer amount or speed of a computation does not affect whether it's computable. We can execute the Euclidean algorithm to find the greatest common divisor of two integers using pencil and paper, or the biggest supercomputer in the world, and the computation is exactly the same (see the gcd sketch at the end of this post). Ignoring time and resource constraints, there is nothing a supercomputer can compute that pencil and paper can't. This is fundamental to the notion of computation. Only the algorithm matters, and not the size of the memory or the speed of the processing. Complexity, on the other hand, is the theory of how efficient a computation is. Two equivalent computations could have very different complexity. That's the business about polynomial versus exponential time, for example. I hope you have some familiarity with this idea. The difference between computability and complexity is important.

Now when you say that a pile of switches could be conscious if only there were enough of them, or if they could only go fast enough, you are conceding my argument. You are admitting that consciousness is not a matter of computability, but rather of complexity. If mind depends on the amount of circuits or the speed of processing, then by definition it is NOT COMPUTATIONAL. Do you follow this point?
Supercomputers and pencil and paper executing the same algorithm are completely equivalent computationally, but not in terms of complexity. So if speed and amount of resources make a difference, it's not a computability problem. You just admitted that mind is not computational.

I used Searle to refute your claim that these ideas were "premature." Since the ideas are at least forty years old, they cannot be premature. Of course I know the Chinese room argument has given rise to forty years of impassioned debate. Which is exactly my point! How can it be "premature" if philosophers and cognitive scientists have been arguing it for forty years?

You're wrong. I said "for sake of argument" to indicate that I'm agnostic on the issue and did not feel a need to take a stand one way or the other, in order to make my point about the (IMO) incomputability of mind.

In that respect it's no different from any other transformative technology. Fire lets us cook food and also commit arson. The printing press informs people of the truth and helps others broadcast lies. So it will be with AI. Socially transformative, but not existential. It will be used for good, it will be used for evil. It will change society but it will not destroy it, any more than fire or the printing press or the Internet did. In my opinion, of course.
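Since I leaned on the Euclidean algorithm above, here's a sketch of it (the standard version; nothing here depends on the hardware):

[code]
def gcd(a: int, b: int) -> int:
    """Greatest common divisor by the Euclidean algorithm."""
    while b:
        a, b = b, a % b
    return a

# The same steps work with pencil and paper or a supercomputer;
# only the speed differs, which is a complexity matter, not a
# computability one.
print(gcd(1071, 462))  # 21
[/code]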
  25. You've discovered the ordinal numbers! The first transfinite ordinal is called [math]\omega[/math], the lower-case Greek letter omega. It's a number that "comes after" all the finite natural numbers. In set theory it's exactly the same set as [math]\aleph_0[/math] but considered as an ordinal (representing order) rather than a cardinal (representing quantity). So the ordinal number line begins: 0, 1, 2, 3, 4, ... [math]\omega[/math], ... Now the point is, there is no "last" natural number [math]n[/math] that "reaches" or "is right before" [math]\omega[/math]. It doesn't work that way. If you are at [math]\omega[/math] and you take a step backwards, you will land on some finite natural number. But there are still infinitely many other natural numbers to the right of the one you landed on. You can jump back from [math]\omega[/math] to some finite natural number (which still has infinitely many natural numbers after it), but you can't jump forward a single step to get back to [math]\omega[/math]. That's just how it works. There's even a technical condition that lets us recognize why [math]\omega[/math] is special. A successor ordinal is an ordinal that has an immediate predecessor. All the finite natural numbers except 0 are successor ordinals. A limit ordinal is an ordinal that has no immediate predecessor. [math]\omega[/math] is a limit ordinal. That is, there is no other ordinal whose successor is [math]\omega[/math]. Note also that by this definition, 0 is also a limit ordinal. It's the only finite limit ordinal.