Everything posted by wtf

  1. That's technical analysis. It's bullshit. The stock doesn't know what it did yesterday any more than a coin knows what it did in the last 50 flips. However, it's still rational to know a little about it, because so many other players believe in it! You need to know what the chart watchers are thinking in order to make your own moves. It's rational to pay attention to the irrational beliefs of others. Strange but true.
  2. And the converse question. The Flying Spaghetti Monster created you five minutes ago, with all your memories as they are. Does it matter that you've never had the experiences you thought you had, as long as you have the memories?
  3. There's a test called the Winograd schema, named after Terry Winograd, an early computer scientist. https://en.wikipedia.org/wiki/Winograd_schema_challenge Winograd's original example was: The city councilmen refused the demonstrators a permit because they [feared/advocated] violence. A human knows that "they" refers to the councilmen if the verb is feared, and to the demonstrators if the verb is advocated. An AI presumably would have a hard time with that, because you need to know what the words mean. Winograd worked in the pioneering days of AI; I have no idea if the current generation of machine learning algorithms can figure these kinds of things out. However, the following page refers to a Winograd schema challenge from 2016, and there's a link on that page to a report on Winograd schemas issued in 2020. So evidently these are still considered an interesting challenge for AIs. This page has a lot of other relevant links. https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html Computers have no ideas about anything; they consist only of switches that can each be in one of two states, 1 or 0. The meaning is supplied by the humans who program and interpret the bit patterns. Just as we can program a computer to do finite arithmetic, we can program a computer to do transfinite arithmetic. For example, given two cardinal numbers [math]\aleph_\alpha[/math] and [math]\aleph_\beta[/math], we have [math]\aleph_\alpha + \aleph_\beta = \aleph_\alpha \cdot \aleph_\beta = \max(\aleph_\alpha, \aleph_\beta)[/math] In other words, cardinal addition and multiplication are ridiculously simple; the sum and product of two infinite cardinals is just the larger of the two. And cardinal subtraction and division aren't defined. Computers can also represent many of the countable ordinals, namely all the ordinals below the Church-Kleene ordinal, which is the first non-computable ordinal.
The computer wouldn't have any idea that it's computing with transfinite numbers, any more than it knows that it's adding 1 + 1. It's just flipping bits according to instructions. Computers don't have ideas. That's critical to understand in all discussions of machine intelligence. Computers flip bits. Even the latest deep neural net is just flipping bits. You can see that by remembering that the AI programs run on conventional hardware. Their code is ultimately reducible to bit flipping; and, as Turing pointed out, to making marks with a read/write head on a long paper tape. You could in principle execute AI code using pencil and paper. That doesn't mean a computer can't be conscious or self-aware. Some people (not me) believe that the human brain operates by bit flipping. I think that's wrong, but some people do believe it. But whatever it is that AI's do, they do not do anything different than what regular vanilla computer programs do. They're organized cleverly for data mining, but they are not doing anything computationally new.
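To illustrate how trivially a computer can "do" cardinal arithmetic without any idea of what it means, here's a toy sketch of the absorption rule above (my own construction, with the aleph index restricted to natural numbers for simplicity):

```python
from functools import total_ordering

# Toy model: Aleph(i) stands for aleph_i, with i a natural number.
# Cardinal addition and multiplication both reduce to max().
@total_ordering
class Aleph:
    def __init__(self, index):
        self.index = index

    def __eq__(self, other):
        return self.index == other.index

    def __lt__(self, other):
        return self.index < other.index

    def __add__(self, other):
        return max(self, other)   # absorption: aleph_a + aleph_b = max of the two

    def __mul__(self, other):
        return max(self, other)   # same for multiplication

    def __repr__(self):
        return f"aleph_{self.index}"

print(Aleph(0) + Aleph(3))   # aleph_3
print(Aleph(5) * Aleph(2))   # aleph_5
```

Of course the program is just comparing integers; the transfinite meaning lives entirely in our interpretation, which is exactly the point.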
  4. I am happy to find some agreement on an online forum. That's so rare! The original post was all about legal issues and patent decisions and so forth. It's perhaps unclear to some that courts and legal and political processes are not subject to philosophy or cognitive science or computer science or even rationality. The law is whatever some court says it is. If the OP wanted to ask, "Can a machine be sentient?" or some variation on that theme, that's what they should have asked. Point being that tools are tools, and just because a washing machine has a chip does not mean that it's sentient. Ok this is a common point made by the proponents of machine learning (ML) and neural network (NN) approaches to weak AI. Weak meaning specialized for a purpose, chess or driving or whatever, and not the mythical general or human-like AI which we are no closer to now than we were in the 1960's when the AI hype train got started. Let me make some brief points. The algorithms don't come from human programmers? Well that's what 4GL languages have been doing since the 1970's. The classic example is relational databases. You ask the computer to "Show me the names of yellow fruits," and it outputs, "Lemons, bananas, papayas." You did not tell it how to traverse the database. In fact you gave it the "what" and the database system itself figured out the "how," based on optimizing the query. Programs that write algorithms are very common and nothing at all new or revolutionary. It's true that the NN systems assign weights to nodes, and get "trained" on sample data and so forth. I know how this works. But it's not true that the algorithms are not created by humans. In fact they are. We could in principle trace the algorithms using pencil and paper. In fact the ML students find out that no matter how gee-whiz the field first sounds, what they actually end up doing is learning to optimize matrix multiplications. ML and NN's are algorithms created by humans. 
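The "what, not how" character of a 4GL query is easy to demonstrate; the table and rows below are made up for illustration, but the SQL itself is standard:

```python
import sqlite3

# An in-memory database with a hypothetical fruits table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fruits (name TEXT, color TEXT)")
conn.executemany(
    "INSERT INTO fruits VALUES (?, ?)",
    [("lemon", "yellow"), ("banana", "yellow"),
     ("papaya", "yellow"), ("cherry", "red")],
)

# Declarative: we state WHAT we want; the query planner decides HOW
# to scan or index the table. No traversal algorithm appears here.
rows = conn.execute(
    "SELECT name FROM fruits WHERE color = 'yellow' ORDER BY name"
).fetchall()
print([name for (name,) in rows])   # ['banana', 'lemon', 'papaya']
```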
It may be true that "we can't easily tell what the program is doing." But in fact that's true of every nontrivial program anyone ever wrote. You write a fifty-line program and it doesn't do what you think you told it to do, and you spend the rest of the day figuring out why. Or some maintenance programmer gets hired by a bank and has to dive into the hundreds of millions of lines of code that date back to the 1960's. You try to fix things and not break other things. Very few actual professional programmers have the faintest idea how the systems they maintain actually work. Most working programmers work with APIs and inside frameworks or application servers and have no idea even of the local operating system facilities, let alone the system and hardware layers. The average working programmer has very little idea of how their programs are working, especially in the modern age. I could belabor this point but I hope you catch the sense of my response. The most advanced "deep" neural net in the world is a conventional program running on conventional hardware. The nodes and paths are data structures; you train the network on sample data (the "corpus" as they call it), and it cranks out impressive-looking feats of weak AI. But it's in principle a physical implementation of a Turing machine. You could run the code with pencil and paper. It runs on standard, conventional hardware. And frankly a prosaic, everyday system like the global supply chain is more complicated than a chess engine. Nobody knows how the global supply chain works. It's used for grubby commerce so nobody puts it on a pedestal, but it's probably a billion or more lines of code distributed over thousands of individual enterprises. Nobody has any idea how it works. But to your main point: Even an optimizing compiler outputs algorithms that the human didn't write. It improves them, sometimes in creative ways. Programs have been writing other programs for a very long time.
We don't know how minds work. Show me the part of the brain responsible for your self-awareness and subjective sense impressions. You can't do that, because nobody in the world can. And we surely don't have bits and registers. The mind does NOT work anything like a digital computer. Only in the popular AI hype-sphere is this regarded as even a remotely sensible debating point. There's just no evidence for the thesis. Brains don't flip bits. And even if you push the neural network analogy, that fails dramatically. NN's do pattern matching; they haven't the common sense or awareness of the world of a newborn human. That's a claim for which there's no evidence. It's just not true. It could, if evidence for your claims showed up someday. Since our best cognitive scientists haven't the foggiest idea how the mind works, you haven't the evidence to support your claim. Rapidly developing AI? Rapidly developing weak AI, yes. Playing chess and driving cars. Strong, or general, AI? Hasn't made a lick of progress since the hype train got started in the 1960's. The question is about what some court might say. Here's one case that comes to mind. In 2017 Saudi Arabia granted citizenship to a robot. This decision was not made by philosophers or computer scientists or, frankly, any kind of deep thinkers. It was decided by politicians. If politicians pass a law decreeing that a washing machine with a chip is sentient, then it's legally sentient. It's not actually sentient, but in legal matters, it's the law that counts and not rationality. "The law is an ass," is an old saying. If the original question was, "Can a machine be sentient?" that would be something we could discuss. But if the question is, "Can a court rule a machine is sentient?" then of course it can. Courts of law do all kinds of crazy things. Well, I'll say no on general principles. Since we have no idea what makes sentient consciousness, it's doubtful we could create one.
We don't know enough about the mind, by a long shot. Can a computer be capable of "conceptualizing new ideas on its own?" No. At best it can do clever datamining. A classic case was a sophisticated AI that learned to distinguish wolves from huskies, something that's difficult for humans. It had a terrific success rate. Then they realized that the training data always showed huskies against a snowy background. What the program was actually doing was recognizing snow. https://innovation.uci.edu/2017/08/husky-or-wolf-using-a-black-box-learning-model-to-avoid-adoption-errors/ Another recent datapoint was when Facebook labelled the Declaration of Independence as hate speech. https://www.cnn.com/2018/07/05/politics/facebook-post-hate-speech-delete-declaration-of-independence-mistake/index.html With real-life examples like those, you cannot make a case that programs are doing anything even remotely like what humans do in terms of creativity and awareness of the world. Weak AI systems do sophisticated pattern matching on their training data, nothing more. You can easily find dozens if not hundreds of similar examples. Here's one: an AI fooled by changing a single pixel. https://www.bbc.com/news/technology-41845878 The hype around AI is really off the charts. It's easier to put into perspective if you know the history of the AI hype cycle. This article traces it back to the 1950's. True AI sentience has ALWAYS been "right around the corner." https://www.kdnuggets.com/2018/02/birth-ai-first-hype-cycle.html Regardless of whether the particular system you're asking about can be creative, I hope you can see that the inventor is SUING someone in a court of law. Even if they were to win their lawsuit, that would not make their machine creative. It would only mean they got some court to make a ruling, like the Indiana legislature that once (almost) decreed that the value of pi was 3.2.
The only reason the bill didn't pass is that there just happened to be a mathematician present in the legislature that day. https://en.wikipedia.org/wiki/Indiana_Pi_Bill
  5. You are asking a legal question, not a metaphysical one. If some court rules a washing machine a person, then legally a washing machine is a person. Modern washing machines have chips and "make decisions." That's a lot different than asking if an "AI" in quotes, since there is no true AI, can be sentient or creative or can invent things. We already have programs that write other programs, for example compilers and 4GL languages. Doesn't mean anything. The program in question runs on conventional hardware and is in principle no different than any other conventional program. Machine learning and neural net approaches are a clever way to organize datamining, but all they can do is identify patterns based on what's already happened. By definition they can't create. But again, those are arguments for the metaphysical position that contemporary computers are not sentient/creative/self-aware/whatever. You are asking a legal question though, and the answer to that is, "Whatever a court decides is the law." Are you suggesting that my hammer owns the chair I built with it? Such a view is nonsensical. Does your computer own the words you typed into this forum? A program doesn't invent anything, it just flips bits according to an algorithm. An algorithm written by a human. A computer program is a tool.
  6. I found a picture of the mapping from (0,1] to (0,1). Instead of using the inverse powers of 2, it uses the sequence 1/2, 1/3, 1/4, 1/5, ... See how this works? The 1 at the right end of (0,1] gets mapped to 1/2. Then 1/2 gets mapped to 1/3; 1/3 gets mapped to 1/4, and so forth. The mapping is reversible so this is a bijection.
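The sliding map is easy to write down explicitly; here's a sketch using exact rationals (the function name is mine):

```python
from fractions import Fraction

def g(x):
    """Bijection (0,1] -> (0,1): send 1 -> 1/2 -> 1/3 -> 1/4 -> ...,
    and fix every point that is not of the form 1/n."""
    if x.numerator == 1:                      # x = 1/n for a positive integer n
        return Fraction(1, x.denominator + 1)
    return x

print(g(Fraction(1, 1)))   # 1/2
print(g(Fraction(1, 2)))   # 1/3
print(g(Fraction(2, 3)))   # 2/3  (a fixed point)
```

Irrational inputs can't be represented exactly here, but that's harmless: every point not of the form 1/n is a fixed point of the map anyway.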
  7. Walk through it step by step. Convince yourself that the mapping is: * Injective: Different inputs go to different outputs; and * Surjective: Everything in the target set gets hit. By definition, the mapping is a bijection. Stepping through the details will make the idea clear. To see exactly how it works, I suggest using the method I showed at the end (instead of my solution). Map 0 to 1/2, map 1/2 to 1/4, 1/4 to 1/8, etc. What happens is that 0 gets absorbed and all the powers of 2 get slid down one space, mapping [0,1) to (0,1).
  8. 1/x is a bijection from (0,1) to (1, infinity). There's a bijection from [0,1) to (0,1). How do we do this? The rationals in (0,1) are countable so they can be enumerated as [math]r_1, r_2, r_3, \dots[/math]. Now you define a bijection [math]f: [0,1) \to (0,1)[/math] as follows: If [math]x[/math] is irrational, then [math]f(x) = x[/math]. That is, [math]f[/math] leaves all the irrationals unchanged. [math]f(0) = r_1[/math]. And [math]f(r_n) = r_{n+1}[/math]. In other words [math]f(r_1) = r_2, f(r_2) = r_3[/math], and so forth. Now you can see that every rational in (0,1) is in the range of [math]f[/math], and [math]f[/math] is reversible; that is, it's both injective and surjective. So it's a bijection. Graphically, we've mapped the rationals like this: [math]0 \to r_1 \to r_2 \to r_3 \to r_4 \to \dots[/math], in effect sliding 0 into the enumerated rationals in (0,1), which do NOT include 0. This trick amounts to embedding Hilbert's hotel in the unit interval. Initially the hotel is full, with guest 1 in room 1, guest 2 in room 2, etc. Now an "extra guest" named 0 shows up. We move guest 1 to room 2, guest 2 to room 3, and so forth, leaving room 1 empty. Then we move the extra guest 0 into room 1. In effect we've made room for the extra point at 0 by sliding everyone else up one room. To biject [0,1] to (0,1) we just do the same trick twice. Map 0 to [math]r_1[/math], map 1 to [math]r_2[/math], map [math]r_n[/math] to [math]r_{n+2}[/math], and leave the irrationals alone. Then to map [0,1) or [0,1] to (1, infinity), just compose the bijections. First map [0,1) or [0,1] to (0,1), then map (0,1) to (1, infinity). There are some other clever solutions in these Stackexchange threads. 
https://math.stackexchange.com/questions/28568/bijection-between-an-open-and-a-closed-interval https://math.stackexchange.com/questions/213391/how-to-construct-a-bijection-from-0-1-to-0-1 The checked answer by Asaf Karagila in the first link gives a completely explicit solution by sliding along 1/2, 1/4, 1/8, etc., without the need for a mysterious enumeration. That is, to map from [0,1) to (0,1) you map 0 to 1/2, 1/2 to 1/4, etc., and leave everything else alone. To map [0,1] to (0,1) you map 0 to 1/2, 1 to 1/4, 1/2 to 1/8, 1/4 to 1/16, etc. I like that one, clean and simple.
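That last construction is concrete enough to code directly; a sketch on exact rationals (the helper name is mine):

```python
from fractions import Fraction

def f(x):
    """Bijection [0,1] -> (0,1): 0 -> 1/2, 1 -> 1/4, and each
    1/2^n -> 1/2^(n+2); every other point is fixed."""
    if x == 0:
        return Fraction(1, 2)
    if x == 1:
        return Fraction(1, 4)
    d = x.denominator
    if x.numerator == 1 and d & (d - 1) == 0:   # x = 1/2^n for some n >= 1
        return Fraction(1, 4 * d)
    return x

print(f(Fraction(0)))      # 1/2
print(f(Fraction(1)))      # 1/4
print(f(Fraction(1, 2)))   # 1/8
print(f(Fraction(1, 3)))   # 1/3  (fixed)
```

The two endpoints and all the points 1/2^n slide down into the powers of 2 starting at 1/8, so every power of 2 in (0,1) is hit exactly once and everything else is untouched.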
  9. I've wondered the same thing, but about the scientific rather than philosophical effects. We looked at the stars and drew pictures in the sand and worked out trigonometry and early theories of astronomy. What if everything else had been the same but the earth were covered by clouds? We'd still have gravity, but we couldn't look at the heavens. How would science have progressed? Hope this isn't too much of a thread hijack but it's a question that's long been on my mind.
  10. wtf

    prime dilemma

    Yes exactly. I suppose you'd have to subtract the two integers and see if the result is integer-divisible by the modulus. That seems like the most sensible way. Probably not the only way.
  11. wtf

    prime dilemma

    Great question. Yes they are "the same but a little different." In math, mod is an equivalence relation on the integers. We say that [math]a \equiv b \pmod n [/math] for two integers [math]a, b[/math] if it happens to be the case that [math]n | a - b[/math], where the vertical bar means "divides." So for example [math]5 \equiv 3 \pmod 2[/math] and [math]17 \equiv 5 \pmod 3[/math]. You can verify that this is an equivalence relation: It's reflexive, symmetric, and transitive. It partitions the integers into [math]n[/math] pairwise-disjoint equivalence classes. It's a fundamental concept in number theory. In programming languages, mod is a binary operator that inputs two integers and outputs a third: 5 % 3 = 2. It inputs 5 and 3, and outputs the remainder when 5 is integer-divided by 3. The math and programming concepts are closely related. [math]a \ \% \ n[/math] is the remainder when [math]a[/math] is integer-divided by [math]n[/math]; that is, the result of [math]a \ \% \ n[/math] is the smallest nonnegative element of the equivalence class of [math]a[/math] under the [math]\pmod n[/math] equivalence relation. This turns out to be the same as the remainder under integer division. The tl;dr is that in math, mod is an equivalence relation that inputs a pair of numbers and a modulus and returns True or False; whereas in programming, mod is a binary operator that inputs an integer and a modulus and outputs the smallest nonnegative member of the equivalence class of the integer mod the modulus. The difference is shown by, say, the fact that [math]17 \equiv 14 \pmod 3[/math]; but [math]17 \ \% \ 3 = 2[/math] and not [math]14[/math]. That is, even though mathematically [math]17 \equiv 14 \pmod 3[/math], in a programming language [math]17 \ \% \ 3 == 14[/math] would evaluate to False. That's an example that illustrates the difference.
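The two senses of "mod" can be demonstrated in a few lines of Python (one caveat: Python's % returns a nonnegative result for a positive modulus, while C and Java can return negative results for negative operands):

```python
def congruent(a, b, n):
    """The math sense: a ≡ b (mod n) iff n divides a - b."""
    return (a - b) % n == 0

# The relation holds or fails; it doesn't pick out a number.
print(congruent(5, 3, 2))     # True
print(congruent(17, 14, 3))   # True

# The programming sense: % picks the canonical representative,
# the smallest nonnegative member of the equivalence class.
print(17 % 3)         # 2
print(17 % 3 == 14)   # False, even though 17 ≡ 14 (mod 3)
```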
  12. wtf

    prime dilemma

    I don't think they have names, but there's a famous theorem of Fermat that says an odd prime [math]p[/math] is the sum of two squares if and only if [math]p \equiv 1 \pmod 4[/math]. https://en.wikipedia.org/wiki/Fermat's_theorem_on_sums_of_two_squares
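The theorem is easy to check numerically; here's a brute-force sketch (function names mine) that verifies it for the odd primes below 100:

```python
def is_prime(n):
    """Trial division; fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def two_squares(p):
    """Return (a, b) with a^2 + b^2 == p, or None if no such pair exists."""
    a = 0
    while a * a <= p:
        b = int((p - a * a) ** 0.5)
        if a * a + b * b == p:
            return (a, b)
        a += 1
    return None

# Fermat: an odd prime p is a sum of two squares iff p ≡ 1 (mod 4).
for p in range(3, 100, 2):
    if is_prime(p):
        assert (two_squares(p) is not None) == (p % 4 == 1)

print(two_squares(13))   # (2, 3)
print(two_squares(7))    # None
```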
  13. I think it's interesting that this question always gets asked in terms of the Internet, because that's the only complex computer system people have daily experience of. But it's far from the most complex and mysterious computer system. If any computer system were to become self-aware, my bet would be the global supply chain: the system that moves raw materials from here to component factories there to integration sites somewhere else to distribution points somewhere else, and ultimately puts a finished consumer good on the shelf at your local big box store, at a price point attractive to buyers yet high enough to ensure a profit for every single actor along the chain. The global supply chain is an immensely complicated system, far more complex than the Internet, whose architecture is generally well understood. It involves maintaining just-in-time inventories, tracking taxes and tariffs across international and local borders, integration of air, sea, and land transportation, predictions of consumer demand and raw material supply, and all the rest of it. It's a system that nobody sees but that affects literally every physical thing around us, from the furniture we sit on to the food in the fridge, and the fridge itself. It touches everything. You can turn off the Internet in your home, but not the global supply chain. If the thesis is that a sufficiently complex system can become self-aware, the global supply chain would be my candidate. Not the Internet, whose architecture is simple by comparison.
  14. Yes, the Leibniz series is known to converge extremely slowly. Fun beginning programming project. I think he was continuing the series, not any previous conversation.
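As a beginner project it's only a few lines, and the slowness is visible immediately; the error after n terms is on the order of 1/n:

```python
import math

def leibniz_pi(n_terms):
    """Approximate pi via the Leibniz series pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..."""
    total = 0.0
    for k in range(n_terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

# A thousand terms only buys about three correct digits.
print(abs(math.pi - leibniz_pi(1_000)))     # error on the order of 1e-3
print(abs(math.pi - leibniz_pi(100_000)))   # error on the order of 1e-5
```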
  15. Aha. I did help you. I am pointing out your specific error. You are not simply asking whether one set is an element of another. You are interpreting the left-hand side as a string that specifies a set. That's your error. You're equivocating between "n in N such that n is prime" as a specification, on the one hand, and as a set, on the other. That is YOUR error, not an error in logic books. The logic books DON'T make that error. If it's a string, then you have to take its Gödel number to ask if it's prime. If it's a set, you don't. But you can't have it both ways at the same time.
  16. You replied before I edited that out. I actually don't think I can help. I agree with you that [math]\{n \in \mathbb N : n \text{ is prime}\}[/math] has a Gödel number, which in fact is not prime (since it will be the product of a bunch of prime powers). Aha, I have a bit of insight. If [math]S[/math] is a statement in the language of set theory, let [math]G(S)[/math] be its Gödel number. Let [math]P[/math] be the set of primes. Then [math]G(\{n \in \mathbb N : n \text{ is prime}\}) \in P[/math] is a valid statement with a definite truth value, which in fact we can determine to be False. But [math]\{n \in \mathbb N : n \text{ is prime}\} \in P[/math] is NOT a valid statement, because it mixes up the metalanguage with the model. I think if you rewrite your argument, including the notation [math]G(S)[/math] whenever you intend to take some sentence's Gödel number, the error in your argument will be clearer. So I can restore my edit. I think I did help you. Let me know when you get the million. My contribution is worth at least a few bucks.
  17. Strings talking about sets are not sets. Well the set of primes is a set. But the string "n in N such that n is prime" is not a set! I agree that the string has a Gödel number, and you can assert that collections of Gödel numbers are sets, but I don't think you can ask if "n in N such that n is prime" is an element of the set of primes. Well yes all the statements have Gödel numbers but I don't think it works this way. I'm out of my depth on Gödel numbering here so maybe you can explain why you think it works this way. I hope you get the million dollars. I confess that I'm annoyed at myself that I can't say exactly why your argument is wrong. If only I'd worked harder studying Nagel and Newman back in the day.
  18. Looking at your paper, I see some issues. 1. What is a "set definition?" Do you mean a predicate? But predicates don't define sets, that's Russell's paradox. So what do you mean by a set definition? 2. You have the expression [math]X \notin S(X)[/math]. But [math]X[/math] is a "set definition," whatever that is, and NOT a set; and [math]S(X)[/math] is a set. So your expression makes no sense, it's neither true nor false. 3. Likewise [math]M \in S(M)[/math], has the same problem. [math]M[/math] is a set definition and not a set; and [math]S(M)[/math] is a set. Perhaps you can clarify, starting with what you mean by a "set definition." Or perhaps a set definition is a specification like [math]\{n \in \mathbb N : n \text{ is prime} \}[/math]. But in that case a set definition is a piece of syntax, a string in the formal language. It can't be an element of some set that the language is talking about. You're mixing syntax with semantics, formal strings with models. It's a plot by the publishers to sell more books. Or: Have you seen viXra lately?
  19. This popped up on my radar this morning. "Finding primes using a parabola." Looks like it might be of interest to you. https://www.cantorsparadise.com/finding-primes-using-a-parabola-a939c6e8be53 First time I clicked on it the story appeared, then after that it said I needed a paid app. Not sure how this works but perhaps you'll be able to view it.
  20. You have not demonstrated this relationship for 5, 17, and 85. The best you can do is compute the sector area for 5 and 17 and then adjust the spiral constants after the fact. You'd then have to pick different constants for every pair of numbers. And you could do exactly the same for 4, 6, 24; or 8, 10, 80. First, the primality of the first two numbers isn't needed; and second, for each pair of radial lengths you need to adjust your spiral constant after the fact. Don't you see this? What are the spiral constants for 5 and 17?
  21. It doesn't have any truth OR any falsity. You haven't stated any clear proposition that would have a definite truth value. You haven't said why radial lengths of 5 and 17 are meaningful. I could do the same exercise with lengths of 4 and 6. But I don't mean to argue with you. I learned a little about the log spiral and perhaps helped you to clarify some of your thinking, or not, as the case may be.
  22. That's exactly what I did earlier when I computed the sector area between radial segments of lengths 5 and 17. That's when you asked me why I didn't account for the angle. It's because the formula doesn't involve the angle, because given the radial lengths, you can compute the angle. That's this post: https://www.scienceforums.net/topic/124453-simple-yet-interesting/?do=findComment&comment=1178970 Oh I see what you mean. I interpreted [math]r(\varphi)^2[/math] as functional notation. [math]r(\varphi)^2[/math] is the square of the radius associated with the angle. I didn't take it as multiplication. It doesn't make any sense to square an angle. I see your point, the notation's a little ambiguous but I think they mean to input the angle to obtain the radius, then square that radius. I could be wrong. Either way it doesn't matter. There's nothing about primes or semiprimes about any of this. You pick two angles and get a pair of radial lengths and compute the sector area, but it doesn't mean anything with regard to primes or semiprimes.
  23. The primality of 5 and 17 has nothing to do with anything, nor have you reached or stated any conclusion. You take any two angles and you will have radial segments with a sector area between them as computed by the formula on Wiki. It doesn't seem to mean anything. What happened to the radial segment of length 5? What point on the spiral corresponds to a radius of length 17? If the spiral constants are [math]a = 1, k = 1[/math], the spiral is given by [math]r = e^\varphi[/math]. What value of [math]\varphi[/math] gives a radius of 17? [math]\varphi = \log 17[/math], right? That's the angle, in radians. It comes out to around 162 degrees. [math]\log 17 \approx 2.83[/math], and 2.83 radians is around 162 degrees. That's the angle it would make with the line from the origin to (1,0), the point you get if you plug in [math]\varphi = 0[/math]. How are you making your calculation?
  24. It's the sector area formula from wiki. https://en.wikipedia.org/wiki/Logarithmic_spiral#Properties
  25. 17 is supposed to be a radial length. What is 0 here? Can you clarify? What you wrote doesn't seem to make sense in the context of the problem. Did you mean the angle between the radial lengths of 5 and 17? Well, if the spiral is [math]r = e^\varphi[/math], then the angles are [math]\log 5[/math] and [math]\log 17[/math], respectively, [math]\log[/math] meaning natural log. Is that what you're doing? From the sector area formula on Wiki, the sector area between the radial segments of length 5 and 17 (at angles log 5 and log 17) is (17^2 - 5^2)/4 = 264/4 = 66. That's with spiral constants a = 1 and k = 1. You could adjust k to make the area come out to 85, but I'm not sure what point is being made. Have you looked at the Wiki page on the log spiral? Your log spiral changed into a parabola? As Galois said as he was frantically writing up his mathematical discoveries the night before he was to die in a duel, "I have no time. I have no time."
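For the record, here's the arithmetic from this exchange, using the sector area formula [math]A = (r_2^2 - r_1^2)/(4k)[/math] from the Wiki page, with a = 1 and k = 1:

```python
import math

def sector_area(r1, r2, k=1.0):
    """Area swept between radii r1 and r2 on the spiral r = a*e^(k*phi),
    using the Wikipedia formula A = (r2^2 - r1^2) / (4k)."""
    return (r2 ** 2 - r1 ** 2) / (4 * k)

# Radii 5 and 17 occur at angles ln 5 and ln 17 (radians) when a = k = 1.
print(math.log(17))          # about 2.83 radians, roughly 162 degrees
print(sector_area(5, 17))    # (289 - 25)/4 = 66.0

# Nothing here depends on primality; any two radii work the same way:
print(sector_area(4, 6))     # (36 - 16)/4 = 5.0
```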