Everything posted by wtf

  1. Not at all. I acknowledge letting my enthusiasm exceed my concentration, discipline, and ability to make this a priority. Your expectations were reasonable and your disappointment perfectly understandable. I simply want to get us both out of that loop. No expectations. I do want to get back to your posts. I'm stuck on one piece of notation as I'll get to. I seek only to go slowly. I've always been interested in this material. Just no expectations about my pace, which might be arbitrarily small but not zero. What struck me was the depth of the detailed examples and calculations he did. He has fifty pages of explicit calculations of various manifolds before he defines the differentiable structure. So you and I are flying way way above the territory. The good news is I can understand things in there. The bad news is I will never have the storehouse of examples he gives. I'm not sure where you're supposed to learn those. Writing clear exposition is challenging for everyone. I was thinking some of the engineering-oriented people here probably know. Studiot perhaps. I can definitely imagine associating a tensor at each point of a manifold, by analogy with a vector field. I know what vector fields are. Fine by me. I'm really curious about the [math]\frac{df}{da}[/math] notation. That's the exact spot I got stuck. Is that a typo or a feature? Would you write [math]\frac{df}{d\pi}[/math]? ps -- I am finding this article most enlightening. https://en.wikipedia.org/wiki/Covariance_and_contravariance_of_vectors. I'm going to work through it. It explains why some components of the tensor are vectors and others covectors. It depends on which way the coordinates of a vector transform under a change of basis. Contravariant and covariant. This is a big piece of the puzzle.
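An aside on that contravariant/covariant bookkeeping -- it can be sketched numerically. Everything below (the diagonal basis change, the particular component values) is made up for illustration; the point is only that vector components transform one way, covector components the other way, and the pairing between them comes out basis-independent:

```python
# Sketch of contravariant vs covariant transformation under a change of basis.
# Assumption: a simple diagonal basis change in R^2 (each new basis vector is
# a scaled copy of the old one), which keeps the inverse easy to see.

def vector_components_new(v_old, scale):
    # Vector components transform with the INVERSE of the basis change:
    # if the basis vectors get longer, the components get smaller (contravariant).
    return [v_old[i] / scale[i] for i in range(2)]

def covector_components_new(w_old, scale):
    # Covector components transform WITH the basis change (covariant).
    return [w_old[i] * scale[i] for i in range(2)]

def pairing(w, v):
    # A covector eats a vector and returns a number.
    return sum(w[i] * v[i] for i in range(2))

scale = [2.0, 3.0]   # new basis: b1 = 2*e1, b2 = 3*e2 (made-up example)
v_old = [4.0, 6.0]   # a vector's components in the old basis
w_old = [5.0, 7.0]   # a covector's components in the old basis

v_new = vector_components_new(v_old, scale)    # [2.0, 2.0]
w_new = covector_components_new(w_old, scale)  # [10.0, 21.0]

# The scalar w(v) is basis-independent; that is exactly what the
# contravariant/covariant bookkeeping is for.
print(pairing(w_old, v_old), pairing(w_new, v_new))  # 62.0 62.0
```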
  2. I take this to heart and plead guilty. As my philosophy prof once said: The spirit is willing but the flesh is weak. I have the math skills but my interest is drifting. The good news is that your posts enabled me to read parts of Spivak (*) and reading Spivak enabled me to understand parts of your posts. Learning has taken place and this has been valuable. You've moved me from point A to point B and I am appreciative. I have not given up. I'm going far more slowly than I thought I would. I'll post specific questions if I have any. For the record you have no obligation to post anything. I regret encouraging any expectations that have led to disappointment. No one is more disappointed than me. (*) Michael Spivak, A Comprehensive Introduction to Differential Geometry, Volume I, Third Edition. PDF here. Now, all that said ... I have four specific comments, all peripheral to the main line of your exposition. Regarding the main line of your exposition, I pretty much understand all of it, but not well enough to turn it around and say something meaningful in response. The concepts are in my head but can't yet get back out. You should not be discouraged by that. Your words are making a difference. Question 1) Definition of differentiable structure on [math]U[/math] You wrote: First I stipulate that this issue is unimportant and if we never reach agreement on it, I'm fine with that. However this remark was in response to my pointing out that you need the map [math]f \varphi^{-1} : \varphi(U) \to \mathbb R[/math] to be differentiable in order to define the differentiability of [math]f[/math]. It's the only possible thing that can make sense. And yes of course by [math]f \varphi^{-1}[/math] I mean [math]f \circ \varphi^{-1}[/math], sorry if that wasn't clear earlier. For whatever reason you seem to have forgotten this. It's true that we think of [math]U[/math] as having a differentiable structure. But we have to define it as I've indicated. I verified this in Spivak. 
Homeomorphism can't be enough because there's no differentiability on an arbitrary manifold till we induce it. Your not agreeing with this puzzles me. And your specific response about differentiable implying continuous doesn't apply to that at all. As I say no matter on this issue but wanted to register my puzzlement. * Question 2) Definition of [math]C^1[/math] In response to this issue you wrote: I apologize but you are still not clear. What does subsume mean? You can't mean subset, because the inclusions go the other way. If a function is [math]n[/math]-times continuously differentiable then it's certainly [math]n-1[/math]-times. So [math]C^n \subset C^{n-1}[/math]. So subsume doesn't mean subset. Of course it does mean that a [math]C^n[/math] function is continuous. Differentiable functions are continuous, we all agree on that (is this what you were saying earlier?) So you are saying that a [math]C^n[/math] function must be continuous. Agreed, of course. That's "subsumed." However this seems to be missing the point. The point is that there exists a differentiable function whose derivative is not continuous. Therefore it's not good enough to say that [math]C^1[/math] is all the differentiable functions. It's all the differentiable functions whose derivative is continuous. There's no way I can fit "subsumes" into this. Again like I say, trivial point, not important, we can move on. But I wanted to be as clear as I could about my own understanding, since like any beginner I must be picky. * Question 3) The notation [math]\frac{df}{da}[/math] Earlier you wrote: I have never seen this notation. [math]a[/math] is a constant. I asked about this earlier and did not understand your response. If [math]a = \pi[/math] would you write [math]\frac{df}{d\pi}[/math]? I would write [math]\frac{df(a)}{dx}[/math] or [math]\frac{df}{dx}(a)[/math], which you seem to think are radically different. Or even [math]\frac{df}{dx} \bigg\rvert_{x = a}[/math]. 
I'm confused on this minor point of notation. * Question 4) The real thing I want to know After glancing through Spivak I realized that I am never going to know much about differential geometry. Perhaps looking at Spivak was a mistake. I'm trying to refocus my search for the clue or explanation "like I'm 5" that will relate tensors in engineering, differential geometry, and relativity, to what I know about the tensor product of modules over a commutative ring in abstract algebra. What I seek, which perhaps may not be possible, is the 21 words or less -- or these days, 140 characters or less -- explanations of: - How a tensor describes the stresses on a bolt on a bridge; and - How a tensor describes the gravitational forces on a photon passing a massive body; and - Why some components of these tensors are vectors in a vector space; and why others are covectors (aka functionals) in the dual space. And I want this short and sweet so that I can understand it. Like I say, maybe an impossible dream. No royal road to tensors. Ok that is everything I know tonight.
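For the record, the standard counterexample behind Question 2 -- a function that is differentiable everywhere but not [math]C^1[/math] -- is:

```latex
f(x) =
\begin{cases}
x^2 \sin(1/x) & x \neq 0 \\
0 & x = 0
\end{cases}
\qquad
f'(x) =
\begin{cases}
2x \sin(1/x) - \cos(1/x) & x \neq 0 \\
0 & x = 0
\end{cases}
```

At [math]x = 0[/math] the difference quotient is [math]h \sin(1/h) \to 0[/math], so [math]f'(0)[/math] exists; but the [math]\cos(1/x)[/math] term makes [math]f'(x)[/math] oscillate between roughly [math]-1[/math] and [math]1[/math] as [math]x \to 0[/math], so [math]f'[/math] is not continuous at [math]0[/math] and [math]f \notin C^1[/math].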
  3. "At any point in time the state of the database will be unknown." So is randomness inherent in the thing? Or is randomness just a description of our ignorance? If you flip a coin and it lands on the ground, but you haven't looked at it yet, what are the odds it's heads? 50-50, right? Because probability does not tell us about the coin. It tells us about our state of ignorance.
  4. No lack of interest. I'm working through your posts. I've been busy with other things and you're four posts ahead of me now but I intend to catch up. However you're wrong about differentiability. If I map the graph of the Weierstrass function to the reals by vertical projection, I have a homeomorphism but no possible differentiable structure on the graph because the graph has no derivative at any point. I'll get busy on my next post (which I've drafted but not yet cleaned up) and elaborate on this point. https://en.wikipedia.org/wiki/Weierstrass_function Well never mind I'll just put this bit up here. Now the point is that if the map [math]f \varphi^{-1} : \mathbb R^n \to \mathbb R[/math] happens to be differentiable (or smooth, etc.) then we say that [math]f[/math] is differentiable. Also we need the transition maps to be smooth as well. We talked about them a while back. You can confirm all this in volume one of Spivak's DiffGeo book. I'll add that working through your posts has enabled me to make sense of parts of Spivak; and reading parts of Spivak has enabled me to make sense of your posts. So I am making progress and finding this valuable. You need to define differentiability this way. Mere homeomorphism is not enough, surely you agree with this point but perhaps forgot? Plenty of continuous functions aren't differentiable. Remember that almost all continuous functions are just like the Weierstrass function, differentiable nowhere. Likewise your definition of [math]C^1[/math] is wrong, you need the function to be continuously differentiable and not just differentiable. There are functions that are differentiable but whose derivative is not continuous, and such functions are not [math]C^1[/math]. It is of course my curse in life that my ability to be picky and precise exceeds my ability to understand math, and I'm right about these two points despite being ignorant of differential geometry. 
I will see if I can focus some attention this week on catching up with your last four posts. "PS I do wish that members would not ask questions where either they are not equipped to understand the answers, or have no real interest in the subject they raise ..." Sorry was that for me? I'm paddling as fast as I can. If it's for someone else, personally I welcome any and all posts. This isn't the Royal Society and I'm sure I for one would benefit from trying to understand and respond to any questions about this material at any level.
  5. I think you are claiming two separate things, one much less believable than the other. One, that 1 + 1 = 2 in the absence of any sentient minds in the universe to be aware of this fact. That is a Platonist position, but when I pretend to completely deny it, a little voice in me sort of believes it's true. I'm 5% Platonist. On the other hand you are also claiming that some particular axiom system exists in the absence of a sentient mind. That seems to me less likely. I think we might make the fair distinction that even if 1 + 1 = 2 is a truth about the universe, the Peano axioms or the ZFC axioms of set theory are historically contingent works of man. But twice now you have said that it's the axiom system that makes 1 + 1 true. But if you think about it, the axiom system is just a human invention to symbolically model what's already true in the universe, that 1 + 1 = 2. Do you agree with me that axiom systems are contingent? And not ontologically on a par with the fact that 1 + 1 = 2? Yes well it's the same point again. There's the Platonic physics, the true laws of nature. And there's human physics, from Aristotle to Newton and Einstein. Historically contingent physics, a mere approximation to the "true" physics. And what makes you so certain there is any such thing as a true physics? Maybe it's all random and we just make the patterns up, like seeing constellations in the stars? You see you are making an assumption and you don't quite see that you are making an assumption. If you had some formal system where the symbols 2 and 3 weren't necessarily unequal, then that's how it would be. If your system didn't have a rule that made them different, then in the formal system they would not be different. Is that what you're trying to say? It doesn't seem very significant. If you define the symbols differently they mean something different. And such a symbolic system wouldn't be very useful, because we're trying to model our intuition that 2 and 3 are different. 
Now as far as WHY we have this intuition? Well a Platonist would say that it's because 2 and 3 really are different in the world. A non-Platonist would say, well I guess we just made it up. It's just as hard to be a non-Platonist as it is to be a Platonist!
  6. Where do these axioms exist? I've seen many people argue that 1 + 1 = 2 in the absence of minds. But I don't think I've ever seen anyone claim that axiomatic systems themselves exist in the absence of people. You are quite the Platonist. I am afraid I'm not that much of a believer. Different people believe different things about what's out there in some imaginary realm of nonphysical things. The natural numbers, the baby Jesus, the Flying Spaghetti Monster. That's my problem with Platonism. Once you start believing in the existence outside the mind of things that do not exist in the physical world ... where exactly do those things exist, and what else is living there? Yes if you define a play as something written down and acted out on a stage. But math in that sense doesn't exist till a professor writes it down and gets it accepted in a journal. You claim the math already existed and the play didn't. That's a matter of opinion, not fact. And there are so many levels of this philosophical confusion. Before Wiles proved Fermat's last theorem, was FLT true? Even if a Platonist says that the theorem was ALWAYS true, did Wiles's particular proof already exist? Isn't the proof actually a physical thing written on paper by a human? It's no different than one of Shakespeare's plays. "A proof by any other name ..." Of course my philosophical opinions are my own, so it's better if you ask me about the math. Everyone's got philosophical opinions. I can not exactly see what point you are trying to make. Can you state your thesis more clearly? Surely if you define the symbols differently you can make 2 and 3 mean the same thing. I don't see your core point here. As far as the complex numbers, there is no order on them compatible with their arithmetic. That's a theorem. When you jump up another level to the quaternions you lose commutativity, and when you go up to the octonions you even lose associativity. 
I'm not sure I believe there's any deep philosophical meaning to any of this. Do you think there is? Besides, even though the complex numbers lose order, they have one great improvement over the reals: they're algebraically closed. So you can't say the complex numbers are defective in any way, in fact they're much better than the reals for a lot of things.
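The claim that the quaternions lose commutativity can be checked directly. This is a hand-rolled sketch on 4-tuples (not any particular library), using the standard multiplication rule for [math]a + bi + cj + dk[/math]:

```python
# Quaternion multiplication on 4-tuples (a, b, c, d) = a + b*i + c*j + d*k,
# illustrating that quaternion multiplication is not commutative.

def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)

print(qmul(i, j))  # (0, 0, 0, 1)  = k
print(qmul(j, i))  # (0, 0, 0, -1) = -k, so i*j != j*i
```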
  7. Remember, the brand placebos work much better than the generic ones.
  8. ps -- As it happens there is a thread on math.stackexchange today, where someone asks how to prove that two natural numbers are equal if and only if there's a bijection between them as sets. You can see from the responses that this is no trivial matter! http://math.stackexchange.com/questions/2189871/let-n-m-in-mathbbn-show-that-if-there-is-a-bijection-between-1-n-a?noredirect=1#comment4505167_2189871
  9. Personally I'm not convinced about that. Did the plays of Shakespeare exist before Shakespeare wrote them? Before there were humans on earth? Why would math or logic be different? There's ancient logic and modern logic. Modal logic, paraconsistent logic, all the modern stuff people have come up with. Where did these things exist before someone created/discovered/invented them? I don't claim to know the answer. I don't think the latest Star Wars movie existed before someone wrote the script, hired the actors, and made the movie. Math and logic too. I don't know what that means. Before Giuseppe Peano wrote down the axioms for the natural numbers, the world had no such axioms. If he'd written down different ones, history would be different. Many people believe the natural numbers have some sort of existence before there were humans. I'm not so sure. It's a matter of philosophy.
  10. A proof is a logical deduction from some axioms. If you assume the Peano axioms you get a "proof in PA" as they call it, and if you assume the axioms of set theory you get a "proof in ZF". I think in the present case the most fundamental proof is the one from set theory. The Peano axioms work but how do we know there is any model of them? So if you believe in the natural numbers then you can do a proof from Peano, but if you are a skeptic then you need the axiom of infinity to provide a model of them. That's a personal opinion. Most people accept PA as having some ontological referent, the "natural numbers of our intuition" or some such. In the end it's just symbolic manipulation so you always have to start by taking some statements as given and not subject to proof. So I guess you could say that [math]2 \neq 3[/math] is "obviously" true about the world. But in formal math, it's pretty arbitrary. The only reason [math]2[/math] and [math]3[/math] are different is that we define them that way. The symbol [math]3[/math] is defined as the successor of [math]2[/math], which is defined as the successor of [math]1[/math], which is the successor of [math]0[/math]. None of it means anything at all, it's just a symbolic game. That's the downside of formalization, you lose touch with reality. A child knows what it takes a logician years to prove.
  11. Yes that works too, directly from the order properties of the natural numbers or the reals. That's yet another proof.
  12. In set theory [math]2 = \{0,1\}[/math] and [math]3 = \{0,1,2\}[/math] and these are distinct sets by the axiom of extensionality. https://en.wikipedia.org/wiki/Axiom_of_extensionality On the other hand in the Peano axioms, [math]2 \neq 3[/math] since two numbers are the same if and only if they are both successors of the same number; but [math]2 = S(1)[/math] and [math]3 = S(2)[/math]. https://en.wikipedia.org/wiki/Peano_axioms Set theory and the Peano axioms are related by the Axiom of Infinity, which says (in effect) that there is a set that's a model of the Peano axioms. https://en.wikipedia.org/wiki/Axiom_of_infinity
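The von Neumann coding just described can be played out concretely. A small sketch using frozensets, where frozenset equality stands in for extensionality (two sets are equal exactly when they have the same elements):

```python
# Von Neumann coding of the naturals as pure sets: n = {0, 1, ..., n-1}.
# frozenset equality plays the role of the axiom of extensionality here.

zero  = frozenset()
one   = frozenset({zero})
two   = frozenset({zero, one})
three = frozenset({zero, one, two})

def successor(n):
    # S(n) = n ∪ {n}
    return n | frozenset({n})

print(two != three)             # True: three contains the element `two`, two doesn't
print(successor(two) == three)  # True: 3 = S(2)
```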
  13. I don't see why not. I could explain integration and infinite series in pictures. Integration's adding up little rectangles under a curve, and series are just little building blocks on top of the number line that add up to a finite number. In fact infinite series are actually just a special case of integration, that's the trick. What are Calc II and III by the way, everyone has different defs. Are you including second year calc? Multivariable calc, DiffEq, and linear algebra? All of the concepts are simple, it's the detail work that's hard and that provides real understanding. But I don't need to study relativity to imagine a bowling ball on a rubber sheet, and even though that's not gravity, it's an analogy that's close enough for the general public. You could definitely do calculus in stories and pictures. You wouldn't be ready to be an engineer, but you'd be a more educated layman. You'd know the difference between the deficit and the debt. It's the relationship between the derivative and the integral. The way the annual deficit accumulates into the national debt is the fundamental theorem of calculus. If people understood that they'd stop letting the government bs them about the federal budget.
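The "adding up little rectangles" picture, and the deficit/debt version of the fundamental theorem, can both be sketched in a few lines. The deficit numbers below are made up purely for illustration:

```python
# "Adding up little rectangles": a left-endpoint Riemann sum for
# the integral of x^2 on [0, 1], whose exact value is 1/3.

def riemann_sum(f, a, b, n):
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(n))

approx = riemann_sum(lambda x: x * x, 0.0, 1.0, 100_000)
print(approx)  # close to 0.3333...

# The deficit/debt analogy: yearly deficits are the "derivative", the
# accumulated debt is the "integral". Summing the deficits recovers the
# change in the debt -- the discrete fundamental theorem of calculus.
deficits = [100, 150, 120, 90]   # made-up annual deficits
debt = [0]
for d in deficits:
    debt.append(debt[-1] + d)
print(debt[-1] == sum(deficits))  # True
```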
  14. Ok now that I'm going through this I'm completely confused by where all this is taking place. We don't know how to take derivatives on a manifold yet but your notation is assuming that we can. Picky refinement, my understanding is that a [math]C^1[/math] function has a continuous derivative. There are functions with derivatives that fail to be continuous on one or more (even infinitely many) points. http://math.stackexchange.com/questions/292275/discontinuous-derivative/292380#292380 More pickiness, this is a trivial point but of course you mean for all positive integer orders. Then it's no longer a function of someone's imagination. I was thinking fractional derivatives, who knows what else. Ok here is an expositional problem that confuses me. This is not pickiness, I'm genuinely confused. We've been letting [math]M[/math] stand for a manifold. But we don't know how to differentiate a function on a manifold. In fact you said that the charts are only homeomorphisms, so for all we know our manifold [math]M[/math] is so full of corners it can't be differentiated at all. In order to get past this point I have to either assume we've defined differentiability on a manifold somehow, or else that we're working in [math]\mathbb R^n[/math]. I hope you will clarify this point. Little point of notational confusion. I'd believe [math]\frac{df}{dx}\biggr\rvert_{x=a}[/math] or [math]\frac{df}{dx}(a)[/math] but I'm not sure about your notation. Is that a typo or a standard notation? Yes. Now you see I have the [math]M[/math] problem in spades. I see you talking about tangent vectors to a point on a manifold but I have no idea how to define differentiability on a manifold. Rather than look it up I thought I'd just ask. Of course if we're in [math]\mathbb R^n[/math] this is clear. This is still an interesting point of view even if I imagine that we are talking about Euclidean space and not manifolds. We're fixing a point and letting the functions vary. 
If we are in single-variable calculus, we can let [math]x = 1[/math] for example, and then [math]\frac{df}{dx}(1) : C^\infty_1 \to \mathbb R[/math] is a function that inputs [math]x^2[/math] and outputs [math]2[/math], inputs [math]x^3[/math] and outputs [math]3[/math], inputs [math]e^x[/math] and outputs [math]e[/math], and so forth. You see I'm still bothered by your notation. Did you really want me to write [math]\frac{df}{d1}[/math] as you indicated earlier? I have a hard time believing that but I'll wait for your verdict. It's clear to me that by the linearity of the derivative, [math]\frac{df}{dx}(1)[/math] is a linear functional on [math]C^\infty_1[/math]. But the domain is the real numbers, not some arbitrary one-dimensional manifold that I don't know how to take derivatives on. For one thing don't we need an algebraic and metric structure of some sort so that we can add and subtract vectors and take limits? So I do sort of see where you're going with this. But I'm totally confused about how we lift the differentiable structure of [math]\mathbb R^n[/math] to [math]M[/math]. Ok I believe this. Or maybe not. First, you are using those set brackets again and I do not for the life of me see how that can make any sense. There's no order to sets so how do you know which coordinate function goes with which coordinate? Secondly of course there is the manifold problem again, I don't know how to define a differentiable function on a manifold. Now if I forget manifolds and pretend we're in [math]\mathbb R^n[/math] then I suppose we could define the functional [math]v=\frac{\partial}{\partial x^1}(m)f + \frac{\partial}{\partial x^2}(m)f + \cdots + \frac{\partial}{\partial x^n}(m)f[/math]. I would almost believe this notation as I have written it. This particular functional is defined at the point [math]m[/math]. However I see that you've left that part out and you're defining this functional for all points? But then it's not defined correctly. 
I don't know what is the input to the functional. Can you clarify please? Well like I say it's more or less clear what you're thinking but I'm lost on the points I've indicated. ps -- Ah ... slight glimmer ... since [math]m[/math] itself has coordinates, we can break up the partials as acting on each coordinate separately, and we'll end up with some Kronecker-fu leading to the rest of your exposition. Is that the right intuition? I'll push on. (Later edit) ... I can see a way to define differentiability. If [math]M[/math] is a manifold and [math]U \subset M[/math] is an open set, and if [math]\varphi : U \to \mathbb R^n[/math] is a chart, and [math]f : U \to \mathbb R[/math] is a function, then we would naturally look at [math]f \varphi^{-1} : \varphi(U) \to \mathbb R[/math]. If [math]f \varphi^{-1}[/math] is smooth then (since [math]\varphi(U) \subset \mathbb R^n[/math]) we can take the partials with respect to the coordinate functions and then I think the rest of your notation works. Is that right?
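That "(Later edit)" definition can be checked numerically on the simplest example. Take the circle [math]S^1[/math] with the angle chart, so [math]\varphi^{-1}(\theta) = (\cos\theta, \sin\theta)[/math], and let [math]f(x,y) = y[/math] be the height function. Then [math]f \circ \varphi^{-1}(\theta) = \sin\theta[/math], an ordinary function [math]\mathbb R \to \mathbb R[/math], with derivative [math]\cos\theta[/math]. A sketch (the particular chart and function are my choices, not from the exposition):

```python
# Differentiating a function on the circle S^1 by pulling it back through
# a chart: chart inverse φ^{-1}(θ) = (cos θ, sin θ), function f(x, y) = y.
# Then f ∘ φ^{-1}(θ) = sin θ, so its derivative should be cos θ.

import math

def chart_inverse(theta):
    return (math.cos(theta), math.sin(theta))

def f(p):
    x, y = p
    return y

def d_pullback(theta, h=1e-6):
    # central finite-difference derivative of f ∘ φ^{-1} at theta
    g = lambda t: f(chart_inverse(t))
    return (g(theta + h) - g(theta - h)) / (2 * h)

theta = 0.7
print(d_pullback(theta), math.cos(theta))  # the two values agree closely
```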
  15. You're two posts ahead of me FYI. I haven't worked through the earlier one yet. Been a little busy with other things. I'm thinking that you are intending this remark as a response to my questions about why dual spaces creep into tensor products, but I don't think you are understanding my question then. Of course I understand what dual spaces are. But in the algebraic definition of tensor products, duals NEVER show up; while in diffGeo/physics discussions, they ALWAYS show up. That's the gap I'm trying to bridge. Apparently no algebraist has ever set foot in the same room as a differential geometer, else there would be a clear and simple explanation of this expositional mismatch somewhere. I hope to get through the earlier post today or tomorrow or the day after.
  16. <Star Trek computer voice> Working ... Actually I read through it and it looks pretty straightforward. I'll work through it step by step but I didn't see anything I didn't understand. The tangent space is an n-dimensional vector space spanned by the partials. I understand that, I just need practice with the symbology. I see at the end you bring in the Kronecker delta. This is something I'm familiar with as a notational shorthand in algebra. I've heard that it's a tensor but at the moment I don't understand why. I can see that by the time I work through your post I'll understand that. This seems like a fruitful direction for me at least.
  17. Oh I must disagree with this point. I can write a program -- frankly this would not be unsuitable as a beginning programming exercise after the basic syntax and concepts of programming are learned. The program reads in 10 years of daily temperature data from, say, New York City. The program then does statistical analysis (since this is a beginner exercise we will supply the needed statistical routines) and then emits the following prediction: Next year the average temp in July will be higher than the average temp in January. Has the program "learned"? Well if you say so, but I say no. It's only applied a simple deterministic statistical test. And more importantly, the program does not know what a temperature is, or what July is, or where or what New York City is. It's just flipping bits in deterministic accord with an algorithm provided by a human being. The computer does not know the meaning of what it's doing. Now if you learn a little about machine learning, you will find that the students are buried in statistical analysis and linear algebra. That's all it is. Every bit flip is 100% determined by algorithms. And when a human learns, they are not multiplying matrices in the cells of their brain. This is nothing to do with learning as the term is commonly understood. Only by naming the subject "machine learning" can proponents of strong AI fool the public into thinking that machines learn. It's programming 101 to read in data, categorize and statistically analyze it, and output a prediction. Ok to be fair, programming 102. A week's work for a student, a few hours for a professional. A nothingburger of a program. You might argue that people are doing the same. You have no evidence for that though.
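The toy program described above really is a beginner exercise. A sketch on synthetic data (the temperatures are randomly generated here, since no real NYC dataset is attached):

```python
# The "beginner exercise" described above: compare historical July vs
# January averages and emit a "prediction". All data below is synthetic.
import random

random.seed(0)
# 10 years of fake daily temperatures: January cold, July warm, plus noise.
january = [random.gauss(0, 5) for _ in range(310)]   # ~0 C
july    = [random.gauss(25, 5) for _ in range(310)]  # ~25 C

mean = lambda xs: sum(xs) / len(xs)

# A purely deterministic statistical test -- no "learning" in any
# everyday sense of the word.
if mean(july) > mean(january):
    prediction = "Next year July will be warmer than January."
else:
    prediction = "Next year January will be warmer than July."

print(prediction)
```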
  18. I'd like to present a meta-argument to the contrary. When water was the big technology back in ancient Greece and Rome, the mind was described as a flowing phenomenon. The Greek word pneuma for spirit has the same root as pneumatic, according to one article I read. After Newton, everyone got interested in mechanical devices and people thought the mind and the universe were mechanical in nature. Now we're in the age of computers and everyone is all, "Oh of course the mind is a computer. The universe too. Why we'll soon upload ourselves to heaven I mean the computer." Funny how upload theory sounds just like Christian theology. So the meta-argument is that we always think the mind is whatever the hot technology of the day is. When the next big thing comes along we'll think it explains the mind and the universe as well. History shows that. By that argument it's highly unlikely that the mind is a computer as computers are currently understood.
  19. I'm perfectly happy to have some "character building opportunities" as they say. Partial differentiate away. No hurry on anything. ps -- In case I'm being too oblique ... just write whatever you want and I'll work through it.
  20. This got a little rambly ... my code must have been in a loop ... in a loop ... in a loop ... I reread your original question and you asked if programs can build models. Of course. I can program a computer to analyze 100 years of temperature data; do a statistical correlation of the temps against the months; and output the prediction, "July will be warmer than December this year." That's easy and commonplace. The Go program that beat the human expert -- an astonishing achievement for weak AI -- was programmed to play millions of games against itself and draw statistical inferences about which moves were more likely to result in victory. But do you think that's all we do? Creativity consists in knowing everything there is to know about the statistics ... and seeing that in this particular instance, the right move is wrong. An AI painter knows that schlock sells. We'd get big-eyed children and poker-playing dogs for the rest of our lives. How do you make an AI Picasso? You can program a computer to paint LIKE Picasso ... but you can not program a computer to create the next revolution in art. Or math, or anything. If there is one thing computers do, it's the same exact thing, over and over. They're algorithms. They can't grow new capabilities. Creativity. The ability to know what's right regardless of the statistical properties of the domain. Intentionality. The ability to know what programs are "about." The self-driving car does not know it's driving a car. It's only flipping bits. It's the human that knows the algorithm is driving a car. Consciousness. "The hard problem" as David Chalmers calls it. I'm conscious and a box of wires isn't. How am I so sure you ask? I'm not. As computer scientist and awesome blogger Scott Aaronson would say, I am an unreconstructed meat chauvinist. I do take your point that I have no logical basis for my meat-centric beliefs. http://www.scottaaronson.com/ [I linked Aaronson in case people aren't familiar with his awesome site. 
CS theory, quantum computing, and way more. He has a series called Quantum Computing Since Democritus and if you simply read through it you automatically become smarter.] Personally I think that whatever the next stage of the evolution of intelligence is, it won't be a computer. I don't think we're the last word but I can't imagine an algorithm being that clever. I do not believe I'm an algorithm. I don't doubt that there's a next stage of evolution. Personally I worry more about humans placing too much faith in machines. That's the danger, not the machines themselves. This is the problem of other minds. I assume you're conscious. I assume my neighbor is conscious even though I never talk to him. How do I know anyone is conscious? How would I know if a machine is conscious? It seems hopeless. Consciousness is subjective. We have no objective test for it. Yet. But all programs are already like that. When an experienced programmer writes a system of 10,000 or 100,000 lines of code, he no longer remembers how it works. The accounting programs at banks were written in the 1960s by COBOL programmers long since retired. Many of those programs have been patched and extended over the years by generations of programmers. Nobody at the bank knows how any of their software works, they just make changes and try not to break things. That's actually the nature of the programming business ... today! Nobody knows how any of this software works. That's a reason NOT to trust computers with our lives. It's the nature of all code that it's so complicated nobody understands it. All code is like that. Code encapsulates huge amounts of complexity. Always has. "Survivor" pitting humans against bots. I think I'm going to pitch this to the TV networks! If it did it would be the fault of the programmers. Just like some guy typed in the wrong command yesterday and brought down Amazon Web Services. A typo took down a big chunk of the internet. It's always human error. 
The infuriating thing about programming is that the computer always does exactly what you tell it to. http://www.usatoday.com/story/tech/news/2017/03/02/mystery-solved-typo-took-down-big-chunk-web-tuesday/98645754/
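To show how mundane the "computer builds a model" part is, here's a minimal sketch of the month-vs-temperature idea from the top of this post. The records list is made-up illustrative data (a real model would use a century of observations); everything here is hypothetical except the general technique of comparing monthly averages:

```python
from statistics import mean

# Hypothetical records: (month number, observed temperature in degrees C).
# A real dataset would span 100 years of observations.
records = [(7, 24.1), (7, 25.3), (7, 23.8), (12, 3.2), (12, 2.5), (12, 4.0)]

def monthly_average(records, month):
    # "Model": the expected temperature for a month is its historical mean.
    return mean(t for m, t in records if m == month)

july = monthly_average(records, 7)
december = monthly_average(records, 12)
prediction = ("July will be warmer than December"
              if july > december
              else "December will be warmer than July")
print(prediction)  # prints "July will be warmer than December"
```

That's all "statistical inference" amounts to here: summarize the past, extrapolate to the future. The Go program's self-play is the same idea with vastly more data and a cleverer summary.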
  21. Every algorithm and programming technique, including machine learning and neural networks, reduces to a Turing machine. By substrate independence (aka multiple realizability) we know that whatever a program computes is independent of the speed or nature of the hardware. Conclusion: If intelligence is emergent, then it's not computational. And vice versa. Let me say that again, since it's both true and notably absent from virtually every discussion of this topic. If intelligence is emergent, it's not computational. And if it's computational, it's not emergent. In other words, if a fancy neural network is intelligent, then the same algorithm is intelligent when it's being executed by a clerk following the instructions with pencil and paper. If an algorithm is intelligent, that intelligence is present when it's running one instruction at a time on the world's slowest processor. If it's not intelligent, speeding it up or running it on a future supercomputer cannot make it intelligent. As a concrete example, consider the Euclidean algorithm for determining the greatest common divisor of two integers. Whether that algorithm is performed by Euclid using a stick to draw numbers in the sand, or whether it's coded up on a supercomputer, it only does that one thing. Running Euclid's algorithm faster doesn't make it suddenly know how to drive a car. It's been shown that quantum computers have exactly the same computational power as standard Turing machines. So although a quantum computer can do some specialized tasks faster than a conventional computer, the set of problems that can be solved by quantum computers is exactly the same as the set of problems that can be solved by traditional computers. There is much hype in the AI business. Absolutely no results whatsoever in strong AI since the hype got started in the 1960s. Weak AI is driving cars and playing chess. Very impressive feats of programming in highly constrained problem domains. 
But weak AI is not general intelligence nor is it a step in that direction. https://en.wikipedia.org/wiki/Weak_AI
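For the record, here's the Euclidean algorithm from the example above, sketched in a few lines of Python. It makes the point concrete: run it on sand or on a supercomputer and it computes greatest common divisors, and nothing else:

```python
def gcd(a, b):
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b).
    # When the remainder hits zero, the last nonzero value is the GCD.
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # prints 21
```

Speeding this up by a factor of a trillion changes nothing about what it is.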
  22. My thoughts? Sadly although I can talk about the abstract theory of integration, specific integrals make my eyes glaze over. I couldn't do those problems even when I TA'd that class! I have no opinion about anything involving techniques of integration. Pretty much everyone on this site knows that material better than I do. When mathematic says ... ... I am in no position to disagree. Sounds about right. (Edit) That said, perhaps I'm being overly modest. I recognize the math you did. It seems like a standard exercise in basic integral calculus. Can you say what it is you find significant about it?
  23. I looked at your paper, your website, your Facebook, and your resume. You are a very interesting guy. I love your art. You are using everyday language in unusual ways, making it hard to understand your train of thought. It's interesting to say the least but there does not seem to be any intention to connect with your readers. Can you express your idea in everyday language?
  24. Just want to point out that the Turing test is not very good. Ironically its weakness is the humans. Any halfway decent chatbot is rated as intelligent by humans. That's why that dumb Eugene Goostman chatbot allegedly passed the Turing test. http://www.smh.com.au/digital-life/digital-life-news/turing-test-what-eugene-said-and-why-it-fooled-the-judges-20140610-zs3hp.html When Joseph Weizenbaum invented Eliza, he intended it to be a demonstration of how dumb computers actually are, even when simulating intelligence. He was shocked to find out that people would start telling it their most intimate thoughts in the delusion that they were speaking to a real therapist. https://en.wikipedia.org/wiki/Joseph_Weizenbaum
  25. Thanks Xerxes for all your patience. That is actually my interest too so this direction is perfect for me. My goal is to understand tensors in differential geometry and relativity at a very simple level, but sufficient to understand the connection between them and the tensor product as defined in abstract algebra. In fact lately I've been finding DiffGeo texts online and flipping to their discussion of tensors. Sometimes it's similar to what I've seen and other times it's different. It's all vaguely related but I think it will all come together for me if I can see an actual tensor in action. And if it's the famous metric tensor of relativity, I'll learn some physics too. That's a great agenda. That's what I meant the other day when I said I hoped we didn't have to slog through the calculus part. I don't want to have to do matrices of partial derivatives and the implicit function theorem and all that jazz, even if it's the heart of the subject. I just want to know what the metric tensor in relativity is and be able to relate it to the tensor product. Partial derivatives make my eyes glaze over even though I've taken multivariable calculus and could explain and compute them if I had to. Along the way, maybe I'll figure out where the duals come from. Because with or without the duals you get the same tensor product; but the duals are regarded as important in relativity. That's the part I'm missing ... why we care about the duals when they're not needed in the definition of tensor product. Was that collision between the glass container and your skull? Or of the wine molecules with your brain cells? Or did you use the latter to mitigate the effects of the former?
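On the question of where the duals come in, here is the standard picture from the DiffGeo texts (a pointer, not something established in this thread): the metric tensor at a point [math]p[/math] is a bilinear map on pairs of tangent vectors,

[math]g_p : T_pM \times T_pM \to \mathbb R, \qquad g = g_{\mu\nu} \, dx^\mu \otimes dx^\nu \in T_p^*M \otimes T_p^*M,[/math]

where the [math]dx^\mu[/math] are the dual basis covectors. So the duals show up not in the definition of the tensor product itself, but because the metric is a machine that eats two tangent vectors and returns a number, which makes it an element of a tensor product of dual spaces, i.e. a (0,2)-tensor.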