Everything posted by Hrvoje1

  1. I was watching this video on YouTube https://youtu.be/ig380wp10aQ?t=111 in which Garry Kasparov says that machines have revealed so many secrets that the magic and mystery of the game of chess are gone: you can see the game through the lens of the computer, and thanks to the machine's advice even an amateur can immediately understand what is happening on the chessboard. There is another video, which I can no longer find, in which he is more specific and says that engines can explain what's going on. And he is right, of course. In the context of chess, every explanation is expressible first and foremost in the language of moves, which engines do speak. Besides that, however, the human mind tends to reason abstractly about the game and to create concepts expressible in natural language, and mastering those concepts is what people primarily mean when they speak about "understanding chess". Anyone can tell that he or she is losing consistently, which is obvious directly from the moves, but why exactly it happens requires another kind of explanation, in terms of these abstract concepts, which in turn must be concretely illustrable in the language of moves in order to be valid and teachable. This abstract reasoning comes naturally to people, so that even young children create some of these concepts for themselves, without ever being taught, within mere hours of playing. Two such concepts are material and the value of the pieces: children immediately understand that having more is generally better than having less. That is almost an instinct, felt as frustration when they lose material, giving something away without gaining anything in return. The concept of sacrifice, by contrast, is advanced and has to be learned, i.e. acquired after more experience with the game, as it involves further concepts, those that are exchanged for material.
Anyway, since chess engines do not speak natural language and are mostly agnostic about human abstract concepts, at least the modern, self-taught ones (in the sense that these concepts are not built into them), there is a gap to overcome if we want to translate their knowledge into something comprehensible to us, beyond the moves themselves, which are clearly superior to those a human player can produce. This is where a software company like Decodea comes in: their mission is to produce such translators for various domains of human knowledge, and the one for chess they have named DecodeChess. I investigated it a bit by watching this video: https://youtu.be/-JpQEByxpzY Obviously, when I speak about chess engines, I speak in terms of the standard chess software architecture, which is not monolithic. It identifies three main parts: an engine, responsible for the analysis of positions (Stockfish, LCZero...); a graphical user interface, a front-end application that accepts user input and presents output to the user (Arena, Scid vs. PC...); and a protocol (UCI, CECP) by which these two components communicate. The engine is pluggable into the user interface if the interface supports the protocol for which the engine is written. The translator/decoder is a separate component that sits in between and interprets the input moves before presenting its results via the user interface. In the process it consults its own repository of human knowledge, abstract concepts and ideas about what constitutes efficient play, matches them with data received from the engine and from human players, recognizes tactical and strategic patterns, and presents them as explanations, written in natural language, of why the moves suggested by the engine are good and those played by the human not so good, when such is the case.
So when it detects that a pin is created or threatened, it reports that as good for the side which created it or threatens to do so; when it sees that an open file is taken under control, it reports that as good for the side which took it, and so on. That is a correct approach, and not quite a trivial task. Still, the objection from the commenter who talked about Nimzowitsch rules and Steinitz rules is on point, regardless of the fact that he did not use the best term for what he meant (he said rules, as people often do, but he meant abstract concepts and ideas on how to play efficiently), and regardless of whether the objection still stands. Namely, if the machine learning process during which the translator is trained to recognize patterns is strictly supervised, unable to distil its own patterns from the data it receives from the engine and to update the previously mentioned repository with new, inhuman knowledge, instead of merely using the existing one for supervisory reference, then the objection still stands, because they have not yet upgraded it to that level, to enable unsupervised learning. I know that is easier said than done. But if DeepMind managed to produce MuZero, a program that not only finds out by itself how to play efficiently by the given rules, as AlphaZero does, but also finds out by itself what the rules of the game are in the first place, given the chance to play, I don't see why Decodea could not produce an enhanced decoder able to extract new abstract chess knowledge by analyzing the engine's play and to teach even human grandmasters some new abstract concepts and ideas. That seems like a comparable effort to me.
I don't know if I got it right, but from my layman's point of view, the principal difference between AlphaZero and MuZero is that the former has a built-in legal_move_generator function and a recognize_terminal_game_state function (mate, stalemate, draw by insufficient material, draw by repetition...), which means it knows the rules completely in advance, prior to NN training, which then serves only to improve the evaluate_position function, while the latter uses NN training to build the first two functions from scratch as well as to improve the third. Actually, that is not quite the right distinction, since the starting point for all three functions is zero knowledge, i.e. random NN weights; the important difference between these functions is that the first two can be learned perfectly, while for the third the law of diminishing returns applies with respect to the number of NN training games, and possibly with respect to growing NN topology. Does it mean that the game rules should somehow be extractable from MuZero's NNs into human-understandable code? Can the same be done with the knowledge of evaluating positions, "decoded" into natural language? Of course, it is possible that there is no new abstract concept unknown to humans, and that the only reason computers play better is that they can apply more consistently the concrete ideas which are concretizations of abstract ideas already known to us. And of course, computers show new concrete ideas even to the best informed, most knowledgeable human players, and they do so all the time, thanks to their superior capability to explore the game tree, which is vast; but for just that, engines are sufficient, no need for decoders. Unfortunately, the current situation with Decodea's decoder is still slightly worse: even the initial, more modest goal of explaining the moves in terms of already known concepts is not yet fully achieved, let alone something more.
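To make the distinction concrete, here is a toy sketch in Python, using the function names from the paragraph above. This is emphatically not DeepMind's code: the game is a trivial Nim-like pile game I invented for illustration, and the "learning" is a plain lookup table built from observed transitions rather than a neural network. The point is only to show which functions are given in advance and which must be reconstructed from play.

```python
def legal_move_generator(pile):
    """The real rule (built in, AlphaZero-style): take 1 or 2 stones."""
    return [m for m in (1, 2) if m <= pile]

def recognize_terminal_game_state(pile):
    """The real rule (built in): the game is over when the pile is empty."""
    return pile == 0

class LearnedModel:
    """MuZero-style stand-in: reconstructs both rules from observations."""
    def __init__(self):
        self.moves = {}        # pile -> set of moves seen played there
        self.terminal = set()  # piles at which games were seen to end

    def observe(self, pile, move, next_pile):
        self.moves.setdefault(pile, set()).add(move)
        if recognize_terminal_game_state(next_pile):
            self.terminal.add(next_pile)

    def learned_moves(self, pile):
        return sorted(self.moves.get(pile, set()))

    def learned_terminal(self, pile):
        return pile in self.terminal

# "Self-play": walk the whole game tree from a pile of 5 and let the
# model watch every transition.
model = LearnedModel()

def walk(pile):
    for move in legal_move_generator(pile):
        model.observe(pile, move, pile - move)
        walk(pile - move)

walk(5)

# With enough play, the learned rules coincide with the built-in ones,
# which is the sense in which the first two functions "can be learned
# perfectly" for a small game:
for pile in range(1, 6):
    assert model.learned_moves(pile) == legal_move_generator(pile)
assert model.learned_terminal(0) and not model.learned_terminal(3)
```

For chess the same idea requires a learned dynamics network instead of a table, and evaluate_position, unlike the rules, can only ever be approximated.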
I can compare the state of the art to translation from English to Croatian with Google Translate: the original English text is much more understandable to me than the produced Croatian text, and I am a native Croatian speaker. That is not helpful at all, except maybe to some native Croatian speakers who do not speak a word of English; they might at least get a clue what the text is about, but to me it is actually confusing and annoying. Let me illustrate my comparison with a second example of an analyzed position in the same video. There is a summary that explains why Nb4 is a good move, namely because it:

 - threatens to play Nc2+
 - enables Bxf3+
 - allows playing Bxf3+ and prevents playing Qxf6
 - lures the white pawn to f3 and steps into a dangerous place

None of this makes any sense if one fails to see that 17...Nb4 is actually an indirect checkmate threat, which does not allow 18.Qxf6 because of the forcing line 18...Nc2+ 19.Ke2 Ba6+ 20.c4 Bxc4#, and that there is no better alternative to 18.cxb4; for example 18.Be2 Nc2+ 19.Kf1 Qxg5 20.Nxg5 Nxa1 is worse. The commentary presented in the video does not explain that. I was not lazy: I opened an account at DecodeChess and loaded the same example myself, to see whether I could get that analysis by expanding the hidden text (pressing the yellow plus-sign button on the right). For some reason I cannot see all the content of that panel properly, but it seems these lines are there, strangely scattered, and not presented as concisely as I have just done.
And the text that is visible without expanding the hidden panes does not properly explain, in a human way, the tactical idea of that complex combination: for the previously explained reasons, black can make a clearing sacrifice of the knight on b4 (it clears the path for his bishop) and a decoying sacrifice of the rook on d1 (it lures the opponent's king to that dangerous square), after which the exchange of the bishop for the knight on f3 comes with check and at the same time removes the white queen's only defender, so that black can pick the queen up on the next move, with a net material gain of queen for rook and knight, which in this position should be a comfortable advantage for black. The combination is actually even longer; I did not mention the exchange of one pair of rooks on d1 in the middle of it. Yes, it is all there, recognized and somehow mentioned, but not in a sufficiently succinct way, and the sentence "lures the white pawn to f3 and steps into a dangerous place" sounds more silly than explanatory. Credit is due for what has been done; I hope it will get better, and I also hope constructive criticism can be accepted. But the main problem remains: if it can only explain things that I already understand, things I could grasp by myself from direct communication with an engine, then the translator or decoder does not fully meet its purpose. The development of chess engines is many years ahead of the development of chess knowledge decoders, for several reasons. The primary one is that there is a large and vibrant community of chess engine developers, who organize chess engine competitions, with occasional participation of the biggest players such as IBM and Google, while Decodea is not accompanied or challenged by a large community of active developers researching the same area. That is a pity, because what they do is just as important and exciting.
An initiative by Herik, Herschberg, Marsland, Newborn and Schaeffer to establish a competition whose objective was to produce the best chess annotation software possible regretfully died after a couple of years. That was The Best Annotation Award, an annual contest described here: https://pure.uvt.nl/ws/portalfiles/portal/1239682/BEST____.PDF If it were still alive, DecodeChess would be one of the main competitors there. What constitutes a proper chess commentary was theorized in depth by David Levy, Ivan Bratko and Matej Guid, to name just a few among many others. The caveman approach to implementing that functionality in a computer program would be to read the input, not-yet-annotated game file, iterate through its moves by submitting them sequentially to the engine, and whenever the difference between the quality of the move played and the quality of the best move available in the position reaches a certain threshold, expressed in the engine's centipawn score, flag that as a serious mistake requiring comment and print the principal variation, also returned by the engine, as an annotation for that move in the output annotated game file. The typical insufficiency of such an unsophisticated approach is that it misses refutations of alternative moves that might appear appealing to a superficial human analyzer but were not played, as well as explanations of why it was important to play the moves that were played, when the reasons are not so obvious. So, for example, in the analyzed position, after 17...Nb4 was played, such a simple annotator would fail to comment on 18.Qxf6 simply because the move actually played, 18.cxb4, was the best available at that moment.
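The caveman approach described above can be sketched in a few lines of Python. To keep the sketch self-contained, the engine is mocked as a list of per-move records with invented centipawn scores; a real implementation would query a UCI engine for each position instead. The move "21.a3", its scores, and the threshold value are all hypothetical, chosen only to show both branches of the logic.

```python
CP_THRESHOLD = 100  # example threshold: flag moves losing a pawn's worth

def annotate(game, threshold=CP_THRESHOLD):
    """Return (move, comment) pairs for moves flagged as serious mistakes.

    Each ply record holds the played move, the engine score of the played
    move, the score of the engine's best move, and the best move's PV.
    """
    notes = []
    for ply in game:
        loss = ply["best_cp"] - ply["played_cp"]
        if loss >= threshold:
            notes.append((ply["played"],
                          "Better was " + " ".join(ply["best_pv"])))
    return notes

# Invented engine output around the position discussed above:
game = [
    # 18.cxb4 was the engine's best move, so nothing is flagged here --
    # which is exactly the blind spot described in the text: the tempting
    # 18.Qxf6 is never refuted, because it was never played.
    {"played": "18.cxb4", "played_cp": -250, "best_cp": -250,
     "best_pv": ["cxb4"]},
    # ...while a hypothetical later slip does get a comment with its PV:
    {"played": "21.a3", "played_cp": -700, "best_cp": -300,
     "best_pv": ["Be2"]},
]
assert annotate(game) == [("21.a3", "Better was Be2")]
```

Raising the threshold silences the annotator entirely, which shows how rigid the decision of what to comment on really is in this scheme.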
The only way in this case to get the machine's advice on why 18.Qxf6 is bad is to ask the engine directly, which defeats the purpose of the annotator, because it fails to explain automatically all that is relevant, even the tactical ideas, let alone the strategic ones. In other words, such a program completely fails because it lacks the notions of obviousness, importance and relevance. Even DecodeChess, which is a much more sophisticated program, misses some of that when it reports that 17...Nb4 "lures the white pawn to f3 and steps into a dangerous place". In both cases the problem is that the program is too rigid in deciding what to comment on and how. I know that explaining chess software architecture is not that important or relevant when we talk about DecodeChess, since it is an integrated product, a unit, where one doesn't have to worry about what to plug into it or what to plug it into. I explained it because I was annoyed that when I asked people online what chess annotating software they would recommend, some of them started to mention engines. Obviously it doesn't matter whether a chess annotator is a standalone program with no other purpose or is integrated into a general-purpose chess GUI; what matters is that using a stronger chess engine will not solve the problem I just described, because the functionality in question is implemented in the annotator, not in the engine, so there is no point in mentioning engines. To understand such things, one should always keep in mind the separation of concerns between software components and have a clear picture of their responsibilities. Anyway, the attempt to extract abstract knowledge has not been connected only with engines as sources or oracles of that knowledge, but with endgame tablebases too, for example https://ailab.si/matej/doc/Deriving_Concepts_and_Strategies_from_Chess_Tablebases.pdf The subject has a lot of history, but its future is actually more interesting.
And the few concepts mentioned are just the tip of the iceberg of what actually exists in that game, and then some, as one can easily imagine, considering that one can practice that immensely rich game one's whole life and still not be particularly good at it. But chess is not only a game of logic and of tactical and strategic planning; other factors matter too, such as memory, visualization, and focus or concentration. Although every chess player, regardless of strength, must have certain visualization capabilities in order to analyze a few moves ahead without actually moving the pieces on the board (the rules do not allow that), this is immensely easier when you can look at the board. At least for an average person, not so much for a top grandmaster; but can they explain how they acquired such an amazing skill as being able to play blindfold? Saying that this is a whole other level of visualization capability, not required by the standard rules but greatly helpful in standard circumstances where looking at the board is allowed, does not explain how it is actually achieved. The only explanation offered by Alexander Grischuk https://youtu.be/B3SXVN6KSNc?t=1340 was that it came naturally to him during his childhood, as it should to any future grandmaster, i.e. not as a result of conscious effort and systematic practice, while I tried to follow a couple of recipes offered by others, to no avail. So either those explanations were not good enough, or I did not follow them properly and on time; the result is the same: I cannot memorize the board, just as Grischuk cannot speak Chinese although he tried to learn it. Which I know because he said so a few minutes before the moment I chose as the starting point for this video when I pasted its link here. Before writing this essay, I did not know how to pass that information (Start at...)
along with the link to a YouTube video that would otherwise start from the beginning, at timestamp zero. Now not only do I know that, but I am also fairly sure I can explain it to pretty much everyone interested, in several ways, depending on their prior knowledge. This is because properly explaining how to learn, during adulthood, a new language very different from your native one is a much harder task than properly explaining how to add a timestamp parameter to a YouTube video link. That is connected to the amount of information the explanation contains, and to the reliability of passing that information on. And if we accept "task" as a fundamental notion needed to explain nature, then an "explanation" would be "information needed to accomplish a task". Moreover, an explanation is to a human what a program is to a computer: an instruction it can follow and execute. Of course, one can argue this is just one aspect of explanation, not its full characterization, because one can follow instructions without fully understanding them. Nevertheless, following that logic, every living organism that can pass on useful information can produce an explanation; it is only a matter of surpassing the communication barrier between the one who tries to explain and the one who tries to understand. Right? David Deutsch seems to disagree, here in this TED interview https://www.ted.com/talks/the_ted_interview_david_deutsch_on_the_infinite_reach_of_knowledge/transcript?language=en#t-889066 when asked by Chris Anderson: "A lot of people would say, look, every species knows something. A dog knows that a bone tastes delicious, but it doesn't know scientific theory. We know a certain amount of scientific theory, but it's ridiculous to imagine that we could know you know, that there must be a whole world of things out there that we are never even in principle capable of understanding. And you dispute that.
Why?" David Deutsch replied: "I've already explained why the dog is inherently different from us. It's because the dog knows that the bone tastes good because some of its ancestors who didn't know that, died. And the dog doesn't actually know anything, its genes know that. And there are certain types of things that can become known that way. But the vast majority of things in the world, in the universe, cannot become known that way, because the dog cannot try to eat the Sun and be burned and that kind of thing." So, many vague instructions present a much higher barrier to understanding than a few precise ones. In fact, many precise instructions, given in a precise order that can be reliably memorized, are much easier to acquire and apply than a single vague instruction; but with a single instruction one can at least focus much more easily on removing the vagueness. This is why it is possible to train a dog to sit on command, or to search for drugs or for a missing person, but impossible to converse with it like Doctor Dolittle. As some of that potentially saves human lives, it is actually odd to read such comments about dogs from a person who is an obvious Nobel prize candidate. This is also why it is much easier to remove a single bug from a program than several combined ones, if we consider debugging a form of communication between human and computer, during which the human tries to remove the vagueness of the instructions given to the computer. If bugs are isolated in their effects, the effort to eliminate them is proportional to their number, assuming they are equally hard to eliminate. One such vague instruction is that, in order to learn a foreign language to the level of speaking fluently while sounding like a native speaker, one should not only study it the way it is taught in school, but learn it the way small children learn their native language.
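Returning for a moment to the easier of the two explanation tasks mentioned above, the "start at" trick for a YouTube link: the whole explanation, in the "program as explanation" sense, fits in a few lines of Python. The helper name is mine; the trick itself is just appending a t=<seconds> parameter.

```python
def with_timestamp(url: str, seconds: int) -> str:
    """Return the YouTube link with a start-time parameter appended.

    Use '&' if the URL already has a query string, '?' otherwise.
    """
    sep = "&" if "?" in url else "?"
    return f"{url}{sep}t={seconds}"

# The Kasparov link from the beginning of this essay, starting at 1:51:
assert with_timestamp("https://youtu.be/ig380wp10aQ", 111) == \
    "https://youtu.be/ig380wp10aQ?t=111"

# The Grischuk link, starting at 22:20:
assert with_timestamp("https://youtu.be/B3SXVN6KSNc", 1340) == \
    "https://youtu.be/B3SXVN6KSNc?t=1340"
```

A computer can execute this explanation directly; a human can follow it too, which is the parallel drawn above.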
Which sounds logical, but it also requires further explanation: how exactly does an adult do that? I found that insight shared on the internet by the excellent polyglot Luca Lampariello, who demonstrates the validity of his methods as soon as he begins to speak. He is an Italian who speaks fluent Chinese and Russian, among a dozen other languages, but he failed with Japanese, to a certain extent; what he describes as a failure I would most probably describe as a great achievement, if I were ever able to reach that level of fluency in any foreign language. I was impressed by the length of his explanation of that failure, and by the steps he took to improve his skill, such as sessions with another guy, Matt Bonder, an American who managed to learn Japanese fluently. So, there is a lot to know about it, and to introspect about: how did we manage to learn the languages that we speak, and the things that we know in general? Finally, there is perhaps a crucial aspect of an explanation, captured by Deutsch when he says: "Well, human-type creativity is different from the creativity of the biosphere, in that human creativity can form models of the world that say not only what will happen, but why. So an explanation, for example, is something that captures an aspect of the world that is unseen. So, explanations explain the seen in terms of the unseen." This describes a scientist as someone who tries to decipher conjurer's tricks performed by nature, but that is a subject for another essay, explanations in science. Although, if we assume that moves represent the seen, and abstract concepts the unseen, may we conclude that there is no reason to distinguish between chess explanations and scientific explanations?
  2. OK, for those who didn't quite get the concept of reversing coefficients, and what mathguy actually meant when he said there: >>Observe also that p+q=n where p and q are the degrees of g and h respectively; then distribute the 1/x^n between g and h<< let us consider another, slightly different example, this time a reducible one: 2x^2+3x+1 = (x+1)(2x+1). If we multiply both sides by 1/x^2 we get: 2+3x^(-1)+x^(-2) = (1+x^(-1))(2+x^(-1)), where we had 1+1=2 for p+q=n, and the 1/x^2 was distributed between the two factors, each receiving 1/x. Now let y=x^(-1), which gives: 2+3y+y^2 = (1+y)(2+y). What we got is the reciprocal polynomial of the original, which is obviously also reducible, and has the coefficients reversed. The logic is that if we had started with an irreducible one, such as y^2+2y+2, which we know is irreducible by Eisenstein's criterion, reversing the coefficients gives 2x^2+2x+1, the second example, which must then also be irreducible.
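The coefficient-reversal step can be checked mechanically. In this sketch a polynomial is a plain Python list of coefficients from the constant term up, so reversing the list is exactly forming the reciprocal polynomial x^n * p(1/x); the helper names are mine.

```python
def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists (constant first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def reverse(p):
    """Reciprocal polynomial: reverse the coefficient list."""
    return p[::-1]

# 2x^2 + 3x + 1 = (x + 1)(2x + 1), with lists written constant term first:
assert poly_mul([1, 1], [1, 2]) == [1, 3, 2]

# Reversing the product's coefficients gives the reciprocal polynomial
# x^2 + 3x + 2, and its factors are the reversals of the original factors:
assert reverse([1, 3, 2]) == [2, 3, 1]
assert poly_mul(reverse([1, 1]), reverse([1, 2])) == reverse([1, 3, 2])
```

So reversal commutes with factorization, which is why a factorization of one polynomial immediately yields a factorization of its reciprocal, and why irreducibility transfers as claimed.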
  3. Since my generalization failed so miserably, I decided instead to exploit constructively the information gathered here. So I considered two simple examples, x^2+1 and 2x^2+2x+1, and realized that although I couldn't find a prime p that satisfies Eisenstein's criterion for them, I also couldn't factorize them, which I double-checked here: https://www.mathportal.org/calculators/polynomials-solvers/polynomial-factoring-calculator.php But everything is fine, because the theorem only states that if you can find such a prime, then the polynomial is irreducible, not the other way around. Moreover, you can still use the theorem indirectly by applying transformations that preserve (ir)reducibility, such as these for example: x=y+1 => x^2+1 = y^2+2y+2, for which p=2 satisfies the criterion, and then reversing the coefficients of that result to reach the second example. Proof that it works is here: https://math.stackexchange.com/questions/1758745/prove-that-fx-is-irreducible-iff-its-reciprocal-polynomial-fx-is-irred
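Both steps, the criterion itself and the shift x = y+1, are small enough to verify in code. A sketch (function names are mine; coefficients run from the constant term a_0 up to the leading term a_n):

```python
from math import comb

def eisenstein(coeffs, p):
    """Eisenstein's criterion for the prime p."""
    a0, an, middle = coeffs[0], coeffs[-1], coeffs[1:-1]
    return (an % p != 0                          # (i)   p does not divide a_n
            and all(c % p == 0 for c in middle)  # (ii)  p divides a_1..a_{n-1}
            and a0 % p == 0                      #       p divides a_0 ...
            and a0 % (p * p) != 0)               # (iii) ... but p^2 does not

def shift_by_one(coeffs):
    """Coefficients of p(y + 1), i.e. the substitution x = y + 1,
    computed via the binomial theorem."""
    n = len(coeffs)
    return [sum(coeffs[i] * comb(i, k) for i in range(k, n))
            for k in range(n)]

# x^2 + 1 (coeffs [1, 0, 1]) fails the criterion for every small prime:
assert not any(eisenstein([1, 0, 1], p) for p in (2, 3, 5, 7))

# but the shift x = y + 1 turns it into y^2 + 2y + 2, where p = 2 works:
assert shift_by_one([1, 0, 1]) == [2, 2, 1]
assert eisenstein([2, 2, 1], 2)

# and reversing the coefficients of y^2 + 2y + 2 gives 2x^2 + 2x + 1:
assert [2, 2, 1][::-1] == [1, 2, 2]   # i.e. 1 + 2x + 2x^2
```

The first assertion also illustrates the point that the criterion is sufficient but not necessary: failing it proves nothing about reducibility.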
  4. But it is divisible by p, and you left that out of condition (ii). Besides that, not only does this theorem hold, it also doesn't need any adjustment if 1 is considered prime: 1 is simply always skipped in the search for an adequate prime that satisfies all three conditions, so it doesn't have to be excluded in the premise.
  5. As the discussion is no longer alive, it is time for a small evaluation of each participant's performance. I will start with myself, pointing out what I am satisfied with and what I am less happy with. I made a nice generalization of wtf's example: the validity of the results of arithmetic operations is mandatory and therefore must not depend on the conventions for writing down arithmetic expressions, which are arbitrary. And it really does not depend on them, as long as the conventions are strictly followed. That is another instance of a general truth which says that something that is not arbitrary cannot depend on something that is. The first instance of that truth was that the validity of theorems cannot depend on definitions, which, however naturally chosen, are still conventions and therefore somewhat arbitrary (not entirely, because no one wants an unnatural choice). The interesting thing in our case is that the perspective on which definition is more natural changes rapidly as one acquires mathematical experience. Every newcomer in elementary school wonders why 1 is excluded when presented for the first time with the definition of a prime number, that is, if he or she is sufficiently perceptive and sufficiently well informed; otherwise one might leave school unaware of that fact. However, that wondering should quickly disappear if one is also presented with at least the basics of number theory during school education, as I was not. Because, returning to Eisenstein's criterion for example, it is obvious that all three conditions on the integer coefficients of the polynomial are equally damaged by the inclusion of 1: not only can there be no case for (i) and (iii) if p is 1, but (ii) then always holds, which is equally damaging to the point of the theorem, and therefore the possibility that p is 1 simply must be excluded.
I immediately made a nice generalization of studiot's example as well, and correctly concluded that there is no reason to single out this theorem in that regard. Because there is a huge difference between the conclusions "every theorem that states..." and "just these two...", or "just this one, which would not even hold if...?!" Anyway, I am less proud of the fact that I tried to be more precise than was possible, because indivisibility is a kind of divisibility (a negated one), and, well, for a moment it seemed to me that condition (ii) is less damaged than the other two, which is not true. I also think that I pointed out nicely the huge difference between these two explanations: "the deepest reason 1 isn't a prime is because it's a unit in the ring of integers" and "if it makes you happy, 1 isn't a prime because that's the convention, it really makes no difference, what is your problem?, it is just convention, and if you don't understand what that means let me give you an example with traffic lights".
  6. HallsofIvy, how did you manage to add so little new information to this discussion, almost nothing? I have searched for some new sources of information, for example: https://math.stackexchange.com/questions/120/why-is-1-not-a-prime-number There user7530 says: "Given how often 'let p be an odd prime' shows up in theorems, sometimes I wonder if we'd be better off defining 2 as non-prime too." I read that as: if 1 were prime, every theorem that now says "let p be an odd prime" would have to be adjusted to say "let p be a prime greater than 2", and every one that now says "let p be a prime" would have to say "let p be a prime greater than 1". That sounds to me like a compelling reason to exclude 1 by definition, even though Lamb did not mention it at all.
  7. My point is that Eisenstein's criterion holds regardless of whether we consider 1 prime or not; you just have to express it a little differently, depending on your choice. I agree, validity that should not be arbitrary must not depend on something that is arbitrary, that was my point too. Is there some other point here? Yeah well, it obviously mattered to those who established the convention, and their choice, although arbitrary, was not whimsical, I presume. And if it made no difference to you, you shouldn't have bothered to explain their reasons, which must be rational and are hence the subject of my interest. If you include 1 once, in the definition of primes, and then have to exclude it hundreds of times afterwards, in the premises of theorems during the development of number theory, that makes no sense. That is why I asked how many such cases there might be, besides the fundamental theorem of arithmetic, and what their common denominator is. studiot showed up and singled out just one, later claiming it is the only example in the real numbers he could think of. Now you mention all the theorems that would have to be adjusted accordingly if the convention were changed. That does not necessarily contradict studiot, because of his constraint, but I guess the number is much bigger than just these two theorems.
  8. I am sorry, but this sounds like complete nonsense. Of course it would still hold; you would just need to start differently, for example with: "If p is a prime number greater than 1, and..." Precisely that is the reason why your previous claim is nonsense: it would be sad if the validity of a theorem could depend on personal preference regarding a convention. Fortunately that is not the case in reality, because it would mean the theorem does not hold for those with a different preference. No, I did not, why? I mean, I studied its proof, just for fun and relaxation, but that did not answer your question.
  9. ...maybe it would be more precise to say...
  10. If we get back to the fundamental theorem of arithmetic and its unique factorization, it would be formally obstructed by admitting 1 as a prime, but essentially not much, because everyone understands that (paraphrasing Lamb): we don't consider units to be either prime or composite because you can multiply by them without changing much. On the other hand, here: maybe it would have been more precise to say that if these statements deal with indivisibility by p, it would be problematic to maintain their point without excluding 1, because no number is indivisible by 1, which is an argument similar to yours, about the prime ideal.
  11. Thanks studiot. At the moment it seems to me that the summarization I was looking for is rather trivial. Namely, nearly every theorem that includes the statement "let p be prime" together with further statements that deal with divisibility of something by p, and I can imagine most of them do in some way, would most probably need to be rewritten so that the first statement reads instead "let p be a prime other than 1", in order to maintain its point, right? And if not every one, precisely which not? Besides that, the definition you posted includes 1 among the primes.
  12. That excerpt from the article is the main reason I posted it in the first place. And I knew that argument from my mathematical training too, i.e. before reading the article. But if that is the only place where it proves useful, it is not a great economy, because you could have said there "as a product of primes other than 1 in exactly one way" and kept a smoother definition that does not exclude 1 in an awkward way, right?
  13. As conciseness is one of the main mathematical virtues, I would like to discuss one particular instance of it. Can someone please summarize, in that context, the usefulness of excluding the number one from the set of prime numbers? The definition of prime numbers would be more concise without that exclusion, i.e. if one were included, and in fact it was at the beginning: the first great contributors who laid the foundations of prime number theory considered 1 to be prime, and the exclusion was introduced later, without much change to the essence of the theory. So it must have paid off somehow, in terms of shorter expressions of the consequences of a somewhat longer definition, and I would like to know all the places where that showed to be the case. The definition, for those who are really unfamiliar with the topic, is: prime numbers are natural numbers that are divisible by exactly two distinct divisors, by one and by themselves. The definition that would include one would be: prime numbers are natural numbers that are divisible only by one and by themselves. Note that further shortening the definition by omitting the crucial condition "only by" would be a blunder, since all natural numbers are divisible by one and by themselves, which would of course make the definition pointless. I know there are already many answers online to the question "why is 1 not prime", such as https://blogs.scientificamerican.com/roots-of-unity/why-isnt-1-a-prime-number/ for example, but I didn't find a satisfactory one.
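The two definitions above can be put side by side in code, which makes the difference tangible: counting divisors excludes 1 automatically, since 1 has only one divisor, not two. A sketch (function names are mine):

```python
def divisors(n):
    """All positive divisors of n, by trial division."""
    return [d for d in range(1, n + 1) if n % d == 0]

def is_prime(n):
    """The standard definition: exactly two distinct divisors."""
    return len(divisors(n)) == 2

def is_prime_including_one(n):
    """The older definition: divisible only by one and by itself."""
    return n >= 1 and all(d in (1, n) for d in divisors(n))

# The two definitions disagree only at n = 1:
assert not is_prime(1) and is_prime_including_one(1)
assert is_prime(2) and is_prime(13)
assert not is_prime(12) and not is_prime_including_one(12)
```

Note that the second function is the shorter definition in prose but not in consequences: every theorem using it would have to carve the unit case back out.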
  14. The difference between these two kinds of difference can be defined like this. Consider two objects o1 and o2. If they share a common property p for which subtraction is a well-defined operation, then the quantitative difference between them with respect to p is o1.p - o2.p, where the operands are the values of p for o1 and o2 respectively. If the property p is not common, that is, not defined for one of the two objects, then this condition can be defined as a qualitative difference between them; another definition that can be introduced is that they are not instances of the same class. From that definition it follows that, in order to belong to the same class, objects must have all their properties in common. From there naturally arises an extension operation with respect to uncommon properties, defined on classes, which removes the qualitative difference with respect to an uncommon property by assigning the value zero to that property in the class in which it is originally missing. For example, the 2D point class of objects can be extended to the 3D class by assigning zero to the third coordinate that is missing in the 2D point class. Let us further allow classes to consist of methods in addition to properties, and say that methods can be implemented, in which case classes are called concrete and allow instantiation of objects; otherwise they are abstract. In that case, abstract classes may be defined as qualitatively the same if they have the same properties and methods, where the criterion of equality of methods is that they have the same names and signatures, that is, the same input and output argument names and types. Methods of concrete classes must additionally have the same implementation in order for the classes to be qualitatively equal. I do not see a sensible way of defining a quantitative difference between objects of the same class with respect to methods. What did I just describe?
Is it already covered by some mathematical discipline, or is it just a philosophy based on the object-oriented programming paradigm?
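A minimal Python sketch of the two definitions above (class and function names are mine, chosen for illustration): the extension operation lifts a 2D point into the 3D class by assigning zero to the missing coordinate, removing the qualitative difference, after which a quantitative difference is simply subtraction on a shared property.

```python
from dataclasses import dataclass

@dataclass
class Point2D:
    x: float
    y: float

@dataclass
class Point3D:
    x: float
    y: float
    z: float

def extend(p: Point2D) -> Point3D:
    """Remove the qualitative difference: assign zero to the missing property."""
    return Point3D(p.x, p.y, 0.0)

def quantitative_difference(o1, o2, p: str) -> float:
    """o1.p - o2.p for a property p shared by both objects."""
    return getattr(o1, p) - getattr(o2, p)

p2 = Point2D(1.0, 2.0)
p3 = extend(p2)                # Point3D(x=1.0, y=2.0, z=0.0)
print(p3)
print(quantitative_difference(p3, Point3D(0.5, 2.0, 0.0), "x"))  # 0.5
```

Before the extension, `quantitative_difference(p2, p3, "z")` would raise an `AttributeError`, which is one way to read the qualitative difference operationally.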
  15. Or humor. That bloke is the real Sacha Baron Cohen of scienceforums.net, he and Farid.
  16. This is also an interesting on-topic post: https://www.scienceforums.net/topic/119685-quantification-of-the-unconscious-mind-in-brain-function/?do=findComment&comment=1111600
  17. As I mentioned them in this discussion, I have studied these topics, just for fun and relaxation, and in order to know at least a little bit about the things I am talking about: https://en.wikipedia.org/wiki/Neuroimaging https://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface Here I found out that there is a middle way between resting in complete unfamiliarity with the subject and designing such a system from scratch in DIY manner (which would be impossible for me). That way is to buy some low-cost equipment and join an open source community such as https://openbci.com , which is actually cool, although I am afraid I am not much of an engineer, not even for that. Anyway, that led me to a third topic, which I also mentioned indirectly: https://en.wikipedia.org/wiki/Brain-reading and that is the actual holy grail of the discussion about "life and consciousness", but there it unfortunately says: >>Experts are unsure of how far thought identification can expand, but Marcel Just believed in 2014 that in 3–5 years there will be a machine that is able to read complex thoughts such as 'I hate so-and-so'.<< If they manage to do it, that would be awesome. So, not just able to distinguish between such diverse states of mind as "focused thinking", "listening to music", "furiously angry", "sleeping", "unconscious", "coma", ... or between less diverse states of focused thinking such as "playing chess", "working on math", "reading technical documentation", ... but reading to a level of specific precision such as "considering Qe5", "proving Thales' theorem", "reading about the OpenBCI Cyton Board", while being general at the same time, that is, not restricted to some narrow area of mind reading.
I have a strange feeling that you were talking about these things when you mentioned "quantification of definition of consciousness, using a cogent basis relatable to human experience", you just couldn't say it in the plain and straightforward way I just did, or you chose to avoid saying it that way for some odd reason.
  18. Now, this sounds overly defensive, for two reasons. The first is that Wikipedia may not be a sacrosanct resource, but it mentions a characterization of instinct similar to yours: >>Jean Henri Fabre (1823-1915), an entomologist, considered instinct to be any behavior which did not require cognition or consciousness to perform.<< Because: >>Cognition is "the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses".<< So, although thought was not directly mentioned in relation to instinct, it was mentioned indirectly as a way of acquiring knowledge, that is, learning, which is not required for instinctive behavior. And the other characterization you were fixed on is also mentioned: >>For example, people may be able to modify a stimulated fixed action pattern by consciously recognizing the point of its activation and simply stop doing it, whereas animals without a sufficiently strong volitional capacity may not be able to disengage from their fixed action patterns, once activated.<< The difference between behavior one engages in without the appearance of a thought process and behavior that is not learned is that learned behavior can be automated through regular performance, and can then be performed without requiring much thinking or much focus and attention of our consciousness. For example, driving a car can be one such activity, depending on the road circumstances, which may allow us to think about things unrelated to driving, so that we almost forget about driving. However, if the monotony is broken by a pedestrian jumping in front of our car, that requires an instantaneous reaction we may be tempted to call instinctive, because of its speed, and because it doesn't require much thinking to figure out what one is supposed to do in that situation.
However, both unfocused driving and braking fast are things we didn't know at the moment we were born; we had to learn where the brakes are in our car and how to use them, so even if it doesn't require much thinking now, that doesn't mean it didn't require thinking at the time it was learned. But since you didn't strictly specify the timing of the appearance of a thought process (not) required for instinctive behavior, you can claim it covers any time. Theorizing about patterns of mental activities and behaviors, by establishing their systematic characterization and classification, should not be the main purpose of anyone's dealing with the subject. Being able to say whether something counts as instinct or mentation behavior, or something third, looks like armchair philosophy, while the ability to design a system that monitors brain activity and accurately determines and displays what the brain does, using some technique of neuroimaging and machine learning, proves a whole different level of understanding, as well as technical and practical usefulness. Applications of such a system would be brain-computer interfaces, medical diagnostics, you name it... Actually, I started to grow a big interest in these things after a colleague of mine had a stroke and never fully recovered from it; after five years he still cannot move one arm and leg. Another colleague had a large brain tumor removed and immediately recovered fully with no apparent consequences, although he had a problem finding a surgeon willing to accept the risk of the operation, as the tumor was advanced. I don't know how much neuroplasticity was involved in each case; more in the first case than in the other, I guess, because the tumor tissue was not functional anyway.
  19. With regard to that, what is interesting is both the ability to accumulate knowledge by learning during the lifespan of a single organism and the ability to do so over generations. One might say that this ability is negligible in the majority of species in comparison to the human ability, but it is still not zero. As I was intrigued, I did another check, namely how many times the string "instinct" appears here https://en.wikipedia.org/wiki/Thought (twice), and how many times the string "thought" appears here https://en.wikipedia.org/wiki/Instinct (once). Not that I hold the current state of exactness of these subjects, as presented by mainstream science, in very high esteem, but still, I can't ignore it completely. The number of meanings of the word "thought" given there is really impressive, and at least it is honestly admitted that there is still no consensus as to how it is adequately defined or understood.
  20. During this discussion, I wondered all the time whether my fellow participant and opponent would at some moment consult the freely available sources of information on the current state of affairs in science regarding these topics, such as Wikipedia, to at least check whether there is a potential discrepancy between his definitions and the official ones. I knew I would, and I knew I had deliberately not done so yet, because of my disappointment with what I learned about it in school a few decades ago, but for the sake of completeness it should be done: https://en.wikipedia.org/wiki/Instinct So, here it says that the basic definition is: "Instinct or innate behavior is the inherent inclination of a living organism towards a particular complex behavior." As the dictionary says that "innate" and "inborn" are more or less synonyms, it is not strange to find that: "Any behavior is instinctive if it is performed without being based upon prior experience (that is, in the absence of learning)." The elaboration continues, giving a further characterization: "The simplest example of an instinctive behavior is a fixed action pattern (FAP), in which a very short to medium length sequence of actions, without variation, are carried out in response to a corresponding clearly defined stimulus." There is also a chapter on the physiological difference between a reflex and an instinct: "The stimulus in a reflex may not require brain activity but instead may travel to the spinal cord as a message that is then transmitted back through the body, tracing a path called the reflex arc. Reflexes are similar to fixed action patterns in that most reflexes meet the criteria of a FAP.
However, a fixed action pattern can be processed in the brain as well;" So basically, the emphasis is not so much on the ability to control (or choose) such behavior as on the origin of that behavior, and on the (in)ability to change it: "Though an instinct is defined by its invariant innate characteristics, details of its performance can be changed by experience; for example, a dog can improve its fighting skills by practice." So yeah, the devil is always in the details, which show that the official definitions are not clear-cut precise either. Summary: purely instinctive species would be, according to these definitions, those that are incapable of learning and improving their behavior. Which, IMHO, do not exist in nature. Not all species have the same strength of that capability (obviously), but saying that there are some that cannot change their behavior at all is similar to claiming that there are some that cannot evolve genetically. And those that can rapidly change their behavior depend less on the need to change their genetics as a response to environmental pressure. I guess.
  21. I understand you much better than you give me credit for, or even realize. True, English is not my native language, but you didn't make an effort to learn mine, and that is what makes the difference between you and me from the start. You have no response to my questions and arguments, let alone my definitions; I had no ambition to give any, because I know it is extremely hard to come up with sufficiently precise ones regarding the matter you are talking about. A smart withdrawal from the discussion, as you realize that too.
  22. I defined that? So this doesn't count as your definition any more? How did you define thought? I'm glad you admit that your definitions are poor and not very scientific, regardless of the fact that you now assign them to me. You seem to be programmed not to stick that much to your own words and thoughts, except for the constant repetition that people draw conclusions using their own brain and experience, and not that of a horse. What a great insight. OK, so now this doesn't have anything to do with human abilities; it is about some species being "purely automatic" beings while others "can control their behavior", basically because "they can think". And the main purpose of your definitions is to distinguish between them, am I right now? If you can envisage something else they can be useful for, please just add it to the list. OK then, so where exactly is that dividing line in the evolutionary tree? Second question: if those that can think do so because they are genetically programmed to think, why do you consider that a non-automatic activity? I mean, programs are executed automatically once they are started, and their genetic program has been running since the moment their genetic code was created. Third question: those that cannot control their behavior, can they control anything else in their life, or are they totally controlled by their program? How do they make choices, in your opinion?
  23. You should understand that sensible and useful abstract notions, such as numbers for example, are not human-specific, in the sense that they are understandable by other species too; and that instinctive behavior defined as animal-like behavior, contrasted with a mentation process defined as neural activity that enables human-like behavior, are not sensible and useful abstract notions, simply because their definitions are poor.
  24. Although it is important to strive towards objectivity, in a colloquial conversation not every statement is supposed to be objective. Obviously, my "confidence in apes' intelligence" has as much to do with scientific rigor as your definitions do. It is more a statement of admiration than anything else, and it arose precisely from the comparison between chimps and humans, although both memory speed and accuracy can be measured objectively, without reference to human memory speed and accuracy. Besides that, I don't know if you know which experiments I refer to, but they could not be performed to confirm the intelligence of a species that has no visual sense, due to the nature of the task the subjects are supposed to perform. Just as mirror experiments are inadequate for establishing consciousness in such species...
  25. It seems that nowadays these studies are performed on animals in which that condition was not artificially induced for experimental needs. Kudos to the human race. I don't know what ideas a chimpanzee can explore during his or her life. I bet they can learn to recognize that word invented by humans, understand it perfectly, and somehow demonstrate that to people. I have great confidence in apes' intelligence, after I saw how fast and accurate their memory is. It seems that detecting consciousness depends a lot on the interspecies communication barrier. The higher that barrier, the less we are inclined to acknowledge consciousness on the other side, and the same can be said for intelligence, although this is biased and irrational. However, if there is proof that consciousness requires a neural network to be implemented, I may stand corrected regarding plants. Then again, maybe the same functions as in neural networks can be implemented in a plant's information network too.