thoughtfuhk

Senior Members
  • Content count: 52
  • Joined
  • Last visited

Community Reputation: -4 Poor

About thoughtfuhk
  • Rank: Meson

Recent Profile Visitors
  377 profile views
  1. 1.) Reasonably, evolution is optimising ways of contributing to the increase of entropy, as systems very slowly approach equilibrium (the universe's predicted end).
     a.) Within that process, work or activities done through several ranges of intelligent behaviour are reasonably ways of contributing to the increase of entropy. (See source)
     b.) As species got more and more intelligent, nature was reasonably finding better ways to contribute to increases of entropy. (Intelligent systems can be observed as being biased towards entropy maximization.)
     c.) Humans are slowly getting smarter, but even if we augment our intellect by CRISPR-like routines or implants, we will reasonably be limited by how many computational units or neurons fit in our skulls.
     d.) AGI/ASI won't be subject to the size of the human skull or human cognitive hardware. (The laws of physics/thermodynamics permit human-exceeding intelligence in non-biological form.)
     e.) As AGI/ASI won't face the limits that humans do, they are a subsequent step (though non-biological), particularly in the regime of contributing to better ways of increasing entropy, compared to humans.
     2.) The above is why the purpose of the human species is reasonably to create AGI/ASI.
     There are many degrees of freedom, or many ways to contribute to entropy increase. This set of degrees of freedom is a "configuration space" or "system space", i.e. the total set of possible actions or events, and in particular there are "paths" along the space that simply describe ways to contribute to entropy maximization.
     These "paths" are activities in nature, over some time scale "[math]\tau[/math]" and beyond.
     As such, as observed in nature, intelligent agents generate particular "paths" (intelligent activities) that prioritize efficiency in entropy maximization, over more general paths that don't care about or deal with intelligence. In this way, intelligent agents are "biased", because they occur in a particular region (do particular activities) of the "configuration space" or "system space", i.e. the total possible actions in nature.
     Highly intelligent agents aren't merely biased for the sake of doing distinct things (i.e. cognitive tasks), compared to non-intelligent or other less intelligent agents in nature, for contributing to entropy increase; they are biased, by extension, towards behaving in ways that are actually more effective at maximising entropy production, compared to non-intelligent or less intelligent agents in nature.
     As such, the total system space can be described with respect to a general function, in relation to how activities may generally increase entropy, afforded by degrees of freedom in said space:
     [math]S_c(X,\tau) = -k_B \int_{x(t)} \Pr(x(t)\,|\,x(0)) \, \ln \Pr(x(t)\,|\,x(0)) \, \mathcal{D}x(t)[/math] Equation (2)
     6. In general, agents are demonstrated to approach more and more complicated macroscopic states (from smaller/earlier, less efficient entropy-maximization states called "microstates"), while activities occur that are "paths" in the total system space.
     6.b) Highly intelligent agents behave in ways that engender unique paths (by doing cognitive tasks/activities, compared to the simple tasks done by lesser intelligences or non-intelligent things), and by doing so they approach, consume or "reach" more of the aforementioned macroscopic states, in comparison to lesser intelligences and non-intelligence.
6.c) In other words, highly intelligent agents access more of the total actions, or configuration space, or degrees of freedom in nature: the same degrees of freedom associated with entropy maximization.
6.d) In this way, there is a "causal force" which constrains the degrees of freedom seen in the total configuration space (or total ways to increase entropy) in the form of humans, and this constrained sequence of intelligent or cognitive activities is the way in which said highly intelligent things are said to be biased to maximize entropy:
[math]F_0(X,\tau) = T_c \nabla_X S_c(X,\tau) \,\big|_{X_0}[/math] Equation (4)
7) In the extension of equation (2), seen in equation (4) above, "[math]T_c[/math]" is a way to observe the various unique states that a highly intelligent agent may occupy, over some time scale "[math]\tau[/math]". (The technical way to say this is that "[math]T_c[/math] parametrizes the agents' bias towards entropy maximization".)
8) Beyond human intelligence, AGI/ASI are yet more ways that shall reasonably permit more and more access to activities or "paths" to maximise entropy increase.
A) Looking at item (8), one may see that the human objective/goal is reasonably to trigger a next step in the landscape of things that can access more ways to maximize entropy. (Science likes objectivity.)
B) The trend says nature doesn't just stop at one species; it finds more and more ways to access more entropy-maximization techniques. Humans are one way to get to whichever subsequent step will yield more ways (aka more intelligence, i.e. AGI/ASI) that shall generate additional "macrostates" or paths towards better entropy-maximization methods.
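A minimal numerical sketch of Equation (2), not taken from the referenced paper: it assumes a 1-D Gaussian random walk, collapses the path distribution Pr(x(t)|x(0)) to a histogram of sampled endpoints, and sets k_B = 1. The function name causal_path_entropy and all of its parameters are illustrative choices, not established terminology.

[code]
import numpy as np

def causal_path_entropy(x0, n_paths=5000, n_steps=100, step_std=1.0, n_bins=50, k_B=1.0):
    """Monte Carlo estimate of the causal path entropy S_c(X, tau) of Equation (2).

    Instead of integrating over all paths x(t), sample random-walk paths
    starting at x(0) = x0 and histogram their endpoints, so the full path
    distribution Pr(x(t)|x(0)) is collapsed to an endpoint distribution.
    """
    # Each path is a cumulative sum of Gaussian increments over n_steps.
    steps = np.random.normal(0.0, step_std, size=(n_paths, n_steps))
    endpoints = x0 + steps.sum(axis=1)

    # Histogram the endpoints to approximate the distribution over "paths".
    counts, _ = np.histogram(endpoints, bins=n_bins)
    p = counts[counts > 0] / counts.sum()   # drop empty bins so the log is defined

    # Discrete Shannon entropy, -k_B * sum(p ln p), standing in for Equation (2).
    return -k_B * np.sum(p * np.log(p))

print(causal_path_entropy(x0=0.0))
[/code]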
  2. The "Supersymmetric Artificial Neural Network" (or "Edward Witten/String theory powered artificial neural network") is a Lie-superalgebra-aligned algorithmic learning model (created by myself 2 years ago), based on evidence pertaining to supersymmetry in the biological brain.
     A Deep Learning overview, by gauge group notation:
     (1) There has been a clear progression of "solution geometries", ranging from those of the ancient Perceptron to complex-valued neural nets, Grassmann manifold artificial neural networks, or unitary RNNs. These models may be denoted by [math]\phi(x,\theta)^{\top}w[/math], parameterized by [math]\theta[/math], and expressible as geometrical groups ranging from the orthogonal to the special unitary group: [math]SO(n)[/math] to [math]SU(n)[/math]. They got better at representing input data (i.e. representing richer weights), so the learning models generated better hypotheses or guesses. By "solution geometry" I mean simply the class of regions where an algorithm's weights may lie when generating those weights to do some task.
     (2) As such, if one follows cognitive science, one would know that biological brains may be measured in terms of supersymmetric operations (Perez et al, "Supersymmetry at brain scale"). These supersymmetric biological brain representations can be represented by the supercharge-compatible special unitary notation [math]SU(m|n)[/math], or [math]\phi(x,\theta, \bar{\theta})^{\top}w[/math] parameterized by [math]\theta, \bar{\theta}[/math], which are supersymmetric directions, unlike the [math]\theta[/math] seen in item (1). Notably, supersymmetric values can encode or represent more information than the prior classes seen in (1), in terms of "partner potential" signals for example.
     (3) So, state-of-the-art machine learning work forming [math]U(n)[/math]- or [math]SU(n)[/math]-based solution geometries, although non-supersymmetric, is already in the family of supersymmetric solution geometries that may be observed as occurring in the biological brain, or [math]SU(m|n)[/math] supergroup representation.
     Pseudocode for the "Supersymmetric Artificial Neural Network":
     a. Initialize an input supercharge-compatible special unitary matrix [math]SU(m|n)[/math]. [See source] (This is the atlas seen in b.)
     b. Compute [math]\nabla C[/math] w.r.t. [math]SU(m|n)[/math], where [math]C[/math] is some cost manifold. Weight space is reasonably some Kähler-potential-like form [math]K(\phi,\phi^*)[/math], obtained on some initial projective space [math]CP^{n-1}[/math]. (source) It is feasible that [math]CP^{n-1}[/math] (a [math]C^{\infty}[/math]-bound atlas) may be obtained from charts of Grassmann manifold networks, where there exists some invertible submatrix entailing a matrix [math]A \in \phi_i (U_i \cap U_j)[/math] for [math]U_i = \pi(V_i)[/math], where [math]\pi[/math] is a submersion mapping enabling some differentiable Grassmann manifold [math]GF_{k,n}[/math], and [math]V_i = \{u \in \mathbb{R}^{n \times k} : \det(u_i) \neq 0\}[/math]. (source)
     c. Parameterize [math]SU(m|n)[/math] in [math]-\nabla C[/math] terms, by Darboux transformation.
     d. Repeat until convergence.
     References:
     • The "Supersymmetric Artificial Neural Network" on github.
     • "Thought Curvature" paper (2016).
     • Although about manifolds rather than supermanifolds/supersymmetry, a talk by separate authors at Harvard University regarding curvatures in Deep Learning (2017).
     • A relevant debate between Yann LeCun and Gary Marcus, along with my commentary, on the importance of priors in Machine learning.
     • DeepMind's discussion regarding Neuroscience-Inspired Artificial Intelligence.
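As a companion to item (1) of the overview in this post (and only that item: this is the ordinary SU(n) "solution geometry", not the SU(m|n) supergroup of the pseudocode), here is a minimal sketch of constraining a weight matrix to be special unitary: an unconstrained parameter matrix is mapped to a traceless skew-Hermitian generator, whose matrix exponential lies in SU(n). The function name unitary_weight is an illustrative choice of mine.

[code]
import numpy as np
from scipy.linalg import expm

def unitary_weight(params_real, params_imag):
    """Map unconstrained real parameters to a special unitary weight matrix.

    A skew-Hermitian generator S (S^H = -S) is built from the free parameters;
    expm(S) is then unitary, and making S traceless keeps det(expm(S)) = 1,
    so the resulting weights lie in SU(n).
    """
    A = params_real + 1j * params_imag
    S = A - A.conj().T                                      # skew-Hermitian part
    S -= (np.trace(S) / S.shape[0]) * np.eye(S.shape[0])    # traceless, so det = 1
    return expm(S)

n = 4
rng = np.random.default_rng(0)
W = unitary_weight(rng.normal(size=(n, n)), rng.normal(size=(n, n)))
print(np.allclose(W.conj().T @ W, np.eye(n)))   # unitary check: True
print(np.isclose(np.linalg.det(W), 1.0))        # determinant check: True
[/code]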
  3. A) Look at item (8.b), and you'll see that the human objective/goal is reasonably to trigger a next step in the landscape of things that can access more ways to maximize entropy. (Science likes objectivity.) B) Remember, the trend says nature doesn't just stop at one species; it finds more and more ways to access more entropy-maximization techniques. Humans are one way to get to whichever subsequent step will yield more ways (aka more intelligence, i.e. AGI/ASI) that shall generate additional macrostates or paths towards better entropy-maximization methods.
  4. 1) There are many degrees of freedom, or many ways to contribute to entropy increase. This set of degrees of freedom is a "configuration space" or "system space", i.e. the total set of possible actions or events, and in particular there are "paths" along the space that simply describe ways to contribute to entropy maximization.
     3) These "paths" are activities in nature, over some time scale "[math]\tau[/math]" and beyond.
     4) As such, as observed in nature, intelligent agents generate particular "paths" (intelligent activities) that prioritize efficiency in entropy maximization, over more general paths that don't care about or deal with intelligence. In this way, intelligent agents are "biased", because they occur in a particular region (do particular activities) of the "configuration space" or "system space", i.e. the total possible actions in nature.
     5) Highly intelligent agents aren't merely biased for the sake of doing distinct things (i.e. cognitive tasks), compared to non-intelligent or other less intelligent agents in nature, for contributing to entropy increase; they are biased, by extension, towards behaving in ways that are actually more effective at maximising entropy production, compared to non-intelligent or less intelligent agents in nature.
     6) As such, the total system space can be described with respect to a general function, in relation to how activities may generally increase entropy, afforded by degrees of freedom in said space:
     [math]S_c(X,\tau) = -k_B \int_{x(t)} \Pr(x(t)\,|\,x(0)) \, \ln \Pr(x(t)\,|\,x(0)) \, \mathcal{D}x(t)[/math] Equation (2)
     7.a) In general, agents are demonstrated to approach more and more complicated macroscopic states (from smaller/earlier, less efficient entropy-maximization states called "microstates"), while activities occur that are "paths" in the total system space, as mentioned before.
     7.b) Highly intelligent agents behave in ways that engender unique paths (by doing cognitive tasks/activities, compared to the simple tasks done by lesser intelligences or non-intelligent things), and by doing so they approach, consume or "reach" more of the aforementioned macroscopic states, in comparison to lesser intelligences and non-intelligence.
     7.c) In other words, highly intelligent agents access more of the total actions, or configuration space, or degrees of freedom in nature: the same degrees of freedom associated with entropy maximization.
     7.d) In this way, there is a "causal force" which constrains the degrees of freedom seen in the total configuration space (or total ways to increase entropy) in the form of humans, and this constrained sequence of intelligent or cognitive activities is the way in which said highly intelligent things are said to be biased to maximise entropy:
     [math]F_0(X,\tau) = T_c \nabla_X S_c(X,\tau) \,\big|_{X_0}[/math] Equation (4)
     7.e) In the extension of equation (2), seen in equation (4) above, "[math]T_c[/math]" is a way to observe the various unique states that a highly intelligent agent may occupy, over some time scale "[math]\tau[/math]". (The technical way to say this is that "[math]T_c[/math] parametrizes the agents' bias towards entropy maximization".)
     8.a) Finally, reading is yet another cognitive task, or yet another way for nature/humans to help to access more of the total activities associated with entropy maximization, as described throughout item 7 above.
     8.b) Beyond human intelligence, AGI/ASI are yet more ways that shall reasonably permit more and more access to activities or "paths" to maximise entropy increase.
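A rough, hedged illustration of Equation (4), again not from the referenced paper: the causal entropic force is estimated by finite differences of an endpoint-histogram entropy for a 1-D random walk above an absorbing wall; the wall is what makes the entropy, and hence the force, depend on the starting position. The names path_entropy and causal_entropic_force and the wall parameter are illustrative assumptions, and the Monte Carlo estimate is noisy.

[code]
import numpy as np

def path_entropy(x0, wall=0.0, n_paths=5000, n_steps=100, step_std=1.0):
    """Endpoint-histogram estimate of S_c(X, tau) for walks that must stay above `wall` (k_B = 1)."""
    steps = np.random.normal(0.0, step_std, size=(n_paths, n_steps))
    trajectories = x0 + np.cumsum(steps, axis=1)
    alive = (trajectories > wall).all(axis=1)     # discard paths that ever hit the wall
    endpoints = trajectories[alive, -1]
    if endpoints.size < 2:
        return 0.0
    # Fixed-width bins, so a wider spread of accessible endpoints gives a larger entropy.
    counts, _ = np.histogram(endpoints, bins=np.arange(wall, wall + 80.0, 1.0))
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def causal_entropic_force(x0, T_c=1.0, eps=0.5):
    """Finite-difference estimate of Equation (4): F_0 = T_c * dS_c/dX, evaluated at X = x0."""
    return T_c * (path_entropy(x0 + eps) - path_entropy(x0 - eps)) / (2.0 * eps)

# Near the wall the estimated force is positive: it pushes the agent towards the
# region where more future paths remain accessible, i.e. towards higher path entropy.
print(causal_entropic_force(x0=2.0))
[/code]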
  5. 1) Please refer to the URL you conveniently omitted from your quote of me above. 2) You claiming what I said to be unsupported, especially when I provided a URL (i.e. supporting evidence) which you omitted from your response, is clearly dishonest.
  6. An isolated repetition of mine to answer that question:
     As things got smarter from generation to generation, things got demonstrably better and better at maximizing entropy. (As mentioned before)
     As entropy maximization got better and better, intelligence got more general. (As mentioned before)
     More and more general intelligence provided better and better ways to maximize entropy, and it is a law of nature that entropy is increasing; science shows that this is reasonably tending towards equilibrium, where no more work (or activities) will be possible. (As mentioned before)
     The reference given earlier: http://www.alexwg.org/publications/PhysRevLett_110-168702.pdf
  7. Cognitive tasks refer to activities done through intelligent behaviour. (As long mentioned) For example, reading is a cognitive task.
  8. I didn't down vote you. Anyway, please see the responses made to strange here.
  9. Your comment is 26 minutes old, or 7 minutes older than my latest edited response. (In other words, I removed the typo 7 minutes before your comment, and the forum probably notified you of that, but you still posted anyway) So, in case you missed the edit, here is the query I asked: Can you explain why you are not satisfied with my prior summary? Nobody has reported lack of understanding of the summary. If they did, I would be motivated to update it, but they are yet to present that they don't understand the summary. And I already explained those things. (See the items in blue)
  10. I've already done so. Can you explain why you are not satisfied with my prior summary? The following ought to help too, although Alexander advocates very, very long-term human progression, whereas I describe, as Richard Dawkins does, that the human species may not be required after AGI is created:
  11. Of course, what I've said to strange above applies to you as well. For example, what is it you detect to be invalid about my prior post (which already summarized the work quite well in relation to my OP that described AGI as a purpose of the human species)? Will I have to explain all details down to what year the authors of the references were born, and what year the authors first attended college?
  12. It seems you'll never be satisfied, no matter how clear the responses returned to you are. You were already on an old agenda to support your false preconceived notions on the matter, and so you continue to grovel in some delusion that I don't understand what I am presenting (simply because the topic appears to be outside of your typical scope of understanding). I've done my best to summarize the work, and I am yet to receive any sensible criticism from you. What more details do you desire? You must grasp by now the connection between intelligence, evolution, and optimization. I don't detect where I've failed to show these connections. If the topic did not exist within the scope of your knowledge before I posted the OP, now you should at least be less ignorant on the matter. Nobody is infallible/omniscient.
  13. What kinds of surfaces exist in Maths?

    Hypersurfaces are extremely common in Machine learning/Deep learning: we see hypersurfaces in how artificial neurons acquire activity from neighbouring neurons, by integration, summation or averaging.
    Example 1: A weak analogy to the biological brain, in the artificial neuron activation sum (or hypersurface): [math]\text{Neighbouring}_{activity} = W \cdot x + b[/math] (common in old Hebbian learning models).
    Example 2: A less weak analogy to the biological brain, in the artificial neuron activation sum (or hypersurface): [math]\text{Neighbouring}_{activity} = W * x + b[/math] (representing a convolution denoted by [math]*[/math], common in modern Convolutional neural networks).
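A minimal NumPy sketch of the two activation sums above; the variable names and numbers are illustrative only, not taken from any cited model.

[code]
import numpy as np

# Example 1: dense activation, the hypersurface W . x + b.
x = np.array([0.5, -1.2, 3.0])              # activity from neighbouring neurons
W = np.array([[0.2, -0.4, 0.1],
              [0.7,  0.3, -0.5]])            # 2 neurons, each integrating 3 inputs
b = np.array([0.1, -0.2])
dense_activity = W @ x + b                    # integration/summation of neighbouring activity

# Example 2: convolutional activation, W * x + b, with * a 1-D convolution.
signal = np.array([0.5, -1.2, 3.0, 0.8, -0.3])
kernel = np.array([0.25, 0.5, 0.25])          # shared weights slid across the input
conv_activity = np.convolve(signal, kernel, mode="same") + 0.1

print(dense_activity)
print(conv_activity)
[/code]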
  14. To the best of my recollection and abilities, that is what I had been doing all along, including what I posted 4 posts ago: