Everything posted by ProgrammingGodJordan

  1. This thread concerns why physicists need to study consciousness, and should probably involve themselves in the development of artificial general intelligence.
Firstly, below is a scintillating talk on artificial general intelligence given by Geordie Rose, who holds a PhD in theoretical physics and is a former CTO of D-Wave. He left D-Wave and is now CEO and co-founder of Kindred AI. In the TechVancouver video below, he gives an entertaining talk about why the development of artificial general intelligence is crucial for mankind.
Extra: See also a talk here by Suzanne Gildert (she also left D-Wave to start Kindred AI; she holds a PhD in quantum physics and used to work on superconducting chips at D-Wave).
Secondly, as Max Tegmark expressed in a YouTube video here, physicists have long neglected to define the observer in many of their equations. (The observer being the intelligent agent.)
Alert: Notably, when I refer to consciousness below, I echo Bengio's words from one of his recent papers: "I do not refer here to more elusive meanings that have been attributed to the word “consciousness” (like qualia (Kriegel, 2014)), sticking instead to the notion of attentive awareness in the moment, our ability to focus on information in our minds which is accessible for verbal report, reasoning, and the control of behaviour."
As far as science goes, consciousness is likely definable in terms of very complex equations from disciplines like physics; as an example, degrees of general structures such as manifolds, central to physics and mathematics, are now quite prevalent in the study of deep learning.
Footnote: As I indicate in the content above, there are two crucial end points:
(1) Physics intends to describe the cosmos, and as Max Tegmark mentions, a non-trivial portion of physics, namely the observer, has long eluded it; so the observer's framework/consciousness (as described in the alert above) warrants non-trivial analysis/development.
(2) Understanding consciousness (as described in the alert above) may greatly help the development of artificial general intelligence, which is often underlined as mankind's last invention; apart from solving many human problems (e.g. self-taught artificial intelligence beating doctors at predicting heart attacks), it may also aid in the development of physics (e.g. AI learning to recreate a Nobel-winning physics experiment).
  2. As Max Tegmark expressed in a YouTube video here, physicists have long neglected to define the observer in many of their equations. (The observer being the intelligent agent.) Perhaps consciousness may be defined in terms of very complex equations from disciplines like physics; as an example, degrees of general structures such as manifolds, central to physics and mathematics, are now quite prevalent in the study of deep learning.
  3. Your talk reminds me of Deepak Chopra. Chopra quote: "overthrowing the climactic overthrow of the superstition of materialism". Advice: Try not to sound like Chopra.
  4. Yes, you can use memory to look up the mere three standard trig-rule-collapser forms (just like the memory you would use to memorize the many more standard trig identities). So, using my collapser is still cheaper than looking up the many more trig identities: you gain shorter evaluations, and you also compute with far less lookup.
FOOTNOTE: "Uncool", thanks for your questions. I have improved the "Clear explanation" section in the paper, and removed some distracting typos too. (In addition, in the original post, the term "∫ (√(1−sin²θ)/√(cos²θ)) · cosθ dθ" should instead have been "∫ √(1−sin²θ) · cosθ dθ", based on the problem in the video.)
  5. No. Notice this preliminary Step (1): ∫ sin²ϴ · √(1−sin²ϴ) · cosϴ dϴ
With my collapser, you can easily identify cosϴ from the initial substitution line; so instead of writing down the "√(1−sin²ϴ)" term, then finding cos²ϴ, then square-rooting it, you go straight ahead and evaluate cosϴ in the integral. As a result, you don't need to look for cosϴ from 1−sin²ϴ in the identity table, and you don't need to take a square root. In the scenario above, three preliminary lines are replaced (excluding explicit multiplication), and in other problems, more preliminary lines may be replaced (also excluding explicit multiplication). Either way, you avoid searching the identity table to begin evaluation, and you avoid square-rooting.
  6. Without TrigRuleCollapser:
Let x = sinϴ ⟹ dx = cosϴ dϴ
Preliminary Step (1): ∫ sin²ϴ · √(1−sin²ϴ) · cosϴ dϴ
Preliminary Step (2): ∫ sin²ϴ · cosϴ · cosϴ dϴ
Evaluation: ∫ sin²ϴ · cos²ϴ dϴ
...
******************
With TrigRuleCollapser:
Let x = sinϴ ⟹ dx = cosϴ dϴ
Evaluation: ∫ sin²ϴ · cos²ϴ dϴ
...
(No preliminary steps required; you evaluate the rule: xⁿ · dx/dϴ · dx)
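The "without" and "with" routes above claim the same evaluation. As a quick sanity check (my own sketch, not part of the collapser paper — the midpoint-rule integrator and the interval [0, π/2] are illustrative assumptions), the long-form integrand and the collapsed one can be compared numerically:

```python
import math

def integrate(f, a, b, n=100_000):
    # Simple midpoint-rule quadrature
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Preliminary-step form: sin²ϴ · √(1 − sin²ϴ) · cosϴ
long_form = lambda t: math.sin(t) ** 2 * math.sqrt(1 - math.sin(t) ** 2) * math.cos(t)
# Collapsed evaluation: sin²ϴ · cos²ϴ
short_form = lambda t: math.sin(t) ** 2 * math.cos(t) ** 2

a = integrate(long_form, 0, math.pi / 2)
b = integrate(short_form, 0, math.pi / 2)
print(abs(a - b) < 1e-8)             # True: both routes agree
print(abs(b - math.pi / 16) < 1e-8)  # True: the common value is π/16
```

On [0, π/2], √(1 − sin²ϴ) = cosϴ, so the two integrands coincide; the collapser skips the rewriting steps, not the mathematics.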
  7. Good advice. I know it is excellent advice, because I recently invented a framework for thought that enforces heavy scientific scrutiny. I know how to isolate symbols and analyse them too, because I invented some small degree of calculus in the past.
  8. No wonder AI researchers are still in the regime of euclidean space instead of euclidean superspace. For example, here is yet another paper concerning manifold learning and mean-field theory in Riemannian geometry: https://arxiv.org/pdf/1606.05340v2.pdf The intriguing paper above is a resource I could learn from, in order to continue the supermanifold hypothesis in deep learning.
  9. Thanks for the helpful message and references.
END-NOTE: Source (a) provided a spark for researching supersymmetry in a computational manner. Source (b) provides one of the richest resources for deep learning, while underlining manifolds that bear a non-trivial relation to source (a). While sources like (a) and (b) persist as rich sources of data usable for the task at hand, I detected that they alone would probably not suffice, given that considering supermanifolds beyond manifolds in deep learning is, as far as I could observe, uncharted waters. So I know that it is likely quite necessary to study and experiment beyond the sources I presented.
  10. Based, at least, on the contents of Bengio's deep learning book, I am at least knowledgeable about a good portion of the symbols (some of which are used in relation to superspace, as seen in the OP's paper).
  11. The formulation works for many classes of integrals whose integrands contain some square-rooted expression. Unfortunately, I don't know whether universal ways of collapsing are possible.
  12. Thanks for the supportive, considerate message. Yes, I at least know of the class of symmetry groups that are required (relating to the bosonic Riccati equation). However, do you know anything about Montroll kinks, and the degrees of freedom they afford in variations of signal-energy transfer in biological brains?
FOOTNOTE: When I said "learning the laws of physics" in the third response above in this thread, I was referring in particular to the supersymmetric structure, rather than to myself, much like how DeepMind's manifold-based early concept learner infers laws of physics based on the input space of pixels. Models that learn things better than humans do are typical in deep learning.
  13. The short answer: As I answered above (and as the papers outline), the goal is to use a supermanifold structure in a Bellman-like regime, much like how Google DeepMind uses manifolds in their recent paper.
The longer answer: At least, from ϕ(x;θ)ᵀw, or the machine learning paradigm: in the machine learning regime, something like the following applies: Jordan Bennett's answer to What is the Manifold Hypothesis in Deep Learning?
FOOTNOTE: I don't know much about supermathematics at all, but based, at least, on the generalizability of manifolds and supermanifolds, together with evidence that supersymmetry applies in cognitive science, I could formulate algebra with respect to the deep learning variant of manifolds. This means that given the nature of supermanifolds and manifolds, there is no law preventing ϕ(x;θ,θ̄)ᵀw, some structure in euclidean superspace that may subsume pˆdata (real-valued training samples) over some temporal-difference hyperplane.
  14. Machine learning models use some structure as their memory, in order to build representations of some input space. Supermathematics may be used to represent some input space, given evidence that supersymmetry applies in cognitive science. Learning the laws of physics may be a crucial part of the aforementioned input space, or task. Pay attention to the segments below. [12] refers to: https://arxiv.org/abs/0705.1134
  15. This is a clear explanation w.r.t. the "Trigonometric Rule Collapser Set", that may perhaps be helpful. (See source) The above is not to be confused with u-substitution. (See why)
In the sequence x = sin t, where dx = cos t dt, and 1 − x² = 1 − sin²t = cos²t ... (from the problem ∫ √(1−x²) dx), the novel formulation dx | dt · dx occurs, such that the default way of working trigonometric equations is compressed, permitting a reduction in the number of steps normally employed. For example, in the video above, while evaluating ∫ √(1−x²) dx, in a preliminary step the instructor writes ∫ (√(1−sin²θ)/√(cos²θ)) · cosθ dθ. Using my trig-collapser routine, this step (which may be a set of steps for other problems) is unnecessary, because applying my trig collapser set's novel form dx | dθ · dx, we can just go right ahead and evaluate ∫ cosθ · cosθ dθ. The trigonometric rule collapser set may be an avenue that sparks further studies.
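The worked example above can be sanity-checked: after x = sin t, the collapsed evaluation ∫ cos t · cos t dt gives t/2 + (sin t · cos t)/2, i.e. asin(x)/2 + x·√(1−x²)/2 once t is substituted back. Differentiating that numerically should recover the original integrand √(1−x²). This is my own verification sketch (the test point x = 0.3 and the step h are arbitrary choices):

```python
import math

def antiderivative(x):
    # Result of the collapsed evaluation ∫ cos t · cos t dt, with t = asin(x)
    return math.asin(x) / 2 + x * math.sqrt(1 - x * x) / 2

# Central-difference derivative at an arbitrary point inside (-1, 1)
x, h = 0.3, 1e-6
numeric_derivative = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)

# The derivative of the antiderivative must match the integrand √(1 - x²)
print(abs(numeric_derivative - math.sqrt(1 - x * x)) < 1e-8)  # True
```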
  16. Yes, that was a bit confusing. This should be a better explanation: https://drive.google.com/file/d/0B8H3Ghe4haTWbW1uVGxiZ3ZqbEk/view
  17. This thread concerns attempts to construct artificial general intelligence, which I often underline may likely be mankind's last invention. I am asking anybody that knows supermathematics and machine learning to pitch into the discussion below.
PART A
Back in 2016, I read somewhere that babies know some physics intuitively. It is also empirically observable that babies use that intuition to develop abstractions of knowledge, in a reinforcement-learning-like manner.
PART B
Now, I knew beforehand of two major types of deep learning models, that:
(1) used reinforcement learning (Deepmind Atari Q);
(2) learn laws of physics (UETorch).
However:
(a) Object detectors like (2) use something called pooling to gain translation invariance over objects, so that the model learns regardless of where the object is positioned in the image.
(b) Instead, (1) excludes pooling, because (1) requires translation variance, in order for Q-learning to apply to the changing positions of the objects in pixels.
PART C
As a result I sought a model that could deliver both translation invariance and translation variance at the same time, and reasonably, part of the solution was models that disentangle factors of variation, i.e. manifold learning frameworks. I didn't stop my scientific thinking at manifold learning though. Given that cognitive science may be used to constrain machine learning models (similar to how firms like DeepMind often use cognitive science as a boundary on the deep learning models they produce), I sought to create a disentanglable model that was as constrained by cognitive science as far as algebra would permit.
PART D
As a result I created something called the supermanifold hypothesis in deep learning (a component of another description called 'thought curvature'). This was due to evidence of supersymmetry in cognitive science; I compacted machine-learning-related algebra for disentangling, in the regime of supermanifolds. This could be seen as an extension of manifold learning in artificial intelligence.
Given that the supermanifold hypothesis compounds ϕ(x;θ,θ̄)ᵀw, here is an annotation of the hypothesis: Deep learning entails ϕ(x;θ)ᵀw, which denotes the input space x and learnt representations θ. Deep learning underlines that coordinates or latent spaces in the manifold framework are learnt features/representations, or directions that are sparse configurations of coordinates. Supermathematics entails (x,θ,θ̄), which denotes some x-valued coordinate distribution, and by extension, directions that compact coordinates via θ, θ̄. As such, the aforesaid (x,θ,θ̄) is subject to coordinate transformation. Thereafter, sources 1, 2, 3, 4 and supersymmetry in cognitive science, within the generalizable nature of euclidean space, reasonably effectuate ϕ(x;θ,θ̄)ᵀw.
QUESTIONS:
Does anybody here have good knowledge of supermathematics or a related field, to give any input on the above?
If so, is it feasible to pursue the model I present in the supermanifold hypothesis paper?
And if so, apart from the ones discussed in the paper, what types of pˆdata (training samples) warrant reasonable experiments in the regime of the model I presented?
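For readers unfamiliar with the notation, the deep learning expression ϕ(x;θ)ᵀw above is simply a parameterized feature map followed by a linear readout. Below is a minimal, purely illustrative sketch: the one-hidden-layer tanh map, the shapes, and the random parameters are my assumptions, not anything from the thought-curvature paper (a supermanifold variant would additionally involve superspace coordinates).

```python
import math
import random

random.seed(0)

def phi(x, theta):
    # Hypothetical feature map ϕ(x; θ): one hidden tanh layer
    W, b = theta
    return [math.tanh(sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i)
            for row, b_i in zip(W, b)]

def dot(u, v):
    return sum(u_i * v_i for u_i, v_i in zip(u, v))

# Illustrative shapes: 4-dimensional input, 3 learnt features
W = [[random.gauss(0, 1) for _ in range(4)] for _ in range(3)]
b = [random.gauss(0, 1) for _ in range(3)]
theta = (W, b)                               # learnt representations θ
w = [random.gauss(0, 1) for _ in range(3)]   # linear readout weights

x = [random.gauss(0, 1) for _ in range(4)]   # a sample from the input space
output = dot(phi(x, theta), w)               # ϕ(x; θ)ᵀw: a scalar prediction
print(isinstance(output, float))             # True
```

Since tanh outputs lie in (−1, 1), the prediction is bounded by Σ|wᵢ| regardless of the input sample, which is one (very loose) sense in which the learnt representation constrains the model.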
  18. Did you miss the OP? Nowhere there did I express that "we are the creators of the universe".
FOOTNOTE: Notably, I am atheistic, and the original post likewise stipulates Sam's atheistic nature.
  19. I apologize, for I am unable to parse your comment above. Of what consequence is your comment above? FOOTNOTE: I don't watch football, and I know not who "Pele" is.
  20. Yes, I enjoyed a majority of Asimov's publications some years ago, including The Last Question and The Last Answer. I recall an intriguing short story, also concerning artificial general intelligence, called "A Senseless Conversation", by Zach Barnett.
FOOTNOTE: As such, the source presented merely underlined a simple, scientifically grounded equivalence for the long-established impression amidst the topic of creation, predominant in religious persuasion, namely God. Such a God, as redefined, was, as I had long expressed, similar to the God Sam Harris was referring to, pertinently in the regime of artificial general intelligence.
  21. Why do you bother to ignore evidence? https://www.google.com.jm/search?q=concern&oq=concern&aqs=chrome..69i57.1064j0j1&sourceid=chrome&ie=UTF-8
  22. No. I clearly expressed that belief is a model that permits both science and non-science. However, belief typically facilitates people's ignoring of evidence (as research and definitions show). A model that permits the large-scale ignoring of evidence contrasts with science. Instead, we may employ scientific thinking, which largely prioritizes evidence, rather than a model (i.e. belief) that largely facilitates the ignoring of evidence. To concern may be to consider. So, an alternative is: "We may highly consider evidence."
  23. Reminds me of some words said to me in the distant past: "Maybe our misery and/or joy, and our cosmos as a whole, is merely entertainment, as a game much like GTA, created by intelligent entities".