Posts by ProgrammingGodJordan (Senior Member; favorite area of science: Artificial Intelligence)
  1. This thread concerns why physicists need to study consciousness, and perhaps involve themselves in the development of artificial general intelligence.

     Firstly, below is a scintillating talk on artificial general intelligence given by Geordie Rose, who holds a PhD in theoretical physics and is the former CTO of D-Wave. He left D-Wave and is now CEO and co-founder of Kindred AI. In the TechVancouver video below, he gives an entertaining talk about why the development of artificial general intelligence is crucial for mankind.

     Extra: See also a talk here by Suzanne Gildert. (She also left D-Wave to start Kindred AI. She holds a quantum physics PhD as well, and used to work on superconducting chips at D-Wave.)

     Secondly, as Max Tegmark expressed in a YouTube video here, physicists have long neglected to define the observer in many of their equations. (The observer being the intelligent agent.)

     Alert: Notably, when I refer to consciousness below, I echo Bengio's words from one of his recent papers: "I do not refer here to more elusive meanings that have been attributed to the word “consciousness” (like qualia (Kriegel, 2014)), sticking instead to the notion of attentive awareness in the moment, our ability to focus on information in our minds which is accessible for verbal report, reasoning, and the control of behaviour."

     As far as science goes, consciousness is likely definable in terms of very complex equations from disciplines like physics; as an example, degrees of general structures such as manifolds, central to physics and mathematics, are now quite prevalent in the study of deep learning.

     Footnote: As I indicate in the content above, there are two crucial end points:
     (1) Physics intends to describe the cosmos, and as Max Tegmark mentions, a non-trivial portion of physics, namely the observer, has long eluded it; so the observer's framework/consciousness (as described in the alert above) warrants non-trivial analysis/development.
     (2) Understanding consciousness (as described in the alert above) may lend large help to the development of artificial general intelligence, often underlined as mankind's last invention, which apart from solving many human problems (e.g., self-taught artificial intelligence beats doctors at predicting heart attacks) may also aid the development of physics. (e.g., AI learns and recreates a Nobel-winning physics experiment)
  2. As Max Tegmark expressed in a YouTube video here, physicists have long neglected to define the observer in many of their equations. (The observer being the intelligent agent.) Perhaps consciousness may be defined in terms of very complex equations from disciplines like physics; as an example, degrees of general structures such as manifolds, central to physics and mathematics, are now quite prevalent in the study of deep learning.
  3. Your talk reminds me of Deepak Chopra. Chopra quote: "overthrowing the climactic overthrow of the superstition of materialism". Advice: Try not to sound like Chopra.
  4. Yes, you can use memory to look up the mere three standard trig rule collapser forms. (Just like the memory you could use to memorize the many, many more standard trig identities.) So, using my collapser is still cheaper than looking up the many more trig identities: you gain shorter evaluations, and you also compute with far less lookup. FOOTNOTE: "Uncool", thanks for your questions. I have improved the "Clear explanation" section in the paper, and removed some distracting typos too. (In addition, in the original post, the term "∫ (√(1−sin²θ)/√(cos²θ)) · cosθ dθ" should have been "∫ √(1−sin²θ) · cosθ dθ" instead, based on the problem in the video.)
  5. No. Notice this preliminary Step (1): ∫ sin²θ · √(1−sin²θ) · cosθ dθ

     With my collapser, you can easily identify cosθ from the initial substitution line; so instead of writing down the √(1−sin²θ) term, then finding cos²θ, then square-rooting it, you go straight ahead and evaluate cosθ in the integral. As a result, you don't need to look up cosθ from 1−sin²θ in the identity table, and you don't need to take a square root.

     In the scenario above, three preliminary lines are replaced (excluding explicit multiplication), and in other problems, more preliminary lines may be replaced (also excluding explicit multiplication). Either way, you avoid searching the identity table to begin evaluation, and you avoid square-rooting.
  6. Without TrigRuleCollapser:

     Let x = sinθ ⟹ dx = cosθ dθ
     Preliminary Step (1): ∫ sin²θ · √(1−sin²θ) · cosθ dθ
     Preliminary Step (2): ∫ sin²θ · cosθ · cosθ dθ
     Evaluation: ∫ sin²θ · cos²θ dθ
     ...

     ******************

     With TrigRuleCollapser:

     Let x = sinθ ⟹ dx = cosθ dθ
     Evaluation: ∫ sin²θ · cos²θ dθ
     ...

     (No preliminary steps required; you evaluate the rule: xⁿ · dx/dθ · dx)
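The claim in the step comparison above can be sanity-checked numerically. This is my own illustration (function names are mine, not from the posts): on (−π/2, π/2), cosθ ≥ 0, so √(1−sin²θ) equals cosθ and the long-form integrand matches the collapsed one.

```python
import math

def long_form(t):
    # Preliminary-steps integrand: sin^2(t) * sqrt(1 - sin^2(t)) * cos(t)
    return math.sin(t) ** 2 * math.sqrt(1 - math.sin(t) ** 2) * math.cos(t)

def collapsed(t):
    # Collapsed integrand: sin^2(t) * cos^2(t), read off directly
    # from the substitution line dx = cos(t) dt
    return math.sin(t) ** 2 * math.cos(t) ** 2

# On (-pi/2, pi/2), cos(t) >= 0, so sqrt(1 - sin^2(t)) == cos(t)
# and the two integrands agree pointwise.
for t in [-1.2, -0.5, 0.0, 0.7, 1.3]:
    assert abs(long_form(t) - collapsed(t)) < 1e-12
```

This only verifies that the shortcut is algebraically sound on the usual substitution interval; it says nothing about lookup cost, which is the point debated in the thread.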
  7. Good advice. I know it is excellent advice, because I recently invented a framework for thought that enforces heavy scientific scrutiny. I also know how to isolate symbols and analyse them, because I have invented some small degree of calculus in the past.
  8. No wonder AI researchers are still in the regime of Euclidean space instead of Euclidean superspace. For example, here is yet another paper concerning manifold learning, mean-field theory, and Riemannian geometry: https://arxiv.org/pdf/1606.05340v2.pdf The intriguing paper above is a rich resource I could learn from, in order to continue the supermanifold hypothesis in deep learning.
  9. Thanks for the helpful message and references. END-NOTE: Source (a) provided a spark for researching supersymmetry in a computational manner. Source (b) provides one of the richest resources for deep learning, while underlining manifolds, which bear a non-trivial relation to source (a). While sources like (a) and (b) persist as rich sources of data usable for the task at hand, I detected that they alone would probably not suffice, given that considering supermanifolds beyond manifolds in deep learning is, as far as I can observe, novel waters. So I know it is likely quite necessary to study and experiment beyond the sources I presented.
  10. Based, at least, on the contents of Bengio's deep learning book, I am knowledgeable about a good portion of the symbols (some of which are used in relation to superspace, as seen in the OP's paper).
  11. The formulation works for many classes of integrals whose integrands contain some square-rooted expression. Unfortunately, I don't know whether universal ways of collapsing are possible.
  12. Thanks for the supportive, considerate message. Yes, I at least know of the class of symmetry groups that are required. (Relating to the bosonic Riccati equation.) However, do you know anything about Montroll kinks, and the degrees of freedom they afford in variations of signal-energy transfer in biological brains? FOOTNOTE: When I said "learning the laws of physics" in the third response above in this thread, I was referring in particular to the supersymmetric structure rather than to myself, much like how DeepMind's manifold-based early concept learner infers laws of physics based on an input space of pixels. Models that learn such things better than humans are typical in deep learning.
  13. The short answer: As I answered above (and as the papers outline), the goal is to use a supermanifold structure in a Bellman-like regime, much like how Google DeepMind uses manifolds in their recent paper.

      The longer answer: At least from ϕ(x;θ)ᵀw, or the machine learning paradigm: in the machine learning regime, something like the following applies: Jordan Bennett's answer to "What is the Manifold Hypothesis in Deep Learning?"

      FOOTNOTE: I don't know much about supermathematics at all, but based, at least, on the generalizability of manifolds and supermanifolds, together with evidence that supersymmetry applies in cognitive science, I could formulate algebra with respect to the deep learning variant of manifolds. This means that, given the nature of supermanifolds and manifolds, there is no law preventing ϕ(x;θ)ᵀw, some structure in Euclidean superspace, from subsuming p̂_data (real-valued training samples) over some temporal-difference hyperplane.
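For readers unfamiliar with the ϕ(x;θ)ᵀw notation used above: the network learns a feature map ϕ parametrized by θ, and the prediction is a linear readout w applied to those features. A minimal sketch, with hypothetical layer sizes and randomly initialized (untrained) weights chosen purely for illustration:

```python
import math
import random

random.seed(0)

# theta: parameters of the learned feature map phi(x; theta)
# (a single tanh hidden layer; shapes and values are illustrative only)
W1 = [[random.gauss(0, 1) for _ in range(3)] for _ in range(4)]
b1 = [0.0] * 4

def phi(x):
    # Nonlinear feature map phi(x; theta): tanh(W1 @ x + b1)
    return [math.tanh(sum(wij * xj for wij, xj in zip(row, x)) + b)
            for row, b in zip(W1, b1)]

# w: the final linear weights applied to the learned features
w = [random.gauss(0, 1) for _ in range(4)]

def model(x):
    # y = phi(x; theta)^T w
    return sum(wi * fi for wi, fi in zip(w, phi(x)))

y = model([0.5, -1.0, 2.0])  # a scalar prediction
```

In training, θ (here W1, b1) and w are fitted jointly; the "manifold hypothesis" discussion concerns the geometry of the representation ϕ produces, and the post speculates about replacing that manifold structure with a supermanifold.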
  14. Machine learning models use some structure as their memory, in order to form representations based on some input space. Supermathematics may be used to represent some input space, given evidence that supersymmetry applies in cognitive science. Learning the laws of physics may be a crucial part of the aforementioned input space, or task. Pay attention to the segments below. [12] refers to: https://arxiv.org/abs/0705.1134
  15. This is a clear explanation w.r.t. the "Trigonometric Rule Collapser Set", which may perhaps be helpful. (See source.) The above is not to be confused with u-substitution. (See why.)

      In the sequence:
      x = sin t, where dx = cos t dt, and 1 − x² = 1 − sin²t = cos²t ....(from the problem: ∫ √(1−x²) dx)

      ..the novel formulation dx | dt · dx occurs, such that the default way of working trigonometric equations is compressed, reducing the number of steps normally employed. For example, in the video above, while evaluating ∫ √(1−x²) dx, in a preliminary step the instructor writes ∫ (√(1−sin²θ)/√(cos²θ)) · cosθ dθ. Using my trig collapser routine, this step (which may be a set of steps for other problems) is unnecessary, because applying my trig collapser set's novel form, dx | dθ · dx, we can just go right ahead and evaluate ∫ cosθ · cosθ dθ. The trigonometric rule collapser set may be an avenue that sparks further study.
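As a sanity check on the worked example above (my own illustration, not from the original post): the substitution x = sin t implies ∫₀^a √(1−x²) dx = ∫₀^{arcsin a} cos²t dt, and a simple midpoint-rule quadrature confirms the two sides agree numerically.

```python
import math

def midpoint_integral(f, a, b, n=10000):
    # Midpoint-rule quadrature of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

upper = 0.8  # an arbitrary upper limit inside (0, 1)

# Left side: the original integral in x
lhs = midpoint_integral(lambda x: math.sqrt(1 - x * x), 0.0, upper)

# Right side: after x = sin(t), dx = cos(t) dt,
# the integrand collapses to cos^2(t)
rhs = midpoint_integral(lambda t: math.cos(t) ** 2, 0.0, math.asin(upper))

assert abs(lhs - rhs) < 1e-8
```

Both sides also match the known antiderivative (x√(1−x²) + arcsin x)/2 evaluated at the limits, which is what the video's instructor ultimately derives.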