 # ProgrammingGodJordan

Senior Members

158


• Birthday 04/17/1991

## Contact Methods

• Website URL
https://jordanmicahbennett.github.io/Supermathematics-and-Artificial-General-Intelligence/

## Profile Information

• Location
Jamaica
• Favorite Area of Science
Artificial Intelligence


## ProgrammingGodJordan's Achievements

### Reputation: -18

2. As Max Tegmark expressed in a YouTube video here, physicists have long neglected to define the observer (that is, the intelligent agent) in many of their equations. Perhaps consciousness may be defined in terms of very complex equations from disciplines like physics; as an example, general structures such as manifolds, central to physics and mathematics, are now quite prevalent in the study of deep learning.
3. Your talk reminds me of Deepak Chopra. Chopra quote: "overthrowing the climactic overthrow of the superstition of materialism". Advice: Try not to sound like Chopra.
4. Yes, you can use memory to look up the merely three standard trig rule collapser forms (just like the memory you could use to memorize the many more standard trig identities). So using my collapser is still cheaper than looking up the many more trig identities: you gain shorter evaluations, and you also compute with far less lookup. FOOTNOTE: "Uncool", thanks for your questions. I have improved the "Clear explanation" section in the paper, and removed some distracting typos too. (In addition, in the original post, the term "∫ (√(1−sin²θ)/√(cos²θ)) · cosθ dθ" should have been "∫ √(1−sin²θ) · cosθ dθ" instead, based on the problem in the video.)
5. No. Notice this preliminary Step (1): ∫ sin²θ · √(1−sin²θ) · cosθ dθ. With my collapser, you can easily identify cosθ from the initial substitution line; so instead of writing down the "√(1−sin²θ)" term, then finding cos²θ, then taking its square root, you go straight ahead and evaluate cosθ in the integral. As a result, you don't need to look up cosθ from 1−sin²θ in the identity table, and you don't need to take a square root. In the scenario above, three preliminary lines are replaced (excluding explicit multiplication), and in other problems more preliminary lines may be replaced (also excluding explicit multiplication). Either way, you avoid searching the identity table to begin evaluation, and you avoid square rooting.
6. Without TrigRuleCollapser:
   Let x = sinθ ⟹ dx = cosθ dθ
   Preliminary Step (1): ∫ sin²θ · √(1−sin²θ) · cosθ dθ
   Preliminary Step (2): ∫ sin²θ · cosθ · cosθ dθ
   Evaluation: ∫ sin²θ · cos²θ dθ
   ...
   ******************
   With TrigRuleCollapser:
   Let x = sinθ ⟹ dx = cosθ dθ
   Evaluation: ∫ sin²θ · cos²θ dθ
   ...
   (No preliminary steps required; you evaluate the rule: xⁿ · dx/dθ · dx)
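As a quick sanity check (my sketch, not part of the original post), the claim that the preliminary steps can be skipped amounts to the two integrands agreeing pointwise; a minimal Python verification, assuming θ lies where cosθ ≥ 0 so the square root resolves to cosθ:

```python
import math

def integrand_full(t):
    # Preliminary form: sin^2(t) * sqrt(1 - sin^2(t)) * cos(t)
    x = math.sin(t)
    return x ** 2 * math.sqrt(1 - x ** 2) * math.cos(t)

def integrand_collapsed(t):
    # Collapsed form: sin^2(t) * cos^2(t), skipping the sqrt/identity steps
    return math.sin(t) ** 2 * math.cos(t) ** 2

# The two integrands agree wherever cos(t) >= 0
for t in [0.0, 0.3, 0.7, 1.2, 1.5]:
    assert abs(integrand_full(t) - integrand_collapsed(t)) < 1e-12
```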
7. Good advice. I know it is excellent advice, because I recently invented a framework for thought that enforces heavy scientific scrutiny. I also know how to isolate symbols and analyse them, because I invented some small degree of calculus in the past.
8. No wonder AI researchers are still in the regime of Euclidean space instead of Euclidean superspace. For example, here is yet another paper, concerning manifold learning and mean field theory in Riemannian geometry: https://arxiv.org/pdf/1606.05340v2.pdf The intriguing paper above is a resource I could learn from in order to continue the supermanifold hypothesis in deep learning.
9. Thanks for the helpful message and references. END-NOTE: Source (a) provided a spark for researching supersymmetry in a computational manner. Source (b) provides one of the richest resources for deep learning, while underlining manifolds that bear a non-trivial relation to source (a). While sources like (a) and (b) persist as rich sources of data usable for the task at hand, I detected that they alone would probably not suffice, given that considering supermanifolds beyond manifolds in deep learning is, as far as I could observe, uncharted waters. So I know that it is likely quite necessary to study and experiment beyond the sources I presented.
10. Based at least on the contents of Bengio's deep learning book, I am knowledgeable about a good portion of the symbols (some used in relation to superspace, as seen in the OP's paper).
11. The formulation works for many classes of integrals whose integrands contain some square-rooted expression. Unfortunately, I don't know whether universal ways of collapsing are possible.
12. Thanks for the supportive, considerate message. Yes, I at least know of the class of symmetry groups that are required. (Relating to the bosonic Riccati equation) However, do you know anything about Montroll kinks, and the degrees of freedom they afford in variations of signal energy transfer in biological brains? FOOTNOTE: When I said "learning the laws of physics" in the third response above in this thread, I was referring to the supersymmetric structure rather than myself, much like how DeepMind's manifold-based early concept learner infers laws of physics based on the input space of pixels. Models that learn things better than humans do are typical in deep learning.
13. The short answer: as I answered above (and as the papers outline), the goal is to use a supermanifold structure in a Bellman-like regime, much like how Google DeepMind uses manifolds in their recent paper. The longer answer, at least from ϕ(x;θ)ᵀw, or the machine learning paradigm: in the machine learning regime, something like the following applies: Jordan Bennett's answer to What is the Manifold Hypothesis in Deep Learning? FOOTNOTE: I don't know much about supermathematics at all, but based at least on the generalizability of manifolds and supermanifolds, together with evidence that supersymmetry applies in cognitive science, I could formulate algebra with respect to the deep learning variant of manifolds. This means that given the nature of supermanifolds and manifolds, there is no law preventing ϕ(x;θ)ᵀw, some structure in Euclidean superspace that may subsume p̂data (real-valued training samples), over some temporal difference hyperplane.
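For concreteness (my own illustration, not from the referenced papers), the ϕ(x;θ)ᵀw form is just a learned nonlinear feature map ϕ followed by a linear readout w; a minimal pure-Python sketch with hypothetical, made-up weights:

```python
import math

# Hypothetical parameters for illustration only
theta = [[0.2, -0.5, 0.1],
         [0.4, 0.3, -0.2]]   # "theta": weights of the feature map phi
w = [1.0, -1.0]              # linear readout weights

def phi(x, theta):
    # phi(x; theta): nonlinear learned features (tanh units here)
    return [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in theta]

def model(x):
    # phi(x; theta)^T w -- the linear-readout-over-learned-features form
    return sum(f * wi for f, wi in zip(phi(x, theta), w))

y = model([0.5, -1.0, 2.0])
# Bounded, since each tanh feature lies in (-1, 1) and |w| sums to 2
assert -2.0 <= y <= 2.0
```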
14. Machine learning models use some structure as their memory, in order to build representations of some input space. Supermathematics may be used to represent some input space, given evidence that supersymmetry applies in cognitive science. Learning the laws of physics may be a crucial part of the aforementioned input space, or task. Pay attention to the segments below. refers to: https://arxiv.org/abs/0705.1134
15. This is a clear explanation w.r.t. the "Trigonometric Rule Collapser Set", that may perhaps be helpful. (See source) The above is not to be confused with u-substitution. (See why) In the sequence x = sin t, where dx = cos t dt, and 1 − x² = 1 − sin²t = cos²t (from the problem ∫ √(1−x²) dx), the novel formulation dx | dt · dx occurs such that the default way of working trigonometric equations is compressed, reducing the number of steps normally employed. For example, in the video above, while evaluating ∫ √(1−x²) dx, in a preliminary step the instructor writes ∫ (√(1−sin²θ)/√(cos²θ)) · cosθ dθ. Using my trig collapser routine, this step (which may be a set of steps for other problems) is unnecessary: applying my trig collapser set's novel form dx | dθ · dx, we can go right ahead and evaluate ∫ cosθ · cosθ dθ. The trigonometric rule collapser set may be an avenue that sparks further study.
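As an independent check of the worked example (my sketch, not part of the original explanation), both sides of the substitution x = sin t integrate to the same value, π/4, over matching limits:

```python
import math

def riemann(f, a, b, n=100000):
    # Simple midpoint Riemann sum approximation of the integral of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Integral of sqrt(1 - x^2) dx over [0, 1]
lhs = riemann(lambda x: math.sqrt(1 - x * x), 0.0, 1.0)
# Integral of cos^2(t) dt over [0, pi/2], after x = sin(t), dx = cos(t) dt
rhs = riemann(lambda t: math.cos(t) ** 2, 0.0, math.pi / 2)

# Both equal the quarter-circle area pi/4
assert abs(lhs - math.pi / 4) < 1e-3
assert abs(rhs - math.pi / 4) < 1e-3
```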