# Supermathematics and Artificial General Intelligence

## Recommended Posts

This thread concerns attempts to construct artificial general intelligence, which, as I often underline, may well be mankind's last invention.

I am asking anybody who knows supermathematics and machine learning to weigh in on the discussion below.

PART A
Back in 2016, I read somewhere that babies know some physics intuitively.
It is also empirically observable that babies use that intuition to develop abstractions of knowledge, in a reinforcement-learning-like manner.

PART B
Now, I knew beforehand of two major types of deep learning model:

(1) models that use reinforcement learning (DeepMind's Atari Q-network);
(2) models that learn laws of physics (UETorch).

However:

(a) Object detectors like (2) use something called pooling to gain translation invariance over objects, so that the model learns regardless of where the object is positioned in the image.
(b) By contrast, (1) excludes pooling, because (1) requires translation variance, so that Q-learning can apply to the changing pixel positions of the objects.
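The trade-off in (a) and (b) can be sketched in a few lines of NumPy (an illustration only; `features_with_pooling` is a made-up helper, not code from either cited model):

```python
import numpy as np

def features_with_pooling(img, pool=2):
    """Max-pool non-overlapping pool x pool windows (illustrative, no learning)."""
    h, w = img.shape
    return img[:h - h % pool, :w - w % pool] \
        .reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))

# A 4x4 "image" with one bright object at two nearby positions.
a = np.zeros((4, 4)); a[0, 0] = 1.0   # object at top-left of a pooling window
b = np.zeros((4, 4)); b[1, 1] = 1.0   # object shifted by one pixel, same window

# With 2x2 max-pooling the two images yield identical feature maps
# (translation invariance, as in (a)):
print(np.array_equal(features_with_pooling(a), features_with_pooling(b)))  # True

# Without pooling the raw pixels differ, preserving the position
# information that a Q-learning model like (1) relies on:
print(np.array_equal(a, b))  # False
```

Shifts larger than the pooling window would of course change the pooled features too; the point is only that pooling discards small positional differences.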

PART C
As a result, I sought a model that could deliver both translation invariance and translation variance at the same time, and reasonably, part of the solution was models that disentangle factors of variation, i.e. manifold learning frameworks.

I didn't stop my scientific thinking at manifold learning though.

Given that cognitive science may be used to constrain machine learning models (similar to how firms like DeepMind often use cognitive science as a boundary on the deep learning models they produce), I sought to create a disentanglable model that was constrained by cognitive science as far as the algebra would permit.

PART D

As a result, I created something called the supermanifold hypothesis in deep learning (a component of another description called 'thought curvature').

This was due to evidence of supersymmetry in cognitive science; I compacted machine-learning-related algebra for disentangling, in the regime of supermanifolds. This could be seen as an extension of manifold learning in artificial intelligence.

Given that the supermanifold hypothesis compounds ϕ(x; θ, θ̄)ᵀw, here is an annotation of the hypothesis:

1. Deep learning entails ϕ(x; θ)ᵀw, which denotes the input space x and learnt representations θ.
2. Deep learning underlines that coordinates or latent spaces in the manifold framework are learnt features/representations, or directions that are sparse configurations of coordinates.
3. Supermathematics entails (x, θ, θ̄), which denotes some x-valued coordinate distribution, and by extension, directions that compact coordinates via θ, θ̄.
4. As such, the aforesaid (x, θ, θ̄) is subject to coordinate transformation.
5. Thereafter 1, 2, 3, 4 and supersymmetry in cognitive science, within the generalizable nature of Euclidean space, reasonably effectuate ϕ(x; θ, θ̄)ᵀw.
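For point 1, the ordinary (non-super) deep learning form ϕ(x; θ)ᵀw — a learnt feature map followed by a linear readout — can be sketched as follows (a minimal illustration; the single tanh layer and random parameters are my own assumptions, not part of the hypothesis):

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x, theta):
    """A learnt representation phi(x; theta): here one dense tanh layer."""
    W, b = theta
    return np.tanh(W @ x + b)

x = rng.normal(size=4)              # input space x
theta = (rng.normal(size=(3, 4)),   # learnt feature directions (weights)
         rng.normal(size=3))        # biases
w = rng.normal(size=3)              # linear readout weights

# The model's output is phi(x; theta)^T w, a single scalar.
y = phi(x, theta) @ w
print(np.isfinite(float(y)))  # True
```

In a trained model θ and w would be fitted by gradient descent; here they are random, since only the algebraic shape ϕ(x; θ)ᵀw is being illustrated.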

QUESTIONS:

Does anybody here have good knowledge of supermathematics or a related field, to give any input on the above?

If so, is it feasible to pursue the model I present in the supermanifold hypothesis paper?

And if so, apart from the ones discussed in the paper, what type of pˆdata (training samples) do you gather would warrant reasonable experiments in the regime of the model I presented?

Edited by ProgrammingGodJordan

##### Share on other sites

Moderator Note

Moved to mathematics

##### Share on other sites

I'm sorry but I really cannot see how you plan on connecting the dots/references etc.

Yes babies learn, yes everyone uses physics in some form or another.

What does that have to do with Lie algebra?

##### Share on other sites
10 minutes ago, Mordred said:

I'm sorry but I really cannot see how you plan on connecting the dots/references etc.

Yes babies learn, yes everyone uses physics in some form or another.

What does that have to do with Lie algebra?

Machine learning models use some structure as their memory, in order to do representations based on some input space.

Supermathematics may be used to represent some input space, given evidence that supersymmetry applies in cognitive science.

Learning the laws of physics may be a crucial part of the aforementioned input space, or task.

Pay attention to the segments below. [12] refers to: https://arxiv.org/abs/0705.1134

##### Share on other sites

You definitely need some serious work to actually demonstrate what you're driving at.

What are you specifically wanting to program as a deep learning format?

Your paper doesn't particularly clarify how you plan on deep learning Euclidean symmetry groups.

What is your focus? Quite frankly, you are posting numerous related topics without any focus on any particular goal.

Edited by Mordred

##### Share on other sites
42 minutes ago, Mordred said:

You definitely need some serious work to actually demonstrate what you're driving at.

What are you specifically wanting to program as a deep learning format?

Your paper doesn't particularly clarify how you plan on deep learning Euclidean symmetry groups.

What is your focus? Quite frankly, you are posting numerous related topics without any focus on any particular goal.

As I answered above (and as the papers outline), the goal is to use a supermanifold structure in a Bellman-like regime, much like how Google DeepMind uses manifolds in their recent paper.

At least, from ϕ(x;θ)Tw, or the machine learning paradigm:

In the machine learning regime, something like the following applies:

FOOTNOTE:

I don't know much about supermathematics at all, but based at least on the generalizability of manifolds and supermanifolds, together with evidence that supersymmetry applies in cognitive science, I could formulate algebra with respect to the deep learning variant of manifolds.

This means that, given the nature of supermanifolds and manifolds, there is no law preventing ϕ(x; θ, θ̄)ᵀw, some structure in Euclidean superspace that may subsume pˆdata (real-valued training samples), over some temporal difference hyperplane.
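As a point of reference for the "temporal difference" regime mentioned above, here is ordinary linear TD(0), where the value function is V(s) = ϕ(s)ᵀw over plain Euclidean features (a standard sketch only, not the supermanifold construction; the two-state chain is a toy example of my own):

```python
import numpy as np

# Two-state chain: s0 -> s1 -> terminal, with reward 1 on the final step.
phi = np.array([[1.0, 0.0],   # phi(s0)
                [0.0, 1.0]])  # phi(s1)
w = np.zeros(2)               # value estimate V(s) = phi(s)^T w
alpha, gamma = 0.1, 1.0       # step size and discount

for _ in range(200):
    # Transition s0 -> s1, reward 0: TD error = r + gamma*V(s1) - V(s0).
    td = 0.0 + gamma * (phi[1] @ w) - phi[0] @ w
    w += alpha * td * phi[0]
    # Transition s1 -> terminal, reward 1 (terminal value is 0).
    td = 1.0 + gamma * 0.0 - phi[1] @ w
    w += alpha * td * phi[1]

# Both state values converge to 1, the true expected return.
print(np.round(w, 2))  # [1. 1.]
```

The same update shape would apply to any feature map ϕ; whether ϕ can usefully be super-valued is exactly the open question of this thread.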

##### Share on other sites

Ok, well, to start with you're going to need to understand what those symmetry groups represent before you can even think about using them in any program, let alone deep learning algorithms/programs.

I would start with vector symmetry of linear then angular momentum systems. This is extremely important for understanding any symmetry, let alone a supersymmetric group.

How much differential geometry have you got?

Assuming decent math skills, I recommend Clifford algebra. It will stage you for the unitary and orthogonal groups.

Here (this is preliminary for Lie algebra)

While I can help on group theory, I can't directly on "deep learning"; a brief reading up on it indicates it can use the group functions in its programming, so the link above is still useful.

As far as programming is concerned, my experience is in ladder/relay logic as per plant automation systems and robotics, which isn't going to help much here, as the programs I've seen appear to be object-oriented C++ programs (though Fortran may also be useful; if I recall, one can include Fortran commands via Fortran.H).

The good news is you don't need the physics to understand Lie algebra (at least not for the group structures, rules, axioms, etc.). That's part of the beauty behind symmetry groups... Lol, truth be told, understanding the groups is a huge step toward understanding any physics subject.

Edited by Mordred

##### Share on other sites
49 minutes ago, Mordred said:

Ok, well, to start with you're going to need to understand what those symmetry groups represent before you can even think about using them in any program, let alone deep learning algorithms/programs.

I would start with vector symmetry of linear then angular momentum systems. This is extremely important for understanding any symmetry, let alone a supersymmetric group.

How much differential geometry have you got?

Assuming decent math skills, I recommend Clifford algebra. It will stage you for the unitary and orthogonal groups.

Here (this is preliminary for Lie algebra)

While I can help on group theory, I can't directly on "deep learning"; a brief reading up on it indicates it can use the group functions in its programming, so the link above is still useful.

As far as programming is concerned, my experience is in ladder/relay logic as per plant automation systems and robotics, which isn't going to help much here, as the programs I've seen appear to be object-oriented C++ programs.

The good news is you don't need the physics to understand Lie algebra (at least not for the group structures, rules, axioms, etc.). That's part of the beauty behind symmetry groups... Lol, truth be told, understanding the groups is a huge step toward understanding any physics subject.

Thanks for the supportive, considerate message.

Yes, I at least know of the class of symmetry groups that are required (relating to the bosonic Riccati).

However, do you know anything about Montroll kinks, and the degrees of freedom they afford in variations of signal energy transfer in biological brains?

FOOTNOTE:

When I said "learning the laws of physics" in the third response above in this thread, I was referring in particular to the supersymmetric structure, rather than to myself, much like how DeepMind's manifold-based early concept learner infers laws of physics based on the input space of pixels.

Models that learn some tasks better than humans do are typical in deep learning.

Edited by ProgrammingGodJordan

##### Share on other sites
22 hours ago, Mordred said:

Ok, well, to start with you're going to need to understand what those symmetry groups represent before you can even think about using them in any program, let alone deep learning algorithms/programs.

I would start with vector symmetry of linear then angular momentum systems. This is extremely important for understanding any symmetry, let alone a supersymmetric group.

How much differential geometry have you got?

Assuming decent math skills, I recommend Clifford algebra. It will stage you for the unitary and orthogonal groups.

Here (this is preliminary for Lie algebra)

While I can help on group theory, I can't directly on "deep learning"; a brief reading up on it indicates it can use the group functions in its programming, so the link above is still useful.

At this stage we're not worried about the actual groups, but you do need to recognize the mathematical structures and symbology.

Let's demonstrate. First, deep learning is a subset of machine learning.

We use the function $h(\mathbf{x},\theta)$ where $\mathbf{x}$ is a vector whose components are greyscale intensities at each pixel.

Let's predict the number of apples grown from the number of days of rain. Let $x_1$ be the number of apples and $x_2$ the number of days of rain.

So at each datapoint $\mathbf{x}=[x_1,x_2]^T$, our goal is to have a learning model

$h(\mathbf{x},\theta)$ where the parameter vector $\theta=[\theta_0,\theta_1,\theta_2]^T$

such that

$h(\mathbf{x},\theta)=+1$ if $\mathbf{x}^T\begin{bmatrix}\theta_1\\\theta_2\end{bmatrix}+\theta_0< 0$

$h(\mathbf{x},\theta)=-1$ if $\mathbf{x}^T\begin{bmatrix}\theta_1\\\theta_2\end{bmatrix}+\theta_0\ge 0$
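The decision rule above is a linear (perceptron-style) classifier; transcribed directly, keeping the post's sign convention (the toy numbers are my own):

```python
import numpy as np

def h(x, theta):
    """Sign classifier from the post: +1 if x^T.[theta1,theta2] + theta0 < 0, else -1."""
    theta0, theta12 = theta[0], np.asarray(theta[1:])
    return +1 if x @ theta12 + theta0 < 0 else -1

x = np.array([3.0, 2.0])    # e.g. [apples, days of rain]
theta = [-10.0, 1.0, 2.0]   # [theta_0, theta_1, theta_2]

print(h(x, theta))  # prints 1, since 3*1 + 2*2 - 10 = -3 < 0
```

Note that the post assigns +1 to the negative half-space; the more common convention is the reverse, but the labels are arbitrary as long as they are used consistently.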

Edited by Mordred

##### Share on other sites
48 minutes ago, Mordred said:

At this stage we're not worried about the actual groups, but you do need to recognize the mathematical structures and symbology.

Let's demonstrate. First, deep learning is a subset of machine learning.

We use the function h(x,θ) where x is a vector whose components are greyscale intensities at each pixel.

Let's predict the number of apples grown from the number of days of rain. Let x_1 be the number of apples and x_2 the number of days of rain.

So at each datapoint x=[x1,x2]^T, our goal is to have a learning model

h(x,θ) where the parameter vector θ=[θ0,θ1,θ2]^T

such that

h(x,θ)=+1 if x^T·[θ1, θ2]^T + θ0 < 0

h(x,θ)=−1 if x^T·[θ1, θ2]^T + θ0 ≥ 0

Based at least on the contents of Bengio's deep learning book, I am at least knowledgeable about a good portion of the symbols (some used in relation to superspace, as seen in the OP's paper).

Edited by ProgrammingGodJordan

##### Share on other sites

Could not get the bmatrix working above, but take theta 1 to be above theta 2; for some reason I could not use theta under bmatrix.

Anyways, you need the above function for deep learning. That will be up to you, not I, lol.

As far as symmetry groups are concerned, the references you mentioned simply will not cut it unless you also understand Lie algebra.

Not for what you're looking for, i.e. symmetric vs supersymmetric under physics.

(These are different particle groups, for example SO(10) MSSM vs SO(10) MSM; you need the subgroups etc., for which the Clifford algebra is the preliminary.)

Well here you can see for yourself

1 hour ago, ProgrammingGodJordan said:

Thanks for the supportive, considerate message.

Yes, I at least know of the class of symmetry groups that are required (relating to the bosonic Riccati).

However, do you know anything about Montroll kinks, and the degrees of freedom they afford in variations of signal energy transfer in biological brains?

Directly, no, but the mathematics behind that paper uses QFT treatments, which I do understand. You may want to study Hamilton's principle, which concerns the "action".

Edited by Mordred

##### Share on other sites
4 hours ago, Mordred said:

Could not get the bmatrix working above, but take theta 1 to be above theta 2; for some reason I could not use theta under bmatrix.

Anyways, you need the above function for deep learning. That will be up to you, not I, lol.

As far as symmetry groups are concerned, the references you mentioned simply will not cut it unless you also understand Lie algebra.

Not for what you're looking for, i.e. symmetric vs supersymmetric under physics.

(These are different particle groups, for example SO(10) MSSM vs SO(10) MSM; you need the subgroups etc., for which the Clifford algebra is the preliminary.)

Well, here you can see for yourself

Directly, no, but the mathematics behind that paper uses QFT treatments, which I do understand. You may want to study Hamilton's principle, which concerns the "action".

Thanks for the helpful message and references.

END-NOTE:

Source a provided a spark for researching supersymmetry in a computational manner.

Source b provides one of the richest resources for deep learning, while underlining manifolds, which bear a non-trivial relation to source a.

While sources like a and b persist as rich sources of data usable for the task at hand, I detected that they alone would probably not suffice, given that considering supermanifolds beyond manifolds in deep learning is, as far as I could observe, uncharted waters.

So, I know that it is likely quite necessary to study and experiment beyond the sources I presented.

Edited by ProgrammingGodJordan

##### Share on other sites

Supersymmetric groups are incredibly complex for any computer regardless of how it is programmed.

So yes you do have your work cut out for you

##### Share on other sites
34 minutes ago, Mordred said:

Supersymmetric groups are incredibly complex for any computer regardless of how it is programmed.

So yes you do have your work cut out for you

No wonder AI researchers are still in the regime of Euclidean space instead of Euclidean superspace.

For example, here is yet another paper, concerning manifold learning and mean-field theory in Riemannian geometry: https://arxiv.org/pdf/1606.05340v2.pdf

The intriguing paper above is a manifold-learning resource I could learn from, in order to continue the supermanifold hypothesis in deep learning.

Edited by ProgrammingGodJordan

##### Share on other sites

Good paper; a proper understanding of every line and formula could take a month, unless you're already familiar with every formula and terminology.

This is the trick: every time you see a term or formula you don't recognize,

stop and research that line, formula or terminology.

For example, when you read something such as the Euler-Lagrange equations or the Hamiltonian, stop and study those particular topics.

You would be amazed how much detail is in every single sentence.

Edited by Mordred

##### Share on other sites

Fixed the above bmatrix issue I was having. LOL, the \dot was interfering, so I switched to \bullet.

13 hours ago, ProgrammingGodJordan said:

For example, here is yet another paper, concerning manifold learning - mean field theory, in the Riemannian geometry: https://arxiv.org/pdf/1606.05340v2.pdf

Equation 4 of the above paper, where it refers to a 2x2 matrix of inner products, is describing a unitary group U(2).

Just an assist: inner products are linear.
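A 2x2 matrix of inner products is a Gram matrix; for two vectors it can be checked directly that it is symmetric and positive semi-definite (toy vectors of my own choosing; whether equation 4's matrix realizes U(2) is left to the paper itself):

```python
import numpy as np

h1 = np.array([1.0, 2.0, 3.0])
h2 = np.array([0.0, 1.0, -1.0])

# The 2x2 matrix of inner products (a Gram matrix). Inner products are
# linear in each argument, and the resulting matrix is symmetric and
# positive semi-definite by construction.
G = np.array([[h1 @ h1, h1 @ h2],
              [h2 @ h1, h2 @ h2]])

print(np.array_equal(G, G.T))                    # True: symmetric
print(bool(np.all(np.linalg.eigvalsh(G) >= 0)))  # True: no negative eigenvalues
```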

Edited by Mordred

##### Share on other sites
12 hours ago, Mordred said:

Good paper; a proper understanding of every line and formula could take a month, unless you're already familiar with every formula and terminology.

This is the trick: every time you see a term or formula you don't recognize,

stop and research that line, formula or terminology.

For example, when you read something such as the Euler-Lagrange equations or the Hamiltonian, stop and study those particular topics.

You would be amazed how much detail is in every single sentence.

I know it is excellent advice, because I recently invented a framework for thought that enforces heavy scientific scrutiny.

I know how to isolate symbols and analyse them too, because I had invented some small degree of calculus in the past.

##### Share on other sites

"Supermathematics and Artificial Intelligence"???? hmmm

I couldn't find any paper at the academia links, but looking at the OP's profile I see a blog URL with papers discussing the same things as above.

The title of the paper on the blog reads like something from the crackpot work "time-cube".

Quickly scrolling down in the first paper, you then see an "informal proof" section.

It is actually straightforward, and the OP's "Supersymmetric Artificial Neural Network" formulation may actually hold some water.

A screenshot of the portion I am talking about:

Some advice to OP if he sees this:

(1) Remove the colors from your papers. (Colors are an indication of crackpot work.)

So the paper looks like it could hold some water.

My 10 cents:

If the OP can actually implement a toy example that learns "supersymmetric weights" on a simple dataset like MNIST then, from my understanding of machine learning, these types of "supersymmetric neural nets" could become part of the literature.