
thoughtfuhk

Senior Members
  • Posts: 108
  • Joined

  • Last visited

Everything posted by thoughtfuhk

  1. "Supermathematics and Artificial Intelligence"???? hmmm I couldn't find any paper at academia links, but looking at the OP's profile I see a blog url with papers discussing the same things above. The title of the paper on the blog reads like something sounding like the crack pot work "time-cube". Quickly scrolling down on the first paper, you then see an "informal proof section". It is actually straightforward, and the OP's "Supersymmetric Artificial Neural Network" formulation may actually hold some water. A screenshot of the portion I am talking about: Some advice to OP if he sees this: (1) Remove the colors from your papers. (Colors are indication of crack pot work) (2) Remove the links from your papers. (Far too many links) So the paper looks like it could hold some water. My 10 cents: If the OP can actually implement a toy example that learns "supersymmetric weights" on a simple dataset like mnist, from my understanding of machine learning, these types of "supersymmetric neural nets" could become a part of the literature.
  2. But by limiting your awareness of other attempted solutions, don't you run the risk of repeating some of those previous mistakes? Wouldn't you avoid that risk simply by being aware of those past mistakes?
  3. Don't things like D-Wave's quantum computers solve these hard problems?
  4. I couldn't find the paper in your OP, but I did find something on ResearchGate with the same title: https://www.researchgate.net/publication/318722013_Trigonometric_Rule_Collapser_Set_qua_Calculus It just looks like you're plugging in values from basic u-substitution (see the worked example after this list)! Edit 1: I re-read it once more. It's not u-substitution after all; it reminds me of it, but it's actually far from u-substitution. My bad. Edit 2: I am not much of a mathematician, but I've passed Calculus I and II. From what I can see, it isn't in any of the three 2016 calculus texts I own. I will look online and report back if I find it.
  5. Hey, I'm trying to teach myself physics and machine learning. This concerns the paper mentioned in the title. My 2 cents on the paper: essentially the model is predicting the future, but in latent space instead of pixel space. Consciousness is represented as the "hidden state" of a recurrent neural network. This "hidden state" is tied to generative modelling, because generative models have a large capacity to capture low-level embeddings. But it really just seems like an attention mechanism (a rough sketch of my reading is after this list). Can anybody here who knows machine learning verify?
  6. I'm now teaching myself physics too. I find that this helps:
  7. Technically, the author of Hamlet was a modern ape, so I guess the answer is however long it took him to write it. Humans are modern apes...
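
Following up on post 1: here is a minimal MNIST baseline sketch in PyTorch. To be clear, this is not the OP's "Supersymmetric Artificial Neural Network" (I have no implementation of that); it is only the ordinary scaffold such a toy example would slot into, with the hypothetical swap point marked in a comment.

```python
# Minimal MNIST baseline, NOT the OP's supersymmetric formulation.
# The layer a "supersymmetric" variant would presumably replace is marked.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class ToyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        # Hypothetical swap point: a "supersymmetric weights" layer,
        # whatever structure the OP's paper proposes, would go here.
        self.hidden = nn.Linear(28 * 28, 128)
        self.out = nn.Linear(128, 10)

    def forward(self, x):
        h = torch.relu(self.hidden(self.flatten(x)))
        return self.out(h)

def train_one_epoch():
    dataset = datasets.MNIST("data", train=True, download=True,
                             transform=transforms.ToTensor())
    loader = DataLoader(dataset, batch_size=64, shuffle=True)
    model = ToyNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    return model

if __name__ == "__main__":
    train_one_epoch()
```

If a drop-in layer learning "supersymmetric weights" beat the plain nn.Linear here on held-out accuracy, that would be the kind of evidence I mean.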
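Following up on post 4: for anyone who hasn't seen it, a standard u-substitution (Calculus I material) looks like the following. The integral is just an illustration of the technique, not one taken from the OP's paper.

```latex
% To evaluate \int \sin(x)\cos(x)\,dx, set u = \sin(x), so du = \cos(x)\,dx:
\int \sin(x)\cos(x)\,dx
  = \int u\,du
  = \frac{u^2}{2} + C
  = \frac{\sin^2(x)}{2} + C
```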
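Following up on post 5: here is a rough sketch of the architecture as I read the paper, assuming some pretrained generative encoder (e.g. a VAE) already produces latent codes. All names and dimensions are my own placeholders, not the paper's.

```python
# A recurrent net whose hidden state summarizes the past and is trained to
# predict the NEXT latent code rather than the next frame of pixels.
import torch
import torch.nn as nn

class LatentPredictor(nn.Module):
    def __init__(self, latent_dim=32, hidden_dim=256):
        super().__init__()
        # The "hidden state" the paper identifies with consciousness-like
        # summarization lives inside this recurrent cell.
        self.rnn = nn.GRU(latent_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, latent_dim)

    def forward(self, z_seq):
        # z_seq: (batch, time, latent_dim) codes from the assumed encoder.
        h_seq, _ = self.rnn(z_seq)
        return self.head(h_seq)  # predicted latent at each next step

# Training signal: predict z[t+1] from the hidden state at time t.
model = LatentPredictor()
z = torch.randn(8, 20, 32)               # stand-in latent codes
pred = model(z[:, :-1])                  # predictions for steps 1..19
loss = nn.functional.mse_loss(pred, z[:, 1:])
loss.backward()
```

Whether this counts as "just an attention mechanism" is exactly the part I'd like someone who knows the literature to weigh in on.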
