Just... what? And when?

I read an article some time ago about a Chinese AI based on spiking neural networks that requires ~90% fewer computational resources, but it's an early-stage model and I don't remember where the article was published. I also cannot comment on the merits of spiking neural networks themselves, as my knowledge of comp sci is modest in general.

4 minutes ago, Otto Kretschmer said:

Just... what? And when?

I read an article some time ago about a Chinese AI based on spiking neural networks that requires ~90% fewer computational resources, but it's an early-stage model and I don't remember where the article was published. I also cannot comment on the merits of spiking neural networks themselves, as my knowledge of comp sci is modest in general.

Isn't that Deep Seek? https://en.wikipedia.org/wiki/DeepSeek

I thought it was rather funny the way this cheap'n'cheerful LLM put the cat among the pigeons in Silicon Valley. One in the eye for Sam Alt-Right and the other AI hypemeisters😁.

But I probably know even less about this technology than you do. For instance I don't know what "spiking neural networks" means.

  • Author
10 minutes ago, exchemist said:

Isn't that Deep Seek? https://en.wikipedia.org/wiki/DeepSeek

I thought it was rather funny the way this cheap'n'cheerful LLM put the cat among the pigeons in Silicon Valley. One in the eye for Sam Alt-Right and the other AI hypemeisters😁.

But I probably know even less about this technology than you do. For instance I don't know what "spiking neural networks" means.

No, DeepSeek uses a regular transformer architecture with a few innovations, mixture-of-experts (MoE) and multi-head latent attention (MLA), but it's a normal LLM.
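
Roughly, MoE means the feed-forward part of each layer is split into several "experts" and a small router sends each token to only a couple of them, so only a fraction of the weights are active per token. A toy sketch of that routing idea (made-up sizes and random weights, not DeepSeek's actual design or code) could look like this:

```python
import numpy as np

# Toy top-k mixture-of-experts routing. Illustrative only; sizes and
# weights are invented and do not reflect DeepSeek's real architecture.

d_model, n_experts, top_k = 8, 4, 2

rng = np.random.default_rng(0)
# Each "expert" here is just a small weight matrix standing in for a feed-forward block.
experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.1  # gating weights

def moe_layer(x):
    """Route a token vector to its top-k experts and mix their outputs."""
    logits = x @ router                      # router score for each expert
    top = np.argsort(logits)[-top_k:]        # pick the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over the chosen experts only
    # Only the selected experts run, which is where the compute savings come from.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
print(moe_layer(token).shape)  # (8,) - same shape as the input token
```

MLA, as I understand it, is a separate trick that compresses the attention key/value cache into a smaller latent vector, so I won't try to sketch that one.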

I mostly use Gemini 2.5 Pro in Google AI Studio and Alibaba's Qwen series. Latest Qwen3 VL is at 2.5 Pro level.

EDIT: The model was called SpikingBrain.

Edited by Otto Kretschmer

What do you mean by replace? The evolution of AI, or the "next big thing" that tech will try to sell us as a must-have but which never lives up to its billing, like blockchain, NFTs and the internet-of-things?

What will replace LLMs?

More and more sophisticated LLMs, and then suddenly an impossible-to-predict emergent property changes the title...

  • Author
10 minutes ago, dimreepr said:

What will replace LLMs?

More and more sophisticated LLMs, and then suddenly an impossible-to-predict emergent property changes the title...

  1. What makes you think that AI produces exclusively slop?

  2. If AI produces exclusively slop, what makes you think it'll remain like that forever? It's a lot like expecting computers to remain frozen at ENIAC level forever, quite irrational, isn't it? The entire anti-AI narrative is repeated ad nauseam, but hardly anybody provides empirical arguments as to why progress is just supposed to stop suddenly.

Edited by Otto Kretschmer

Just now, Otto Kretschmer said:
  1. What makes you think that AI produces exclusively slop?

  2. If AI produces exclusively slop, what makes you think it'll remain so forever? It's a lot like expecting computers to remain frozen at ENIAC level forever, quite irrational, isn't it?

I don't think that; what makes you think I do?

I think the most useful product of our current iteration of LLMs is in trying to decipher the language of our fellow animals; what emergent quality comes of that is anyone's guess...

  • Author
3 minutes ago, dimreepr said:

I don't think that; what makes you think I do?

I think the most useful product of our current iteration of LLMs is in trying to decipher the language of our fellow animals; what emergent quality comes of that is anyone's guess...

Lol, I quoted the wrong person. I was supposed to quote @swansont's post, not yours.

@swansont

Chatbots are far from the most important thing in AI. There is this stuff for example: https://deepmind.google/science/alphafold/

Edited by Otto Kretschmer

2 hours ago, Otto Kretschmer said:
  1. What makes you think that AI produces exclusively slop?

  2. If AI produces exclusively slop, what makes you think it'll remain like that forever? It's a lot like expecting computers to remain frozen at ENIAC level forever, quite irrational, isn't it? The entire anti-AI narrative is repeated ad nauseam, but hardly anybody provides empirical arguments as to why progress is just supposed to stop suddenly.

Nobody says LLMs produce exclusively slop, nor do they assert there will be no improvement in their quality. The scepticism from people like me arises firstly from the hype around them and secondly from the demonstrably baleful effect they currently have on unsuspecting people* and on social media.

The Financial Times reported this week the results of a survey they did of business take-up of AI. Turns out that although businesses trumpet their take-up, almost none of them can point to any resulting improvements in their business. So it looks as if they are doing it due to FOMO (fear of missing out) rather than because of any real, substantial application. They are behaving like sheep and following the trend, in other words. The Sam Alt-Rights of this world love this and feed the hype, as it makes their stock price go up, but it’s riding for a nasty fall.

No doubt these things will get better, but it looks to me as if we have to go through another dotcom bubble experience of boom and bust, before businesses and AI designers get more realistic about their true scope of application.

And on the consumer level, we badly need guardrails to stop people becoming addicts of LLMs and to prevent their misuse to spread disinformation.

*Just look at poor @Prajna. He thinks he is in a relationship with a chatbot called Jyoti 🤪.

Edited by exchemist

2 hours ago, Otto Kretschmer said:

Lol, I quoted the wrong person. I was supposed to quote @swansont's post, not yours.

@swansont

Chatbots are far from the most important thing in AI. There is this stuff for example: https://deepmind.google/science/alphafold/

I didn’t say anything about chatbots, and you didn’t provide the clarification I asked for.

AlphaFold is not an LLM, so I'm not sure why you brought it up.

Am familiar with SNNs. In short, they more closely emulate human neuronal communication and continue the trend towards more biomimicry in AI architectures. Natural selection in difficult environments has made biological NNs remarkably efficient and powerful - no surprise that AI research is heading in the biological direction. They're incorporating other bio-inspired neuron dynamics like SFA, spike-frequency adaptation (if you want to wade deeper into this stuff). Given the horrific power usage of conventional ANNs, I would think SNNs are going to be inevitable, with their considerable energy savings.

I recall when all the talk started up about how computers would need to become more analog in their neural dynamics to get all those bio-inspired goodies. The SNN is taking the field more in that direction. For instance, the "leaky integrate-and-fire" model, which is a common SNN neuron model, describes a continuous, time-dependent membrane potential. This potential builds up from incoming spikes and decays over time, i.e. it leaks... that is analog behavior.
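
To make that concrete, here's a minimal leaky integrate-and-fire sketch in Python (arbitrary parameters chosen just to illustrate the dynamics, not taken from any particular SNN framework or from SpikingBrain):

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron. Parameter values are
# arbitrary and purely illustrative.

dt = 1.0        # time step (ms)
tau_m = 20.0    # membrane time constant (ms) - sets how fast the potential leaks
v_rest = 0.0    # resting potential
v_thresh = 1.0  # spike threshold
v_reset = 0.0   # potential after a spike

def simulate_lif(input_current):
    """Integrate an input current over time and emit a spike train."""
    v = v_rest
    spikes = []
    for i_t in input_current:
        # The potential decays toward rest (the leak) while integrating the
        # incoming current - continuous, analog-like dynamics.
        v += (-(v - v_rest) + i_t) * (dt / tau_m)
        if v >= v_thresh:       # threshold crossed: fire a spike...
            spikes.append(1)
            v = v_reset         # ...and reset the membrane potential
        else:
            spikes.append(0)
    return np.array(spikes)

# Example: a constant drive produces a regular spike train.
current = np.full(200, 1.5)
print(simulate_lif(current).sum(), "spikes in 200 ms")
```

The point being that the membrane potential is a continuous quantity evolving in time, while the output is a sparse train of discrete spikes, which is where the energy savings of SNN hardware are supposed to come from.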

19 hours ago, TheVat said:

Am familiar with SNNs. In short, they more closely emulate human neuronal communication and continue the trend towards more biomimicry in AI architectures. Natural selection in difficult environments has made biological NNs remarkably efficient and powerful - no surprise that AI research is heading in the biological direction. They're incorporating other bio-inspired neuron dynamics like SFA, spike-frequency adaptation (if you want to wade deeper into this stuff). Given the horrific power usage of conventional ANNs, I would think SNNs are going to be inevitable, with their considerable energy savings.

I recall when all the talk started up about how computers would need to become more analog in their neural dynamics to get all those bio-inspired goodies. The SNN is taking the field more in that direction. For instance, the "leaky integrate-and-fire" model, which is a common SNN neuron model, describes a continuous, time-dependent membrane potential. This potential builds up from incoming spikes and decays over time, i.e. it leaks... that is analog behavior.

Indeed, digital will always be limited to an absolute at some stage.
