arXiv · Neural Computation | Research & Papers · Products & Services

Dynamical Stability for Dense Patterns in Discrete Attractor Neural Networks

This research paper presents a new theory for analyzing the dynamical stability of discrete fixed points in a broad class of neural networks with graded neural activities and noise.

💡

Why it matters

This work advances the theoretical understanding of dynamical stability in discrete attractor neural networks, which are important models of biological memory.

Key Points

  • Derives a theory for the local stability of discrete fixed points in neural networks
  • Analyzes the Jacobian spectrum to determine a critical load distinct from the classical capacity (a numerical sketch follows this list)
  • Shows the benefits of threshold-linear activation functions and sparse-like neural activity patterns
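
To make the second point concrete, here is a minimal numerical sketch of the stability check, assuming rate dynamics of the form dx/dt = -x + W·φ(x); the exact dynamics, network size, coupling statistics, threshold, and the stand-in fixed point below are all illustrative assumptions, not values from the paper. A fixed point x* is locally stable when every eigenvalue of the Jacobian J = -I + W·diag(φ'(x*)) has a negative real part.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200                                        # illustrative network size
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # illustrative coupling matrix

def phi(x, theta=0.5):
    """Threshold-linear (ReLU-like) activation: max(0, x - theta)."""
    return np.maximum(0.0, x - theta)

def dphi(x, theta=0.5):
    """Derivative of the threshold-linear activation: 0 below threshold, 1 above."""
    return (x > theta).astype(float)

# Linearize dx/dt = -x + W @ phi(x) around a point x*:
#   J = -I + W @ diag(phi'(x*))
x_star = rng.normal(size=N)                    # stand-in for a true fixed point
J = -np.eye(N) + W * dphi(x_star)[None, :]     # column scaling == W @ diag(dphi)

eigvals = np.linalg.eigvals(J)
print(f"max Re(lambda) = {eigvals.real.max():+.3f}")
print("locally stable:", bool(eigvals.real.max() < 0))
```

In a real experiment x* would be obtained by relaxing the network dynamics to convergence; the point here is only the form of the Jacobian and the spectral test.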

Details

The paper focuses on neural networks that store multiple discrete attractors, which are canonical models of biological memory. Previous work could only guarantee dynamical stability under highly restrictive conditions. This new research directly analyzes the bulk and outliers of the Jacobian spectrum to show that all fixed points are stable below a critical load. This critical load depends on the statistics of neural activities in the fixed points and the single-neuron activation function. The analysis highlights the computational advantages of using threshold-linear activation functions and sparse-like neural activity patterns.
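
The bulk-versus-outlier decomposition can also be illustrated numerically. The sketch below builds a Jacobian whose couplings combine a random crosstalk part (which produces the bulk of the spectrum) with a rank-one structured part (which produces an outlier); the decomposition, the strengths g and kappa, and the simplification φ'(x*) = 1 for every neuron are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 400

g = 0.5                    # strength of random crosstalk -> bulk radius
kappa = 2.0                # strength of the structured part -> outlier position
u = rng.normal(size=N)
u /= np.linalg.norm(u)     # unit vector spanning the structured direction

# Couplings: random part (spectral bulk) + rank-one part (spectral outlier).
W = g * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N)) + kappa * np.outer(u, u)

# Jacobian of dx/dt = -x + W @ phi(x), assuming phi'(x*) = 1 everywhere.
J = -np.eye(N) + W
lam = np.linalg.eigvals(J)

# The bulk fills a disk of radius ~g around -1; the rank-one term adds an
# outlier near -1 + kappa, which here crosses zero and destabilizes x*.
print(f"most unstable eigenvalue: {lam.real.max():+.3f}")
print(f"expected bulk edge:       {-1.0 + g:+.3f}")
print(f"expected outlier:         {-1.0 + kappa:+.3f}")
```

In this toy setup the bulk stays in the stable left half-plane while the structured outlier crosses zero; loosely speaking, the paper's critical load is the load at which either part of the spectrum first reaches zero for some stored fixed point.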
