Modular Connectivity Emerges from Noise-Motivated Regularization
This paper explores how modular architectures in neural networks can emerge from Poisson noise-motivated regularization, leading to improved robustness and compositional generalization.
Why it matters
This work provides a biologically motivated way to encourage modular architectures to emerge in neural networks, yielding improved robustness and generalization without hand-designed modular structure.
Key Points
- Modular architectures in the brain enable complex task factorization and compositional generalization
- Artificial neural networks (ANNs) often lack modular structure, making such solutions difficult to find
- Noise-driven modularization can be recapitulated by a deterministic regularizer that multiplicatively combines weights and activations
- Pre-modularized ANNs exhibit superior noise robustness and generalization beyond the training data
Details
The paper draws inspiration from fault-tolerant computation and the Poisson-like firing of real neurons to show that activity-dependent neural noise, combined with nonlinear neural responses, can drive the emergence of modular solutions that reflect an accurate understanding of modular tasks. The authors find that this noise-driven modularization can be recapitulated by a deterministic regularizer that multiplicatively combines weights and activations, revealing rich phenomenology not captured by linear networks or standard regularization methods. While the emergence of modular structure requires a number of training samples that grows exponentially with the number of modular task dimensions, the authors demonstrate that pre-modularized ANNs exhibit superior noise robustness and generalize and extrapolate well beyond the training data, compared to ANNs without such inductive biases.
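To make the multiplicative weight-activation regularizer concrete, below is a minimal sketch assuming a simple feedforward layer: Poisson-like noise has variance proportional to a unit's activity, and that variance is scaled by the squared outgoing weights as it propagates downstream, so penalizing the product of the two is one natural deterministic surrogate. The function name `multiplicative_reg`, the exact penalty form, and `lambda_reg` are illustrative assumptions, not the paper's precise formulation.

```python
import torch

def multiplicative_reg(activations: torch.Tensor, w_out: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch of a weight-activation regularizer.

    Activity-dependent (Poisson-like) noise injected at a unit has variance
    proportional to that unit's activity, and it reaches the next layer
    scaled by the squared outgoing weights. Penalizing this product pushes
    each unit toward low activity or weak outgoing connections, an inductive
    bias that can favor modular (block-sparse) connectivity.

    activations: (batch, hidden) post-nonlinearity activity of a layer
    w_out:       (next_dim, hidden) outgoing weight matrix of that layer
    """
    mean_activity = activations.abs().mean(dim=0)  # (hidden,) average activity per unit
    outgoing_sq = (w_out ** 2).sum(dim=0)          # (hidden,) squared outgoing weight mass
    return (mean_activity * outgoing_sq).sum()

# Usage inside a training step (lambda_reg is a tuning assumption):
# loss = task_loss + lambda_reg * multiplicative_reg(h, model.fc2.weight)
```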