Source: Neurocomputing
Date of publication: 02/14/2025

Fast gradient-free activation maximization for neurons in spiking neural networks

Abstract

Neural networks (NNs), both living and artificial, work due to being complex systems of neurons, each having its own specialization. Revealing these specializations is important for understanding NNs' inner working mechanisms. The only way to do this for a living system, whose neural response to a stimulus is not a known (let alone differentiable) function, is to build a feedback loop that exposes it to stimuli whose properties are iteratively varied in the direction of maximal response. To test such a loop on a living network, one should first learn how to run it quickly and efficiently, reaching the most effective stimuli (those that maximize certain neurons' activation) in the least possible number of iterations. We present a framework with an effective design of such a loop and successfully test it on an artificial spiking neural network (SNN, a model that mimics the behaviour of NNs in living brains). Our optimization method for activation maximization (AM) is based on a low-rank tensor decomposition (Tensor Train, TT) of the activation function's discretization over its domain, the latent parameter space of stimuli (CIFAR10-size color images, generated by either a VQ-VAE or an SN-GAN from their latent description vectors and fed to the SNN). To our knowledge, the present work is the first attempt to perform effective AM for SNNs. The source code of our framework, MANGO (Maximization of neural Activation via Non-Gradient Optimization), is available on GitHub.
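
To illustrate the shape of such a feedback loop, here is a minimal sketch, not the authors' implementation: the generator, snn and target_neuron names are hypothetical placeholders, and plain random search over a discretized latent grid stands in for the TT-based optimizer described in the abstract.

```python
# Illustrative sketch of a gradient-free activation-maximization loop.
# "generator", "snn" and "target_neuron" are placeholder callables/indices;
# random search below is a stand-in for the Tensor Train (TT) optimizer.
import numpy as np

def activation(z, generator, snn, target_neuron):
    """Black-box objective: latent vector -> image -> target neuron's activation."""
    image = generator(z)          # e.g. a VQ-VAE or SN-GAN decoder
    responses = snn(image)        # responses of the spiking network
    return responses[target_neuron]

def maximize_activation(generator, snn, target_neuron, dim, budget=1000, seed=0):
    """Gradient-free maximization over a discretized latent domain [-1, 1]^dim."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(-1.0, 1.0, 64)   # discretization of each latent axis
    best_z, best_val = None, -np.inf
    for _ in range(budget):             # each iteration = one stimulus exposure
        z = rng.choice(grid, size=dim)
        val = activation(z, generator, snn, target_neuron)
        if val > best_val:
            best_z, best_val = z, val
    return best_z, best_val
```

In the paper's setting, the random sampling step would be replaced by the TT-based search, which exploits the low-rank structure of the discretized activation function to reach effective stimuli in far fewer iterations.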
