Abstract
Artificial neural networks are inspired by the functioning of the brain but differ from it in several key aspects. In biological neural networks, information is encoded in the spike times of neurons. In this survey talk, we first address the expressiveness of spiking neural networks and derive a universal representation theorem.
Furthermore, it is implausible that biological learning is based on gradient descent. This has prompted researchers to propose various biologically inspired learning procedures, but these methods still lack a solid theoretical foundation. While a statistical theory for artificial neural networks has been developed in recent years, the aim now is to extend this theory to biological neural networks, as the future of AI is likely to draw even more inspiration from biology. We will explore the challenges and present some recent theoretical results.
Joint work with Niklas Dexheimer, Sascha Gaudlitz, Shayan Hundrieser, Insung Kong, and Philipp Tuchel.