Four years ago, UC Santa Cruz’s Jason Eshraghian developed a Python library that combines neuroscience with artificial intelligence to create spiking neural networks, a machine learning method that takes inspiration from the brain’s ability to efficiently process data. Now, his open source code library, called “snnTorch,” has surpassed 100,000 downloads and is used in a wide variety of projects, from NASA satellite tracking efforts to semiconductor companies optimizing chips for AI.
A new paper published in the journal Proceedings of the IEEE not only documents the code library but is also intended as a candid educational resource for students and any other programmers interested in learning about brain-inspired AI.
“It’s exciting because it shows people are interested in the brain, and that people have identified that neural networks are really inefficient compared to the brain,” said Eshraghian, an assistant professor of electrical and computer engineering. “People are concerned about the environmental impact [of the costly power demands] of neural networks and large language models, and so this is a very plausible direction forward.”
Building snnTorch
Spiking neural networks emulate the brain and biological systems to process information more efficiently. The brain’s neurons are at rest until there is a piece of information for them to process, which causes their activity to spike. Similarly, a spiking neural network only begins processing data when there is an input into the system, rather than constantly processing data like traditional neural networks.
“We want to take all the benefits of the brain and its power efficiency and smush them into the functionality of artificial intelligence – so taking the best of both worlds,” Eshraghian said.
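The core mechanism can be sketched in a few lines of code. The snippet below, written in the style of snnTorch's own tutorials, simulates a single leaky integrate-and-fire neuron; the decay rate and input values here are illustrative, chosen only to show the neuron staying silent until input arrives.

```python
# A minimal sketch of a leaky integrate-and-fire (LIF) neuron in snnTorch.
# The membrane potential decays each timestep (beta), and the neuron emits a
# spike only when input current pushes it past threshold; otherwise it is silent.
import torch
import snntorch as snn

lif = snn.Leaky(beta=0.8)   # beta: how much membrane potential carries over per step
mem = lif.init_leaky()      # membrane potential starts at rest

# A mostly quiet input stream: the neuron does no work until current arrives.
inputs = torch.tensor([0.0, 0.0, 1.5, 0.0, 0.0, 1.5, 1.5, 0.0])

for step, cur in enumerate(inputs):
    spk, mem = lif(cur, mem)   # spk is 1.0 only when mem crosses threshold
    print(f"t={step}  input={cur.item():.1f}  mem={mem.item():.2f}  spike={spk.item():.0f}")
```

Run as written, the neuron fires only at the timesteps where input current drives its membrane potential over threshold, mirroring the at-rest-until-needed behavior described above.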
Eshraghian began building the code for a spiking neural network as a passion project during the pandemic, partly as a way to teach himself Python. A chip designer by training, he became interested in learning to code after recognizing that computing chips could be optimized for power efficiency by co-designing the software and the hardware to ensure they best complement each other.
Now, snnTorch is used by thousands of programmers around the world on a variety of projects, from NASA's satellite tracking efforts to the work of major chip designers such as Graphcore.
While building the Python library, Eshraghian created code documentation and educational materials, which came naturally to him as he taught himself the language. The documents, tutorials, and interactive coding notebooks he made later took off in the community, becoming the first point of entry for many people learning about neuromorphic engineering and spiking neural networks. He sees this as one of the main reasons the library became so popular.
An honest resource
Knowing that these educational materials could be very valuable to the growing community of computer scientists and beyond who were interested in the field, Eshraghian began compiling his extensive documentation into a paper, which has now been published in the Proceedings of the IEEE, a leading computing journal.
The paper acts as a companion to the snnTorch code library and is structured like a tutorial, and an opinionated one at that: it discusses open questions among brain-inspired deep learning researchers and offers a perspective on the future of the field. Eshraghian said the paper is intentionally upfront with its readers that neuromorphic computing is an evolving, unsettled field, in an effort to spare students the frustration of hunting for theoretical justifications for coding decisions that the research community itself doesn't yet have.
“This paper is painfully honest, because students deserve that,” Eshraghian said. “There’s a lot of things that we do in deep learning, and we just don't know why they work. A lot of times we want to claim that we did something intentionally, and we published because we went through a series of rigorous experiments, but here we say just: this is what works best and we have no idea why.”
The paper contains blocks of code, a format unusual for research papers. Some of these code blocks are accompanied by explanations acknowledging that certain areas remain deeply unsettled, while offering insight into why researchers think particular approaches may be successful. Eshraghian said this honest approach has been positively received in the community, and he has even been told the paper is being used in onboarding materials at neuromorphic hardware startups.
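A well-known example of that kind of heuristic is the surrogate gradient. A spike is a hard on/off event with no useful gradient, so spiking networks are typically trained by substituting a smooth stand-in on the backward pass, a trick that works well in practice even though its theory is still being worked out. The sketch below shows the pattern as snnTorch exposes it; the slope value is an empirical knob, not a derived constant.

```python
# A sketch of the surrogate gradient trick: the forward pass keeps hard 0/1
# spikes, but the backward pass substitutes a smooth function so gradients can
# flow. The slope below is an empirical choice; there is no settled theory
# saying why this particular shape works best.
import snntorch as snn
from snntorch import surrogate

spike_grad = surrogate.fast_sigmoid(slope=25)       # slope chosen empirically
lif = snn.Leaky(beta=0.9, spike_grad=spike_grad)    # neuron now trainable by backprop
```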
“I don't want my research to put people through the same pain I went through,” he said.
Learning from and about the brain
The paper offers a perspective on how researchers in the field might navigate some of the limitations of brain-inspired deep learning that stem from the fact that overall, our understanding of how the brain functions and processes information is quite limited.
For AI researchers to move toward more brain-like learning mechanisms in their deep learning models, they need to identify the correlations and discrepancies between deep learning and biology, Eshraghian said. One key difference is that brains can't revisit all of the data they have ever taken in, the way AI models can; instead, they work with the data arriving in real time, which could open up opportunities for greater energy efficiency.
“Brains aren't time machines, they can't go back — all your memories are pushed forward as you experience the world, so training and processing are coupled together,” Eshraghian said. “One of the things that I make a big deal of in the paper is how we can apply learning in real time.”
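One rough way to picture that constraint in code: an offline learner can loop over its dataset many times, while an online, brain-like learner sees each input once, updates immediately, and moves on. The toy sketch below illustrates the online pattern with a simple least-mean-squares update; the learning rule and all values are illustrative, not taken from the paper.

```python
# Toy sketch of online learning: each input is processed once, the model
# updates immediately, and the data is never revisited, so training and
# processing are coupled, loosely like a brain experiencing the world.
# (Illustrative least-mean-squares rule; not a method from the paper.)
import torch

torch.manual_seed(0)
w = torch.tensor(0.0)       # a single weight, learned on the fly
lr = 0.1                    # illustrative learning rate

stream = [(x, 2.0 * x) for x in torch.randn(100)]   # data arrives as a stream; true rule is y = 2x

for x, y in stream:                 # one pass only; no going back over old data
    y_hat = w * x                   # process the current input
    w = w + lr * (y - y_hat) * x    # learn from it immediately, then move on

print(f"learned weight: {w.item():.2f}")   # approaches 2.0 after a single pass
```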
Another area of exploration in the paper is a fundamental concept in neuroscience: neurons that fire together, wire together. When two neurons are triggered to send out a signal at the same time, the pathway between them is strengthened. How the brain learns at the scale of the whole organ, however, remains mysterious.
The “fire together, wire together” principle has traditionally been seen as at odds with backpropagation, the standard method for training deep learning models, but Eshraghian suggests that the two processes may be complementary, opening up new areas of exploration for the field.
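The local rule itself is easy to state in code. The toy sketch below strengthens a single weight only at timesteps when both neurons fire, using nothing but locally available activity; the spike trains and learning rate are illustrative, and the bare-bones Hebbian rule shown is a generic textbook form rather than anything specific to snnTorch.

```python
# Toy sketch of a local Hebbian update: the weight between two neurons grows
# only at timesteps when both fire, using purely local information, with no
# global error signal as in backpropagation. (Spike trains, learning rate,
# and the rule itself are illustrative.)
import torch

lr = 0.1                        # illustrative learning rate
w = torch.tensor(0.5)           # weight from neuron A to neuron B

pre_spikes  = torch.tensor([1., 0., 1., 1., 0.])   # neuron A activity over time
post_spikes = torch.tensor([1., 0., 0., 1., 0.])   # neuron B activity over time

for pre, post in zip(pre_spikes, post_spikes):
    w = w + lr * pre * post     # strengthen only when both fire together

print(f"final weight: {w.item():.2f}")   # two coincident spikes: 0.5 + 2 * 0.1 = 0.70
```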
Eshraghian is also excited about working with cerebral organoids – models of brain tissue grown from stem cells – to learn more about how the brain processes information. He’s currently collaborating with biomolecular engineering researchers in the UCSC Genomics Institute’s Braingeneers group to explore these questions with organoid models. This is a unique opportunity for UC Santa Cruz engineers to incorporate “wetware” – biological models used for computing research – into the software/hardware co-design paradigm that is prevalent in the field. The snnTorch code could even provide a platform for simulating organoids, which can be difficult to maintain in the lab.
“[The Braingeneers] are building the biological instruments and tools that we can use to get a better feel for how learning can happen, and how that might translate in order to make deep learning more efficient,” Eshraghian said.
Brain-inspired learning at UCSC and beyond
Eshraghian now uses the concepts developed in his library and the recent paper in “Brain-Inspired Deep Learning,” his neuromorphic computing class at UC Santa Cruz. Undergraduate and graduate students from a range of academic disciplines take the class to learn the basics of deep learning and complete a project in which they write their own tutorial for, and potentially contribute to, snnTorch.
“It's not just kind of coming out of the class with an exam or getting an A plus, it's now making a contribution to something, and being able to say that you've done something tangible,” Eshraghian said.
Meanwhile, the preprint version of the recent IEEE paper continues to receive contributions from researchers around the world, a reflection of the dynamic, open-source nature of the field. A new NSF grant on which Eshraghian is a co-principal investigator will support students attending the month-long Telluride Neuromorphic & Cognition Engineering workshop.
Eshraghian is collaborating with researchers to push the field forward in a number of ways, from making biological discoveries about the brain, to pushing the limits of neuromorphic chips to handle low-power AI workloads, to bringing the spiking neural network style of computing to other domains such as natural physics.
Discord and Slack channels dedicated to discussing the snnTorch code support a thriving environment of collaboration across industry and academia. Eshraghian even recently came across a job posting that listed proficiency in snnTorch as a desired skill.