At Parsa Research Laboratory (PRL) we are developing novel, full-stack, brain-inspired paradigms for accurate, energy-efficient, fast, and reliable intelligence at the edge. These efforts span algorithm-hardware co-design, physics-informed neuromorphic computing, distributed learning, and safe lifelong learning.
Title: DEJA-VU: Design of Joint 3D Solid-State Learning Machines for Various Cognitive Use-Cases
Sponsor: National Science Foundation
Support: $2,400,000 for three years (Oct. 2023 - Sept. 2026), PI-Parsa share: $550,000
Collaborators:
Prof. Akhilesh Jaiswal, University of Wisconsin Madison
Prof. Babak Shahbaba, University of California Irvine
Prof. Norbert Fortin, University of California Irvine
Goal: In this multi-university, cross-disciplinary team, we design and develop a new class of computer chips (Cognitive Computing Machines, or C2M) inspired by the cognitive functions of the hippocampus.
Title: Multi-Phase Vector Symbolic Architectures for Distributed and Collective Intelligence in Multi-Agent Autonomous Systems (single-PI)
Sponsor: U.S. Army Ground Vehicle Systems Center (GVSC), Automotive Research Center (ARC)
Support: $254,000 (Jan. 2023 - Dec. 2024)
Quad Members:
Dr. David Gorsich, Chief Scientist, U.S. Army Combat Capabilities Development Center (DEVCOM) Ground Vehicle Systems Center (GVSC)
Dr. Stephen Rapp, Thrust Area 5 Co-Leader, Senior Scientist U.S. Army DEVCOM GVSC
Dr. Matt Castanier, Thrust Area 5 Co-Leader, Research Mechanical Engineer, U.S. Army DEVCOM GVSC
Industry Partner: Andrew Capodieci, Director of Robotics, Neya Systems
Goal: Developing a distributed yet collective intelligence (DCI) that enables efficient lifelong learning through hyperdimensional neuromorphic computing.
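To illustrate the hyperdimensional (vector symbolic) computing substrate this project builds on, the sketch below shows the core bind/bundle/unbind operations on bipolar hypervectors, with multiple "agents" contributing role-filler records to one shared hypervector. This is a minimal, generic VSA example; the agent names, roles, and encodings are illustrative assumptions, not the project's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (high-dimensional by design)

def random_hv():
    """Random bipolar hypervector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding: elementwise multiplication (self-inverse for bipolar HVs)."""
    return a * b

def bundle(*hvs):
    """Bundling: elementwise majority vote over the summed hypervectors."""
    return np.sign(np.sum(hvs, axis=0))

def similarity(a, b):
    """Cosine similarity; near 0 for unrelated random hypervectors."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical scenario: each agent encodes an observation as a
# role-filler binding, and the records are bundled into one shared HV.
role_obstacle, role_terrain = random_hv(), random_hv()
rock, mud = random_hv(), random_hv()

agent1_record = bind(role_obstacle, rock)
agent2_record = bind(role_terrain, mud)
shared = bundle(agent1_record, agent2_record)

# Unbinding a role from the shared HV noisily recovers that agent's filler.
recovered = bind(shared, role_obstacle)
print(similarity(recovered, rock))  # high (noisy but well above chance)
print(similarity(recovered, mud))   # near zero
```

The key property shown here is that a single fixed-width vector can carry several agents' contributions at once, and each contribution remains recoverable by unbinding its role, which is what makes the representation attractive for distributed, collective memory.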
Title: Learning Neuromorphic Physics-Informed Stochastic Regions of Attraction through Bayesian Optimization (single-PI)
Sponsor: Intel Neuromorphic Research Community (INRC)
Support: $194,616 (Jan. 2022 - Dec. 2024)
Collaborator: Dr. Joe Hays, U.S. Naval Research Laboratory
Goal: The ultimate goal of this project is to give agents that are learning to control themselves an inherent sense of their own stability, and thus enable them to learn and expand their capabilities safely. This work uses Bayesian optimization (BO) to learn a distribution over Lyapunov regions of attraction (ROAs); an ROA defines the boundary between stability and instability. Once the distribution is learned offline, the policy can discriminate, from state/action data, which specific parameter set describes the system under control, and thereby refine the ROA for that specific agent.