Parsa Research Laboratory

Research Areas and Interests

At Parsa Research Laboratory (PRL), we develop novel, full-stack, brain-inspired computing paradigms for accurate, energy-efficient, fast, and reliable intelligence at the edge. These efforts span algorithm-hardware co-design, physics-informed neuromorphic computing, distributed learning, and safe lifelong learning.

PRL Research Topics
Sponsored Projects

Title: Multi-Phase Vector Symbolic Architectures for Distributed and Collective Intelligence in Multi-Agent Autonomous Systems (single-PI)
Sponsor: U.S. Army Ground Vehicle Systems Center (GVSC), Automotive Research Center (ARC)
Quad Members:
Dr. David Gorsich, Chief Scientist, U.S. Army Combat Capabilities Development Center (DEVCOM) Ground Vehicle Systems Center (GVSC)
Dr. Stephen Rapp, Thrust Area 5 Co-Leader, Senior Scientist U.S. Army DEVCOM GVSC
Dr. Matt Castanier, Thrust Area 5 Co-Leader, Research Mechanical Engineer, U.S. Army DEVCOM GVSC
Goal: Develop a distributed but collective intelligence (DCI) composed of swarms of agents with different levels of intelligence. This DCI enables efficient lifelong learning through hyperdimensional neuromorphic computing.
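The hyperdimensional computing underlying this project represents symbols as high-dimensional random vectors that can be combined and queried with simple, noise-tolerant operations. The sketch below illustrates the basic vector symbolic architecture (VSA) primitives — binding, bundling, and similarity — in a toy multi-agent scenario; the scenario, names, and dimensionality are illustrative, not the project's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (illustrative choice)

def random_hv():
    """Random bipolar hypervector, the atomic symbol in a VSA."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Bind two hypervectors (element-wise multiply); the result is
    dissimilar to both inputs and encodes a key-value association."""
    return a * b

def bundle(*hvs):
    """Bundle (superpose) hypervectors by majority vote; the result
    stays similar to each input, acting as an unordered set."""
    s = np.sum(hvs, axis=0)
    return np.where(s >= 0, 1, -1)

def sim(a, b):
    """Normalized similarity (cosine for bipolar hypervectors)."""
    return float(a @ b) / D

# Hypothetical example: two agents each report an observation as a bound
# key-value pair; the swarm's shared memory bundles the reports.
agent, obstacle, clear = random_hv(), random_hv(), random_hv()
key_status = random_hv()
memory = bundle(bind(key_status, obstacle),
                bind(key_status, clear),
                random_hv())  # unrelated noise in the bundle

# Unbinding the status key recovers a noisy superposition of the stored
# values; obstacle should score much higher than the unstored agent symbol.
probe = bind(key_status, memory)
print(sim(probe, obstacle), sim(probe, agent))
```

Because binding is its own inverse for bipolar vectors, any agent holding the shared key can query the collective memory, which is one reason VSAs suit distributed settings.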

Title: Learning Neuromorphic Physics-Informed Stochastic Regions of Attraction through Bayesian Optimization (single-PI)
Sponsor: Intel Neuromorphic Research Community (INRC)
Collaborator: Dr. Joe Hays, Naval Research Laboratory
Goal: The ultimate goal of this project is to enable agents that are learning to control themselves to have an inherent sense of their own stability, and thus to learn and expand their capabilities safely. This work uses Bayesian optimization (BO) to learn a distribution of Lyapunov regions of attraction (ROAs); an ROA defines the boundary between stability and instability. Once the distribution is learned offline, the policy will be able to identify, from state/action data, which parameter set characterizes the specific system under control, and thereby refine the ROA for that agent.
Impact: Endowing agents with a sense of their own stability, via an estimate of their own ROA drawn from an offline-characterized stochastic ROA distribution, is a strong step forward in our community's pursuit of safe lifelong motor learning. The ROA will enable the agent to maximize the exploration needed for learning while remaining safe.
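The project's Bayesian-optimization machinery is beyond a short sketch, but the central object — a stochastic ROA, i.e. a distribution of Lyapunov level sets induced by uncertain system parameters — can be illustrated with a crude Monte-Carlo estimate for a damped pendulum. This is a simplified stand-in for the BO search, not the project's actual method; the dynamics, Lyapunov candidate, and parameter range are all illustrative.

```python
import math
import numpy as np

def settles(theta0, omega0, b, dt=0.01, steps=4000):
    """Semi-implicit Euler integration of a damped pendulum (g/l = 1);
    returns True if the trajectory settles near the downward equilibrium."""
    th, om = theta0, omega0
    for _ in range(steps):
        om += dt * (-math.sin(th) - b * om)
        th += dt * om
    return th * th + om * om < 0.05

def roa_level(b, rng, n_samples=100):
    """Monte-Carlo estimate of the largest level c of the candidate
    Lyapunov function V = theta^2 + omega^2 whose sublevel set contains
    only initial states that converge (a stand-in for the BO search)."""
    c = 4.0
    for _ in range(n_samples):
        r = math.sqrt(rng.uniform(0.0, c))       # sample inside the current set
        a = rng.uniform(0.0, 2.0 * math.pi)
        th0, om0 = r * math.cos(a), r * math.sin(a)
        if not settles(th0, om0, b):
            c = min(c, th0 * th0 + om0 * om0)    # shrink past the failure
    return c

rng = np.random.default_rng(1)
# An uncertain damping coefficient yields a distribution of ROA sizes,
# mirroring the "stochastic ROA" idea (parameter range is illustrative).
dampings = rng.uniform(0.2, 0.8, size=5)
levels = [roa_level(b, rng) for b in dampings]
print(levels)
```

Replacing the random sampling with a Gaussian-process surrogate and an acquisition function over the level c would turn this brute-force search into the kind of sample-efficient BO the project pursues.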