The Aegis System Software Flowchart

How Aegis Works

The Aegis System enhances a campus's existing CCTV security camera infrastructure to enable real-time weapon detection. Aegis is designed to be automatic, secure, and capable of scaling to thousands of cameras while keeping costs low.

In the background, the system continuously processes live camera footage with a machine-learning model; when the model detects a weapon (gun, knife, etc.) on screen, it uploads the footage to Cloud Storage.
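The background pipeline above can be sketched in Python. This is a minimal illustration, not the project's actual code: the cascade file `weapon_cascade.xml`, the bucket name `aegis-footage`, the camera index, and the clip filename are all hypothetical.

```python
# Sketch of the background detection-and-upload loop.
# Assumptions (not from the original document): a trained cascade file
# "weapon_cascade.xml", a GCS bucket "aegis-footage", camera index 0,
# and a pre-written clip file "clip.mp4".
import datetime


def clip_object_name(camera_id: str, ts: datetime.datetime) -> str:
    """Build the Cloud Storage object path for an uploaded clip."""
    return f"clips/{camera_id}/{ts.strftime('%Y%m%dT%H%M%S')}.mp4"


def run_detection_loop(camera_id: str = "cam-0") -> None:
    # Third-party imports are deferred so the helper above stays
    # testable without OpenCV or the Cloud Storage client installed.
    import cv2
    from google.cloud import storage

    cascade = cv2.CascadeClassifier("weapon_cascade.xml")
    bucket = storage.Client().bucket("aegis-footage")
    cap = cv2.VideoCapture(0)

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        weapons = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                           minNeighbors=5)
        if len(weapons) > 0:
            # In the real system, a short clip around this frame would be
            # written to disk first; here we assume "clip.mp4" exists.
            name = clip_object_name(camera_id,
                                    datetime.datetime.utcnow())
            bucket.blob(name).upload_from_filename("clip.mp4")
```

Keying each object path to the camera ID and a timestamp keeps uploads from different cameras from colliding in the bucket.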

Once footage is uploaded, a serverless function sends an SMS notification to users' mobile devices via the Twilio API; the message contains a link to the React web app, where the user can sign in and view the footage.
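One way to realize this step is a Cloud Storage–triggered function that composes and sends the SMS through Twilio's Python client. This is a hedged sketch, not the project's code: the environment-variable names and the `APP_BASE_URL` link format are assumptions.

```python
# Sketch of the upload-triggered notification function.
# Assumptions (hypothetical): Twilio credentials and phone numbers in
# env vars, and a web-app link of the form <APP_BASE_URL>/footage/<name>.
import os


def build_alert_message(object_name: str) -> str:
    """Compose the SMS body containing a link to the React web app."""
    base_url = os.environ.get("APP_BASE_URL", "https://aegis.example.com")
    return (f"Aegis alert: possible weapon detected. "
            f"Review footage: {base_url}/footage/{object_name}")


def on_footage_uploaded(cloud_event) -> None:
    """Triggered when a clip lands in the bucket; deployed as a
    CloudEvent Google Cloud Function."""
    # Deferred import so build_alert_message is testable without Twilio.
    from twilio.rest import Client

    object_name = cloud_event.data["name"]  # path of the uploaded clip
    client = Client(os.environ["TWILIO_SID"], os.environ["TWILIO_TOKEN"])
    client.messages.create(
        body=build_alert_message(object_name),
        from_=os.environ["TWILIO_FROM"],
        to=os.environ["ALERT_TO"],  # subscribed user's number
    )
```

Keeping the message-building logic in a pure function makes it easy to unit-test without mocking Twilio or the storage trigger.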

If the threat is credible, the user can notify the authorities through the app, which sends another Twilio SMS directly to the police.
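The "notify authorities" action could be served by an HTTP endpoint that the app calls. Again a sketch under assumptions: the JSON payload shape (`location`, `footage_url`) and the `POLICE_TO` dispatch number are hypothetical names, not taken from the original document.

```python
# Sketch of the "notify authorities" HTTP endpoint.
# Assumptions (hypothetical): the app POSTs JSON with the camera
# location and a footage link; the dispatch number is in POLICE_TO.
import os


def build_police_message(location: str, footage_url: str) -> str:
    """Compose the SMS sent to the police dispatch number."""
    return (f"Aegis: credible weapon threat reported at {location}. "
            f"Footage: {footage_url}")


def notify_authorities(request):
    """Deployed as an HTTP Cloud Function; called from the web app."""
    # Deferred import so the message builder is testable without Twilio.
    from twilio.rest import Client

    payload = request.get_json()
    client = Client(os.environ["TWILIO_SID"], os.environ["TWILIO_TOKEN"])
    client.messages.create(
        body=build_police_message(payload["location"],
                                  payload["footage_url"]),
        from_=os.environ["TWILIO_FROM"],
        to=os.environ["POLICE_TO"],
    )
    return ("sent", 200)
```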

Frontend

The frontend is a single-page web application built with React (create-react-app) and hosted on GCP. Users sign in via Firebase Authentication. The UI is built with the Chakra UI component library, which gives the site a consistent look and feel.

Backend

The backend is a serverless microservice architecture. Firebase, Google's backend-as-a-service, is used for nearly everything: it hosts the frontend and stores the footage. Firebase Cloud Functions provide HTTP endpoints that connect the various services, while the Twilio API delivers the SMS notifications.

ML-Model

A Haar Cascade classifier processes live footage and detects weapons. For the MVP, the OpenCV computer-vision library provides the tools for training and testing the model and for capturing camera footage. The first iterations used an open-source model trained on images of guns; a model trained on live security footage is currently in progress.
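Applying the cascade to a single frame might look like the sketch below. The cascade filename, the `detectMultiScale` parameters, and the minimum-area filter for discarding tiny false positives are illustrative assumptions, not values from the project.

```python
# Sketch of per-frame Haar cascade detection with OpenCV.
# Assumptions (hypothetical): a cascade file "weapon_cascade.xml" and
# a minimum box area used to drop very small false positives.


def filter_detections(boxes, min_area=900):
    """Drop detection boxes (x, y, w, h) smaller than min_area pixels."""
    return [(x, y, w, h) for (x, y, w, h) in boxes if w * h >= min_area]


def detect_weapons(frame):
    """Return filtered weapon bounding boxes for one BGR frame."""
    # Deferred import so filter_detections is testable without OpenCV.
    import cv2

    cascade = cv2.CascadeClassifier("weapon_cascade.xml")
    # Haar cascades operate on grayscale images.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                     minNeighbors=5, minSize=(30, 30))
    return filter_detections([tuple(b) for b in boxes])
```

Raising `minNeighbors` or the area threshold trades recall for fewer false alarms, which matters for a system that pages users over SMS.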