YOLO Object Recognition
The YOLOv4 whitepaper compares the accuracy (AP) and speed (FPS) of multiple object-detection networks, and shows that YOLOv4 tracks the Pareto-optimality curve of the AP/FPS trade-off. This is the primary reason YOLO was chosen as the engine for this project, since the eventual hope is that it could provide real-time data.

Training YOLO
Training images for this model were sourced from the COCO dataset. Two separate models were created: one for an interior office environment and one for an exterior, pedestrian-centered environment.
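One way to prepare two environment-specific training sets is to partition the COCO annotations by category before training. The sketch below illustrates that split; the category lists and annotation field names are illustrative assumptions, not the project's actual configuration.

```python
# Sketch: partitioning COCO-style annotations into two training subsets,
# one for the interior office model and one for the exterior pedestrian
# model. Category lists here are assumptions for illustration only.
INTERIOR_CATEGORIES = {"chair", "laptop", "keyboard", "tv", "book"}
EXTERIOR_CATEGORIES = {"person", "bicycle", "car", "traffic light"}

def split_annotations(annotations):
    """Partition annotation dicts by category name into two subsets."""
    interior, exterior = [], []
    for ann in annotations:
        if ann["category"] in INTERIOR_CATEGORIES:
            interior.append(ann)
        elif ann["category"] in EXTERIOR_CATEGORIES:
            exterior.append(ann)
    return interior, exterior

sample = [
    {"image_id": 1, "category": "chair", "bbox": [10, 20, 50, 60]},
    {"image_id": 2, "category": "person", "bbox": [5, 5, 40, 120]},
]
indoor, outdoor = split_annotations(sample)
```

Each subset would then be exported in the label format the training framework expects before the two models are trained independently.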

Figure: average loss vs. iterations for this custom configuration of YOLO.
Data Preprocessing
The raw information from the camera is processed in MATLAB into .csv files that other platforms can utilize.

Spatial Data
Rhinoceros serves as the platform for our spatial data, with Grasshopper and Python as the engines that process the data from MATLAB.
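A core task for a Grasshopper Python component in this setup is projecting a detection's position in the camera frame into the Rhino model's plan coordinates. The sketch below shows one minimal version of that mapping; the camera origin and view extents are illustrative assumptions, not calibrated values from the project.

```python
# Sketch of the kind of mapping a Grasshopper Python component might
# perform: projecting a detection's normalized image position into
# plan-space coordinates in the Rhino model. The constants below are
# illustrative assumptions, not calibrated camera parameters.
CAMERA_ORIGIN = (12.0, 4.0)  # plan-space corner of the camera's view (metres)
VIEW_EXTENTS = (6.0, 8.0)    # plan-space width/depth covered by the frame (metres)

def image_to_plan(nx, ny):
    """Map normalized image coordinates (0..1) to plan-space (x, y)."""
    return (CAMERA_ORIGIN[0] + nx * VIEW_EXTENTS[0],
            CAMERA_ORIGIN[1] + ny * VIEW_EXTENTS[1])

pt = image_to_plan(0.5, 0.25)
```

A real deployment would replace this linear mapping with a calibrated homography per camera, but the data flow (csv row in, model-space point out) stays the same.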

Visualization
After processing, the data can then be visualized and interpreted in Tableau.

This network was trained using pre-labeled images from the COCO (Common Objects in Context) database.