Training and Validation

YOLO Object Recognition

 

The YOLO v4 whitepaper charts the accuracy (AP) and speed (FPS) of multiple neural networks for detecting objects. YOLO v4 aims to track the Pareto-optimality curve (AP vs. FPS): no competing network in the comparison is both faster and more accurate. This is the primary reason YOLO was chosen as the engine for this project, since the hope is that it can eventually provide real-time data.
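The Pareto-optimality idea can be made concrete with a small sketch: a detector is on the frontier if no other detector beats it on both AP and FPS. The detector names and numbers below are hypothetical placeholders, not figures from the YOLO v4 paper.

```python
def pareto_frontier(points):
    """Return the (name, fps, ap) entries not dominated by any other
    entry, i.e. no other detector is at least as fast AND as accurate,
    and strictly better on one axis."""
    frontier = []
    for name, fps, ap in points:
        dominated = any(
            f >= fps and a >= ap and (f > fps or a > ap)
            for _, f, a in points
        )
        if not dominated:
            frontier.append((name, fps, ap))
    return frontier

# Hypothetical (FPS, AP) numbers for illustration only
detectors = [
    ("model_a", 60, 0.40),
    ("model_b", 30, 0.45),
    ("model_c", 20, 0.42),  # dominated by model_b (slower and less accurate)
]
print(pareto_frontier(detectors))
```

Here `model_c` drops out because `model_b` is both faster and more accurate, while `model_a` and `model_b` trade speed against accuracy and both stay on the frontier.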

Training YOLO

Training images for this model were sourced from the COCO dataset. Two separate models were trained: one for an interior office environment and one for an exterior, pedestrian-centered environment.
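Training Darknet on COCO images requires converting COCO's pixel-space bounding boxes (`[x_min, y_min, width, height]`) into YOLO's normalized center-based label format. A minimal sketch of that conversion:

```python
def coco_to_yolo(bbox, img_w, img_h):
    """Convert a COCO [x_min, y_min, w, h] pixel box into YOLO's
    normalized (x_center, y_center, w, h) label format."""
    x, y, w, h = bbox
    return ((x + w / 2) / img_w, (y + h / 2) / img_h, w / img_w, h / img_h)

# e.g. a 100x50 box whose top-left corner is at (200, 100) in a 640x480 image
print(coco_to_yolo([200, 100, 100, 50], 640, 480))
```

Each converted box is written to a per-image `.txt` label file, prefixed with its class index, which is the layout Darknet expects.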

Average loss vs. iterations for this custom configuration of YOLO.
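A loss-vs-iterations curve like this can be extracted from Darknet's console log. A minimal sketch, assuming the per-iteration lines resemble Darknet's usual `1: 1345.2, 1345.2 avg loss, ...` format (the exact output can vary between Darknet builds):

```python
import re

# Assumed log-line shape: "<iter>: <loss>, <avg_loss> avg loss, ..."
LINE = re.compile(r"^\s*(\d+):\s*[\d.]+,\s*([\d.]+)\s+avg", re.M)

def avg_loss_curve(log_text):
    """Extract (iteration, average_loss) pairs from a Darknet training log."""
    return [(int(i), float(loss)) for i, loss in LINE.findall(log_text)]

sample = (
    "1: 1345.2, 1345.2 avg loss, 0.0001 rate\n"
    "2: 900.1, 1120.6 avg loss, 0.0001 rate\n"
)
print(avg_loss_curve(sample))
```

The resulting pairs can be plotted directly to reproduce the curve above.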

Data Preprocessing

The raw information from the camera is processed in MATLAB into .csv files that other platforms can utilize.
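Downstream tools then only need to parse the .csv output. A minimal sketch of loading it; the column names (`frame`, `label`, `x`, `y`, `confidence`) are assumptions about the export schema, not a documented format:

```python
import csv
import io

def load_detections(csv_text):
    """Parse MATLAB-exported detections from .csv text into dicts.
    Column names here are assumed, not a documented schema."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [
        {"frame": int(r["frame"]), "label": r["label"],
         "x": float(r["x"]), "y": float(r["y"]),
         "confidence": float(r["confidence"])}
        for r in rows
    ]

sample = "frame,label,x,y,confidence\n1,person,320.5,240.0,0.91\n"
print(load_detections(sample))
```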

Spatial Data

Rhinoceros serves as the platform for our spatial data, with Grasshopper and Python as the engine that processes the data from MATLAB.
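The core of that processing step is mapping pixel coordinates from the camera frame onto the plan geometry in Rhino. Inside Grasshopper this would use Rhino's own geometry types; the plain-Python sketch below stands in for it, with the image and plan extents as assumed placeholder values:

```python
def pixel_to_plan(x_px, y_px, img_size=(640, 480), plan_size=(10.0, 7.5)):
    """Linearly map a pixel coordinate onto a plan rectangle
    (both with origin at the top-left corner). The default image
    and plan dimensions are illustrative assumptions."""
    sx = plan_size[0] / img_size[0]  # plan units per pixel, horizontal
    sy = plan_size[1] / img_size[1]  # plan units per pixel, vertical
    return (x_px * sx, y_px * sy)

print(pixel_to_plan(320, 240))  # center of the frame maps to plan center
```

A real deployment would replace this linear map with a calibrated camera-to-floor homography, but the data flow is the same.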

Visualization

After processing, the data can be interpreted in Tableau.
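Tableau works best with a flat table, so one convenient hand-off is per-frame object counts written back out as .csv. A minimal sketch, with the field names again assumed rather than documented:

```python
import csv
import io
from collections import Counter

def counts_to_csv(detections):
    """Flatten detection dicts into a frame/label/count table that
    Tableau can pivot directly. Field names are assumptions."""
    counts = Counter((d["frame"], d["label"]) for d in detections)
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["frame", "label", "count"])
    for (frame, label), n in sorted(counts.items()):
        writer.writerow([frame, label, n])
    return out.getvalue()

dets = [{"frame": 1, "label": "person"}, {"frame": 1, "label": "person"},
        {"frame": 1, "label": "chair"}]
print(counts_to_csv(dets))
```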

This project was built using Darknet, created by Joseph Redmon and accessible here.

The original repository can be found here.

YOLO is a real-time neural network that is faster than, and similar in accuracy to, networks such as RetinaNet (Focal Loss), Google's TensorFlow EfficientDet, and Facebook's PyTorch Detectron.

 

You can find the whitepaper on YOLO v3 here.

 

The full network structure of YOLO v4 can be viewed here.

This network was trained using pre-labeled images from the COCO (Common Objects in Context) dataset, which can be found here.