Enhancing Aircraft Surveillance: A YOLOv8 Tracking and Counting System

Daniel García
Sep 9, 2023


In an era of rapid technological advancement, efficient and accurate surveillance of our skies has never been more crucial. Whether for managing airspace, bolstering airport security, or monitoring environmental impact, the ability to detect and count aircraft in real time plays a pivotal role. Traditional methods have often fallen short of these demands, but the intersection of deep learning and modern computer vision offers a practical solution. In this article, we explore YOLOv8 (You Only Look Once, version 8), an object detection system known for its speed and precision, and show how it can be used not only to detect planes in video feeds but also to assign each aircraft a persistent track ID so that it is counted only once. Our data source for this project is the ADS-B Exchange, which provides a valuable repository of aircraft tracking information.

Getting started with YOLOv8

# Import libraries
from ultralytics import YOLO
import cv2
import argparse
import numpy as np

To get started with tracking objects using YOLOv8, you can create a simple Python script that applies the tracking algorithm on a video and displays the output in a default OpenCV window.
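The snippets below reference an `args` object that is never defined in the original post. A minimal argument parser along these lines would supply it; the `--source` and `--save_dir` flag names are assumptions based on how `args` is used later:

```python
import argparse

def parse_args(argv=None):
    """Parse the input video path and the output path from the command line."""
    parser = argparse.ArgumentParser(description="YOLOv8 aircraft tracking and counting")
    parser.add_argument("--source", type=str, required=True,
                        help="path to the input video file")
    parser.add_argument("--save_dir", type=str, default="output.mp4",
                        help="path where the annotated video is saved")
    return parser.parse_args(argv)

# In a script you would call parse_args() with no arguments to read sys.argv
args = parse_args(["--source", "planes.mp4"])
```

Passing an explicit list, as above, is also a convenient way to exercise the parser from a notebook or a test.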

# Load the input video and prepare the output writer
cap = cv2.VideoCapture(args.source)

# Check if the video file was opened successfully
if not cap.isOpened():
    print("Error: Could not open video file.")
    raise SystemExit(1)

# Get video properties
fps = int(cap.get(cv2.CAP_PROP_FPS))
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Define the codec and create a VideoWriter object
fourcc = cv2.VideoWriter_fourcc(*"mp4v")  # use an appropriate codec (e.g. 'XVID', 'MJPG', 'H264')
out = cv2.VideoWriter(args.save_dir, fourcc, fps, (width, height))

while True:
    # Read a frame from the video
    ret, frame = cap.read()

    # Break the loop if we have reached the end of the video
    if not ret:
        break

    # Detection, tracking and drawing happen here on each frame

    out.write(frame)

# Release input and output
cap.release()
out.release()

YOLOv8 has a built-in function to run tracking without using external libraries.

model = YOLO('path/to/model')  # model creation
results = model.track(frame, persist=True)[0]  # persist=True keeps track IDs across frames

Each tracked detection is then drawn on the original frame.

boxes = results.cpu().numpy().boxes.data
for output in boxes:
    # Bounding box corners: top-left and bottom-right
    bbox_tl_x = int(output[0])
    bbox_tl_y = int(output[1])
    bbox_br_x = int(output[2])
    bbox_br_y = int(output[3])

    track_id = int(output[4])
    score = float(output[5])  # confidence is a float, not an int
    class_ = int(output[6])
    classColor = [int(c) for c in colorList[class_]]

    # Draw the detection rectangle
    cv2.rectangle(frame, (bbox_tl_x, bbox_tl_y), (bbox_br_x, bbox_br_y),
                  color=classColor, thickness=2)
    # Draw the class name and track ID
    cv2.putText(frame, f"{class_names[class_].capitalize()} - {track_id}",
                (bbox_tl_x, bbox_tl_y), cv2.FONT_HERSHEY_COMPLEX, 1,
                color=classColor, thickness=1)
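Because `persist=True` keeps track IDs stable across frames, counting each aircraft exactly once reduces to remembering which IDs have already appeared. A minimal sketch (the `seen_ids` and `update_count` names are my own, not from the original code):

```python
# Count each aircraft exactly once by remembering which track IDs we have seen
seen_ids = set()

def update_count(track_ids):
    """Add newly seen track IDs and return the running unique-aircraft count."""
    seen_ids.update(track_ids)
    return len(seen_ids)

# Example: IDs from three consecutive frames; re-appearing IDs are not double-counted
print(update_count([1, 2]))     # → 2
print(update_count([1, 2, 3]))  # → 3
print(update_count([2, 3]))     # → 3
```

Inside the main loop, the IDs for the current frame would come from the `output[4]` column of each tracked box.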

Check the full yolov8-object-tracking code on GitHub.

If you liked this post, you can support me here ☕😃.

Facing AI challenges? Contact me 👉here or here👈 for results-driven solutions🤓.

You can also find me on GitHub, Medium and LinkedIn.
