# Track Objects
Combine object detection with multi-object tracking to follow objects through video sequences, maintaining consistent IDs even through occlusions and fast motion.
What you'll learn:
- Run tracking from the command line with a single command
- Configure detection models and tracking algorithms
- Visualize results with bounding boxes, IDs, and trajectories
- Build custom tracking pipelines in Python
## Install
Use the base install for tracking with your own detector. The detection extra adds inference-models for built-in detection.
For more options, see the install guide.
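For example, a minimal install might look like this (the package name and the `detection` extra are taken from the description above; check the install guide for the exact spelling):

```bash
pip install trackers
pip install "trackers[detection]"  # adds inference-models for built-in detection
```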
## Quickstart
Read frames from video files, webcams, RTSP streams, or image directories. Each frame flows through detection to find objects, then through tracking to assign IDs.
Track objects with one command. Uses RF-DETR Nano and ByteTrack by default.
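For example (the file path is a placeholder; flags are described in the CLI Reference below):

```bash
trackers track --source source.mp4 --output output.mp4
```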
While trackers focuses on ID assignment, this example uses inference-models for detection and supervision for format conversion to demonstrate end-to-end usage.
```python
import cv2
import supervision as sv
from inference import get_model
from trackers import ByteTrackTracker

model = get_model("rfdetr-nano")
tracker = ByteTrackTracker()

cap = cv2.VideoCapture("source.mp4")
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Detect objects, convert to supervision format, then assign track IDs.
    result = model.infer(frame)[0]
    detections = sv.Detections.from_inference(result)
    detections = tracker.update(detections)
cap.release()
```
## Trackers
Trackers assign stable IDs to detections across frames, maintaining object identity through motion and occlusion.
Select a tracker with `--tracker` and tune its behavior with `--tracker.*` arguments.
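For example, to select ByteTrack and loosen its occlusion handling (values are illustrative; see the CLI Reference for defaults):

```bash
trackers track --source source.mp4 \
  --tracker bytetrack \
  --tracker.lost_track_buffer 60 \
  --tracker.minimum_consecutive_frames 5
```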
Customize the tracker by passing parameters to the constructor, then call update() each frame and reset() between videos.
```python
import cv2
import supervision as sv
from inference import get_model
from trackers import ByteTrackTracker

model = get_model("rfdetr-nano")
# Keep lost tracks alive longer and require more evidence
# before confirming a new track.
tracker = ByteTrackTracker(
    lost_track_buffer=60,
    minimum_consecutive_frames=5,
)

cap = cv2.VideoCapture("source.mp4")
while True:
    ret, frame = cap.read()
    if not ret:
        break
    result = model.infer(frame)[0]
    detections = sv.Detections.from_inference(result)
    detections = tracker.update(detections)
cap.release()
```
## Detectors
Trackers don't detect objects—they link detections across frames. A detection or segmentation model provides per-frame bounding boxes or masks that the tracker uses to assign and maintain IDs.
Configure detection with `--model.*` arguments. Filter by confidence and class before tracking.
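For example, to swap in a larger model and restrict tracking to a couple of classes (values are illustrative):

```bash
trackers track --source source.mp4 \
  --model rfdetr-small \
  --model.confidence 0.3 \
  --classes person,car
```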
Trackers are modular—combine any detection library with any tracker. This example uses inference with RF-DETR.
```python
import cv2
import supervision as sv
from inference import get_model
from trackers import ByteTrackTracker

model = get_model("rfdetr-nano")
tracker = ByteTrackTracker()

cap = cv2.VideoCapture("source.mp4")
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Only detections above 30% confidence reach the tracker.
    result = model.infer(frame, confidence=0.3)[0]
    detections = sv.Detections.from_inference(result)
    detections = tracker.update(detections)
cap.release()
```
## Visualization
Visualization renders tracking results for debugging, demos, and qualitative evaluation.
Enable display and annotation options to see results in real time or in saved video.
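For example, to open a live preview with boxes, IDs, and motion trails (flags as listed in the CLI Reference):

```bash
trackers track --source source.mp4 --display \
  --show-boxes --show-ids --show-trajectories
```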
Use supervision annotators to draw results on frames before saving or displaying.
```python
import cv2
import supervision as sv
from inference import get_model
from trackers import ByteTrackTracker

model = get_model("rfdetr-nano")
tracker = ByteTrackTracker()
box_annotator = sv.BoxAnnotator()
label_annotator = sv.LabelAnnotator()

cap = cv2.VideoCapture("source.mp4")
while True:
    ret, frame = cap.read()
    if not ret:
        break
    result = model.infer(frame)[0]
    detections = sv.Detections.from_inference(result)
    detections = tracker.update(detections)
    # Draw boxes and tracker IDs on the frame.
    labels = [f"#{tracker_id}" for tracker_id in detections.tracker_id]
    frame = box_annotator.annotate(frame, detections)
    frame = label_annotator.annotate(frame, detections, labels=labels)
    cv2.imshow("tracking", frame)
    key = cv2.waitKey(1) & 0xFF
    if key in (ord("q"), 27):  # q or ESC quits
        break
cap.release()
cv2.destroyAllWindows()
```
## Source
trackers accepts video files, webcams, RTSP streams, and directories of images as input sources.
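For example (paths, device index, and stream URL are placeholders):

```bash
trackers track --source source.mp4           # video file
trackers track --source 0                    # default webcam
trackers track --source rtsp://<stream-url>  # RTSP stream
trackers track --source frames/              # directory of images
```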
Use opencv-python's `cv2.VideoCapture` to read frames from files, webcams, or streams.
```python
import cv2
import supervision as sv
from inference import get_model
from trackers import ByteTrackTracker

model = get_model("rfdetr-nano")
tracker = ByteTrackTracker()

# 0 selects the default webcam; pass a file path or an RTSP URL
# instead to read from a video or a stream.
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    result = model.infer(frame)[0]
    detections = sv.Detections.from_inference(result)
    detections = tracker.update(detections)
cap.release()
```
## Output
Save tracking results as annotated video files or display them in real time.
Specify an output path to save annotated video.
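For example (paths are placeholders; per the CLI Reference, a directory path saves `output.mp4` inside it, and `--overwrite` permits replacing an existing file):

```bash
trackers track --source source.mp4 --output runs/output.mp4 --overwrite
trackers track --source source.mp4 --output runs/   # saves runs/output.mp4
```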
Use opencv-python's `cv2.VideoWriter` to save annotated frames with full control over codec and frame rate.
```python
import cv2
import supervision as sv
from inference import get_model
from trackers import ByteTrackTracker

model = get_model("rfdetr-nano")
tracker = ByteTrackTracker()
box_annotator = sv.BoxAnnotator()

cap = cv2.VideoCapture("source.mp4")
# Match the writer's resolution and frame rate to the input video.
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = cap.get(cv2.CAP_PROP_FPS)
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
out = cv2.VideoWriter("output.mp4", fourcc, fps, (width, height))

while True:
    ret, frame = cap.read()
    if not ret:
        break
    result = model.infer(frame)[0]
    detections = sv.Detections.from_inference(result)
    detections = tracker.update(detections)
    frame = box_annotator.annotate(frame, detections)
    out.write(frame)

cap.release()
out.release()
```
## CLI Reference
All arguments accepted by the `trackers track` command.
| Argument | Description | Default |
|---|---|---|
| `--source` | Input source. Accepts file paths (`.mp4`, `.avi`), device indices (`0`, `1`), stream URLs (`rtsp://`), or image directories. | — |
| `--output` | Path for output video. If a directory is given, saves as `output.mp4` inside it. | none |
| `--overwrite` | Allow overwriting existing output files. Without this flag, existing files cause an error. | `false` |
| `--model` | Model identifier. Pretrained: `rfdetr-nano`, `rfdetr-small`, `rfdetr-medium`, `rfdetr-large`. Segmentation: `rfdetr-seg-*`. | `rfdetr-nano` |
| `--model.confidence` | Minimum confidence threshold. Lower values increase recall but may add noise. | `0.5` |
| `--model.device` | Compute device. Options: `auto`, `cpu`, `cuda`, `cuda:0`, `mps`. | `auto` |
| `--model.api_key` | Roboflow API key for custom hosted models. | none |
| `--classes` | Comma-separated class names or IDs to track. Example: `person,car` or `0,2`. | all |
| `--tracker` | Tracking algorithm. Options: `bytetrack`, `sort`, `ocsort`. | `bytetrack` |
| `--tracker.lost_track_buffer` | Frames to retain a track without detections. Higher values improve occlusion handling but risk ID drift. | `30` |
| `--tracker.track_activation_threshold` | Minimum confidence to start a new track. Lower values catch more objects but increase false positives. | `0.25` |
| `--tracker.minimum_consecutive_frames` | Consecutive detections required before a track is confirmed. Suppresses spurious detections. | `3` |
| `--tracker.minimum_iou_threshold` | Minimum IoU overlap to match a detection to an existing track. Higher values require tighter alignment. | `0.3` |
| `--display` | Open a live preview window. Press `q` or `ESC` to quit. | `false` |
| `--show-boxes` | Draw bounding boxes around tracked objects. | `true` |
| `--show-masks` | Draw segmentation masks. Only available with `rfdetr-seg-*` models. | `false` |
| `--show-confidence` | Show detection confidence scores in labels. | `false` |
| `--show-labels` | Show class names in labels. | `false` |
| `--show-ids` | Show tracker IDs in labels. | `true` |
| `--show-trajectories` | Draw motion trails showing recent positions of each track. | `false` |
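To build intuition for `--tracker.minimum_iou_threshold`, here is a small self-contained sketch of the intersection-over-union computation that IoU-based matching relies on. This is an illustration of the metric, not the library's internal implementation:

```python
def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2) in pixel coordinates.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 100x100 boxes shifted by half their width overlap with
# IoU = 5000 / 15000, just above the default 0.3 threshold.
print(iou((0, 0, 100, 100), (50, 0, 150, 100)))
```

A detection whose IoU with a track's last box falls below the threshold is not matched to that track, which is why raising the threshold demands tighter frame-to-frame alignment.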