Fair cross-framework evaluation
Bring your own inference. Export once. Evaluate fairly.
YOLOZU evaluates vision model outputs through a stable predictions.json interface contract instead of tying reports to one training or inference framework. It is built for teams comparing PyTorch, ONNX Runtime, TensorRT, and custom inference backends without silent metric drift.
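As an illustration of what an exported prediction artifact could look like, here is a minimal sketch. The field names (`image`, `detections`, `category_id`, `bbox`, `score`) are assumptions for illustration only, not the authoritative YOLOZU schema; consult the predictions schema before wiring anything in.

```python
import json

# Hypothetical predictions.json writer. The record shape below is an
# illustrative assumption, not the published YOLOZU contract.
predictions = [
    {
        "image": "val/000001.jpg",
        "detections": [
            {"category_id": 3, "bbox": [48.0, 60.5, 120.0, 88.0], "score": 0.91},
        ],
    }
]

with open("predictions.json", "w") as f:
    json.dump(predictions, f, indent=2)
```

The point of the contract is that every backend, whether PyTorch, ONNX Runtime, or TensorRT, emits the same artifact shape, so reports stay comparable.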
Use cases
YOLOZU is the evaluation lane for teams that already have, or can export, prediction artifacts and need comparable reports across model families or runtime backends.
Compare outputs from Ultralytics, RT-DETR, Detectron2, MMDetection, or custom inference through the same prediction artifact path.
Qualify PyTorch, ONNX Runtime, TensorRT, TorchScript, and adapter targets while labeling real, placeholder, skipped, and dry-run support clearly.
Keep dataset split, metric protocol, export settings, and artifact paths explicit enough for review and follow-up debugging.
Use yolozu guide and yolozu doctor --explain when a JSON-only diagnostic is too dense for a first run.
Quickstart
The shortest path checks the environment, runs an instance-segmentation demo, and writes both a JSON report and PNG overlays.
python3 -m pip install -U yolozu
yolozu doctor --explain
yolozu demo instance-seg --run-dir reports/quickstart_instance_seg --progress
The run writes:
reports/quickstart_instance_seg/instance_seg_demo_report.json
reports/quickstart_instance_seg/overlays/*.png
For next steps, run yolozu guide or yolozu guide --goal evaluate.
Evaluation flow
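To make the flow concrete, here is a toy sketch of one evaluation step: greedy IoU matching of score-sorted predicted boxes against ground truth. This is not YOLOZU's actual implementation or protocol, only an illustration of the kind of step that becomes comparable across backends once predictions share one artifact format.

```python
# Toy sketch only: greedy IoU matching, NOT YOLOZU's actual metric code.

def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def match(preds, gts, thr=0.5):
    """Greedily match (box, score) predictions to ground-truth boxes at IoU >= thr."""
    unmatched = list(range(len(gts)))
    tp = 0
    for box, _score in sorted(preds, key=lambda p: -p[1]):
        best = max(unmatched, key=lambda i: iou(box, gts[i]), default=None)
        if best is not None and iou(box, gts[best]) >= thr:
            tp += 1
            unmatched.remove(best)
    return tp, len(preds) - tp, len(unmatched)  # TP, FP, FN

preds = [([0, 0, 10, 10], 0.9), ([20, 20, 30, 30], 0.4)]
gts = [[1, 1, 10, 10]]
print(match(preds, gts))  # → (1, 1, 0)
```

Pinning the protocol (IoU threshold, matching order) in one place is what prevents the silent metric drift that creeps in when each backend ships its own evaluator.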
Resources
These are the canonical public surfaces for installation, source review, archived releases, and the manual.
Review the Apache-2.0 repository, file issues, and inspect the current README and documentation map.
Install the current Python package from PyPI and use the CLI entrypoint yolozu.
Use the archived software and manual records when you need a stable citation trail.
Start with the predictions schema when wiring a new model, runtime, or export adapter into the evaluation lane.
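A new adapter usually reduces to one conversion function from a backend's raw outputs to contract records. The record shape below is an assumption for illustration; the authoritative shape is whatever the predictions schema defines.

```python
# Hypothetical adapter sketch: map one backend's raw detections into a
# contract-style record. Field names are illustrative assumptions, not
# the published YOLOZU schema.

def to_record(image_path, boxes, scores, labels):
    """boxes: [x1, y1, x2, y2] lists; scores and labels: parallel lists."""
    return {
        "image": image_path,
        "detections": [
            {"category_id": int(l), "bbox": [float(v) for v in b], "score": float(s)}
            for b, s, l in zip(boxes, scores, labels)
        ],
    }

record = to_record("val/0001.jpg", [[10, 20, 50, 80]], [0.87], [2])
```

Validating the emitted records against the schema before running an evaluation is the cheapest way to catch adapter bugs early.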
License
YOLOZU is published under Apache-2.0. The primary lane is framework-agnostic prediction validation and evaluation; optional adapters and research paths are labeled separately so production users can choose the stable surface first.