Inference API¶
API reference for inference utilities.
Predictor¶
High-level inference predictor for running detection on images.
- class objdet.inference.predictor.Predictor(model, device='cuda', confidence_threshold=0.25, nms_threshold=0.45)[source]¶
Bases: object
High-level inference predictor for detection models.
Provides a simple interface for loading models and running inference on images or directories.
- Parameters:
  - model (BaseLightningDetector) – Detection model instance.
  - device (str) – Device to run inference on (default: "cuda").
  - confidence_threshold (float) – Minimum confidence for predictions.
  - nms_threshold (float) – IoU threshold for NMS.
Example
>>> predictor = Predictor.from_checkpoint("model.ckpt")
>>> result = predictor.predict("photo.jpg")
>>> print(f"Found {len(result['boxes'])} objects")
- classmethod from_checkpoint(checkpoint_path, model_class=None, device='cuda', **kwargs)[source]¶
Create predictor from a Lightning checkpoint.
- Parameters:
  - checkpoint_path (str) – Path to the Lightning checkpoint file.
  - model_class – Detection model class to load (optional).
  - device (str) – Device to run inference on (default: "cuda").
  - **kwargs – Additional keyword arguments for the predictor.
- Return type:
  Predictor
- Returns:
  Configured Predictor instance.
- predict(image, return_image=False)[source]¶
Run inference on a single image.
- Parameters:
  - image – Input image (e.g. a file path).
  - return_image (bool) – Whether to also return the image tensor.
- Return type:
  dict
- Returns:
  Prediction dict with boxes, labels, scores. If return_image is True, returns a tuple of (prediction, image_tensor).
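The prediction dict can be post-filtered in plain Python. The helper below is an illustrative sketch (not part of the library), assuming "boxes", "labels", and "scores" are parallel sequences as described above:

```python
# Minimal sketch: filter a prediction dict by a score cutoff.
# Assumes "boxes", "labels", and "scores" are parallel sequences,
# as in the dict returned by predict().

def filter_by_score(prediction, min_score=0.5):
    """Keep only detections whose score meets the cutoff."""
    keep = [i for i, s in enumerate(prediction["scores"]) if s >= min_score]
    return {
        "boxes": [prediction["boxes"][i] for i in keep],
        "labels": [prediction["labels"][i] for i in keep],
        "scores": [prediction["scores"][i] for i in keep],
    }

# Example with hand-made detections (boxes are x1, y1, x2, y2):
pred = {
    "boxes": [[0, 0, 10, 10], [5, 5, 20, 20]],
    "labels": [1, 2],
    "scores": [0.9, 0.3],
}
print(filter_by_score(pred, min_score=0.5))
```

This applies a stricter cutoff after inference without re-running the model, which is cheaper than re-creating the Predictor with a higher confidence_threshold.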
Usage Example¶
from objdet.inference import Predictor
# Create from checkpoint
predictor = Predictor.from_checkpoint(
checkpoint_path="model.ckpt",
device="cuda",
confidence_threshold=0.25,
)
# Single image inference
result = predictor.predict("image.jpg")
print(f"Found {len(result['boxes'])} objects")
# Batch inference
results = predictor.predict_batch(["img1.jpg", "img2.jpg"], batch_size=8)
# Directory inference
results_dict = predictor.predict_directory("./images", extensions=(".jpg", ".png"))
SlicedInference (SAHI)¶
Slicing Aided Hyper Inference for detecting small objects in large images.
- class objdet.inference.sahi_wrapper.SlicedInference(predictor, slice_height=640, slice_width=640, overlap_ratio=0.2, merge_method='nms', nms_threshold=0.5, include_full_image=True)[source]¶
Bases: object
Slicing Aided Hyper Inference for large images.
SAHI splits large images into overlapping tiles, runs detection on each tile, and merges the results using NMS or WBF.
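The tiling step above can be sketched in a few lines of plain Python. The function below is an illustration of the idea only (not the wrapper's internal API): it computes the top-left offsets of overlapping slices that cover an image.

```python
# Illustrative sketch of SAHI-style tiling (not the library's internal code):
# compute top-left (x, y) offsets of overlapping slices covering an image.

def slice_offsets(image_w, image_h, slice_w=640, slice_h=640, overlap_ratio=0.2):
    """Return (x, y) top-left corners of slices with the given overlap."""
    stride_x = int(slice_w * (1 - overlap_ratio))
    stride_y = int(slice_h * (1 - overlap_ratio))
    xs = list(range(0, max(image_w - slice_w, 0) + 1, stride_x))
    ys = list(range(0, max(image_h - slice_h, 0) + 1, stride_y))
    # Make sure the right/bottom edges are always covered.
    if xs[-1] + slice_w < image_w:
        xs.append(image_w - slice_w)
    if ys[-1] + slice_h < image_h:
        ys.append(image_h - slice_h)
    return [(x, y) for y in ys for x in xs]

# A 1280x1280 image with 640px slices and 20% overlap yields a 3x3 grid:
print(len(slice_offsets(1280, 1280)))
```

Each offset defines one crop that is passed to the base predictor; the per-slice detections are then shifted back by their offsets before merging.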
- Parameters:
  - predictor (Predictor) – The predictor to use for inference on slices.
  - slice_height (int) – Height of each slice in pixels.
  - slice_width (int) – Width of each slice in pixels.
  - overlap_ratio (float) – Overlap between adjacent slices (0-1).
  - merge_method (str) – How to merge overlapping predictions ("nms" or "wbf").
  - nms_threshold (float) – IoU threshold for merging.
  - include_full_image (bool) – Whether to also run on the full image.
Example
>>> sahi = SlicedInference(predictor, slice_height=640, slice_width=640)
>>> results = sahi.predict("aerial_image.jpg")
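The "nms" merge method can be illustrated with a minimal greedy NMS over the combined per-slice detections. This is a sketch of the technique, not the wrapper's actual implementation:

```python
# Illustrative greedy NMS (a sketch of the "nms" merge method, not the
# wrapper's actual code). Boxes are (x1, y1, x2, y2).

def iou(a, b):
    """Intersection-over-union of two boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Keep highest-scoring boxes, dropping overlaps above the threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
            keep.append(i)
    return keep

# Two near-duplicate detections of one object (e.g. from adjacent slices)
# plus one distinct box; NMS keeps indices of the survivors:
boxes = [[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))
```

Duplicates arise precisely in the overlap regions between adjacent slices, which is why nms_threshold governs the merge step; "wbf" (weighted boxes fusion) averages overlapping boxes instead of discarding them.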
Usage Example¶
from objdet.inference import Predictor, SlicedInference
# Create base predictor
predictor = Predictor.from_checkpoint("model.ckpt")
# Wrap with sliced inference
sahi = SlicedInference(
predictor=predictor,
slice_height=640,
slice_width=640,
overlap_ratio=0.2,
merge_method="nms", # or "wbf"
include_full_image=True,
)
# Run sliced inference on large image
result = sahi.predict("large_satellite_image.jpg")
Parameters:
- predictor: Base Predictor instance for running inference on slices
- slice_height (int): Height of each slice (default: 640)
- slice_width (int): Width of each slice (default: 640)
- overlap_ratio (float): Overlap between adjacent slices (default: 0.2)
- merge_method (str): Method for merging overlapping predictions, "nms" or "wbf" (default: "nms")
- nms_threshold (float): IoU threshold for NMS merging (default: 0.5)
- include_full_image (bool): Also run inference on the full image (default: True)