sahi

Framework agnostic sliced/tiled inference + interactive ui + error analysis plots


sahi - v0.5.1

Published by fcakyon over 3 years ago

  • add predict_fiftyone script to perform sliced/standard inference with yolov5/mmdetection models and visualize incorrect predictions in the fiftyone ui (a hypothetical invocation is sketched after this list)

(demo GIF: sahi predictions visualized in the fiftyone ui)

  • fix mot utils (#152)
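A hypothetical invocation of the new script; every flag name below is an assumption modeled on scripts/predict.py from v0.4.2 and on the launch_fiftyone_app arguments, not a verified interface:

# all flag names below are assumed, not verified against the actual script
python scripts/predict_fiftyone.py --model_type yolov5 --model_path path/to/model --coco_json_path dataset.json --coco_image_dir images/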
sahi - v0.5.0

Published by fcakyon over 3 years ago

  • add check for image size in slice_image (#147); a minimal slice_image call is sketched after this list
  • refactor prediction output (#148)
  • fix slice_image in readme (#149)
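For context, a minimal slice_image call as documented in the readme of that era; the tile size and overlap ratios are illustrative:

from sahi.slicing import slice_image

# slice a large image into 256x256 tiles with 20% overlap between neighbors
slice_image_result = slice_image(
    image="path/to/image.jpg",
    output_file_name="sliced",
    output_dir="sliced_images/",
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)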

refactor prediction output

# perform standard or sliced prediction
result = get_prediction(image, detection_model)
result = get_sliced_prediction(image, detection_model)

# export prediction visuals to "demo_data/"
result.export_visuals(export_dir="demo_data/")

# convert predictions to coco annotations
result.to_coco_annotations()

# convert predictions to coco predictions
result.to_coco_predictions(image_id=1)

# convert predictions to [imantics](https://github.com/jsbroks/imantics) annotation format
result.to_imantics_annotations()

# convert predictions to [fiftyone](https://github.com/voxel51/fiftyone) detection format
result.to_fiftyone_detections()
  • check more examples in the colab notebooks:

YOLOv5 + SAHI demo (colab notebook)

MMDetection + SAHI demo (colab notebook)

sahi - v0.4.8

Published by fcakyon over 3 years ago

  • update mot utils (#143)
  • add fiftyone utils (#144)

FiftyOne Utilities

from sahi.utils.fiftyone import launch_fiftyone_app
# launch fiftyone app:
session = launch_fiftyone_app(coco_image_dir, coco_json_path)
# close fiftyone app:
session.close()

from sahi import get_sliced_prediction
# perform sliced prediction
result = get_sliced_prediction(
    image,
    detection_model,
    slice_height = 256,
    slice_width = 256,
    overlap_height_ratio = 0.2,
    overlap_width_ratio = 0.2
)
# convert first object into fiftyone detection format
object_prediction = result["object_prediction_list"][0]
fiftyone_detection = object_prediction.to_fiftyone_detection(image_height=720, image_width=1280)
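Continuing from the block above, the converted detection can be attached to a fiftyone sample for viewing; a minimal sketch using the standard fiftyone API, where the "predictions" field name and dataset name are arbitrary:

import fiftyone as fo

# wrap the converted detection in a sample and register it in a dataset
sample = fo.Sample(filepath="path/to/image.jpg")
sample["predictions"] = fo.Detections(detections=[fiftyone_detection])

dataset = fo.Dataset(name="sahi_demo")
dataset.add_sample(sample)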
sahi - v0.4.6

Published by fcakyon over 3 years ago

new feature

  • add more mot utils (#133)
  • to create MOT challenge formatted ground truth files, import the required classes:
from sahi.utils.mot import MotAnnotation, MotFrame, MotVideo
  • init video:
mot_video = MotVideo(name="sequence_name")
  • init first frame:
mot_frame = MotFrame()
  • add annotations to frame:
mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height])
)

mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height])
)
  • add frame to video:
mot_video.add_frame(mot_frame)
  • export in MOT challenge format:
mot_video.export(export_dir="mot_gt", type="gt")
  • your MOT challenge formatted ground truth files are ready under the mot_gt/sequence_name/ folder.
  • you can customize the tracker while initializing the mot video object:
tracker_params = {
  'max_distance_between_points': 30,
  'min_detection_threshold': 0,
  'hit_inertia_min': 10,
  'hit_inertia_max': 12,
  'point_transience': 4,
}
# for details: https://github.com/tryolabs/norfair/tree/master/docs#arguments

mot_video = MotVideo(tracker_kwargs=tracker_params)
  • you can skip automatic track id generation and directly provide the track ids of the annotations:
# create annotations with track ids:
mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height], track_id=1)
)

mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height], track_id=2)
)

# add frame to video:
mot_video.add_frame(mot_frame)

# export in MOT challenge format without automatic track id generation:
mot_video.export(export_dir="mot_gt", type="gt", use_tracker=False)
  • you can overwrite results in an already present directory by adding exist_ok=True:
mot_video.export(export_dir="mot_gt", type="gt", exist_ok=True)
  • to create MOT challenge formatted tracker output files, import the required classes:
from sahi.utils.mot import MotAnnotation, MotFrame, MotVideo
  • init video by providing video name:
mot_video = MotVideo(name="sequence_name")
  • init first frame:
mot_frame = MotFrame()
  • add tracker outputs to frame:
mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height], track_id=1)
)

mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height], track_id=2)
)
  • add frame to video:
mot_video.add_frame(mot_frame)
  • export in MOT challenge format:
mot_video.export(export_dir="mot_test", type="test")
  • your MOT challenge formatted tracker output file is ready as mot_test/sequence_name.txt.
  • you can enable the tracker and directly provide object detector outputs:
# add object detector outputs:
mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height])
)

mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height])
)

# add frame to video:
mot_video.add_frame(mot_frame)

# export in MOT challenge format by applying a kalman based tracker:
mot_video.export(export_dir="mot_gt", type="gt", use_tracker=True)
  • you can customize the tracker while initializing the mot video object:
tracker_params = {
  'max_distance_between_points': 30,
  'min_detection_threshold': 0,
  'hit_inertia_min': 10,
  'hit_inertia_max': 12,
  'point_transience': 4,
}
# for details: https://github.com/tryolabs/norfair/tree/master/docs#arguments

mot_video = MotVideo(tracker_kwargs=tracker_params)
  • you can overwrite results in an already present directory by adding exist_ok=True:
mot_video.export(export_dir="mot_gt", type="gt", exist_ok=True)

documentation

  • update coco docs (#134)
  • add colab links into readme (#135)

YOLOv5 + SAHI demo (colab notebook)

MMDetection + SAHI demo (colab notebook)

bug fixes

  • fix demo notebooks (#136)
sahi - v0.4.5

Published by fcakyon over 3 years ago

enhancement

  • add colab demo support (#127)
  • add warning for image files without suffix (#129)
  • separate mmdet/yolov5 utils (#130)
sahi - v0.4.4

Published by fcakyon over 3 years ago

documentation

  • update installation (#118)
  • add details for coco2yolov5 usage (#120)

bug fixes

  • fix typo (#117)
  • update coco2yolov5.py (#115)

breaking changes

  • drop python 3.6 support (#123)
sahi - v0.4.3

Published by fcakyon over 3 years ago

refactorize postprocess (#109)

  • specify the postprocess type to apply over sliced predictions as --postprocess_type UNIONMERGE or --postprocess_type NMS
  • specify the postprocess match metric as --match_metric IOS (intersection over smaller area) or --match_metric IOU (intersection over union)
  • specify the postprocess match threshold as --match_thresh 0.5
  • add the --class_agnostic argument to ignore category ids of the predictions during postprocess (merging/nms); a full invocation is sketched after this list
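Combining the new flags with the predict.py entrypoint shown in the v0.4.2 notes below; model path and source are placeholders:

python scripts/predict.py --model_type yolov5 --source image/file/or/folder --model_path path/to/model --postprocess_type NMS --match_metric IOS --match_thresh 0.5 --class_agnostic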

export visuals with gt (#107)

  • export visuals with predicted + gt annotations into the visuals_with_gt folder when coco_file_path is provided (see the sketch after this list)
  • keep source folder structure when exporting results
  • add from_coco_annotation_dict classmethod to ObjectAnnotation
  • remove unused imports/classes/parameters
  • better type hints
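A sketch of how the gt-overlay export might be triggered; the --coco_file_path flag name is inferred from the bullet above and is an assumption, not a verified argument:

# --coco_file_path is an assumed flag name, inferred from the release note above
python scripts/predict.py --model_type yolov5 --source images/ --model_path path/to/model --coco_file_path dataset_coco.json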
sahi - v0.4.2

Published by fcakyon over 3 years ago

CLI usage:

python scripts/predict.py --model_type yolov5 --source image/file/or/folder --model_path path/to/model
sahi - v0.3.19

Published by fcakyon over 3 years ago

  • refactorize slicing, faster coco indexing (#95)
sahi - v0.3.18

Published by fcakyon over 3 years ago

  • utilize ignore_negative_samples property (#90):

Filter out images that do not contain any annotations

from sahi.utils.coco import Coco
# set ignore_negative_samples to False if you want images without annotations to be kept in json and yolov5 exports
coco = Coco.from_coco_dict_or_path("coco.json", ignore_negative_samples=True)
  • fix typo in get_area_filtered_coco (#89)
sahi - v0.3.17

Published by fcakyon over 3 years ago

  • improve .stats (#85):
from sahi.utils.coco import Coco

# init Coco object
coco = Coco.from_coco_dict_or_path("coco.json")

# get dataset stats
coco.stats
{
  'num_images': 6471,
  'num_annotations': 343204,
  'num_categories': 2,
  'num_negative_images': 0,
  'num_images_per_category': {'human': 5684, 'vehicle': 6323},
  'num_annotations_per_category': {'human': 106396, 'vehicle': 236808},
  'min_num_annotations_in_image': 1,
  'max_num_annotations_in_image': 902,
  'avg_num_annotations_in_image': 53.037243084530985,
  'min_annotation_area': 3,
  'max_annotation_area': 328640,
  'avg_annotation_area': 2448.405738278109,
  'min_annotation_area_per_category': {'human': 3, 'vehicle': 3},
  'max_annotation_area_per_category': {'human': 72670, 'vehicle': 328640},
}

  • add category based annotation area filtering (#86):
# filter out images with separate area intervals per category
intervals_per_category = {
  "human": {"min": 20, "max": 10000},
  "vehicle": {"min": 50, "max": 15000},
}
area_filtered_coco = coco.get_area_filtered_coco(intervals_per_category=intervals_per_category)
sahi - v0.3.15

Published by fcakyon over 3 years ago

  • add get_area_filtered_coco method to Coco class (#75):
from sahi.utils.coco import Coco
from sahi.utils.file import save_json

# init Coco object by specifying the coco dataset path
coco = Coco.from_coco_dict_or_path("coco.json")

# filter out images that contain annotations with smaller area than 50
area_filtered_coco = coco.get_area_filtered_coco(min=50)

# filter out images that contain annotations with smaller area than 50 and larger area than 10000
area_filtered_coco = coco.get_area_filtered_coco(min=50, max=10000)

# export filtered COCO dataset
save_json(area_filtered_coco.json, "area_filtered_coco.json")
  • faster yolov5 conversion with mp argument (#80):
from sahi.utils.coco import Coco

# multiprocess support
if __name__ == "__main__":
  coco = Coco.from_coco_dict_or_path(
    "coco.json",
    image_dir="coco_images/",
    mp=True
  )
  coco.export_as_yolov5(
    output_dir="output/folder/dir",
    train_split_rate=0.85,
    mp=True
  )
  • update torch and mmdet versions in workflows (#79)
  • remove optional dependencies from conda (#78)
sahi - v0.3.14

Published by fcakyon over 3 years ago

  • add stats property for Coco class (#70)
from sahi.utils.coco import Coco

# init Coco object
coco = Coco.from_coco_dict_or_path("coco.json")

# get dataset stats
coco.stats
{
    'avg_annotation_area': 2448.405738278109,
    'avg_num_annotations_in_image': 53.037243084530985,
    'max_annotation_area': 328640,
    'max_num_annotations_in_image': 902,
    'min_annotation_area': 3,
    'min_num_annotations_in_image': 1,
    'num_annotations': 343204,
    'num_annotations_per_category': {
        'human': 106396,
        'vehicle': 236808
    },
    'num_categories': 2,
    'num_images': 6471,
    'num_images_per_category': {
        'human': 5684,
        'vehicle': 6323
    }
}

sahi - v0.3.12

Published by fcakyon over 3 years ago

  • improve coco to yolov5 conversion (#68)
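For reference, the conversion this touches is driven through Coco.export_as_yolov5, as in the v0.3.15 notes above; paths and split rate are illustrative:

from sahi.utils.coco import Coco

# convert a coco dataset to yolov5 format with an 85/15 train/val split
coco = Coco.from_coco_dict_or_path("coco.json", image_dir="coco_images/")
coco.export_as_yolov5(output_dir="yolov5_dataset/", train_split_rate=0.85)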
sahi - v0.3.11

Published by fcakyon over 3 years ago

  • fix coco subsampling and category updating (#64); a subsampling sketch follows this list
  • increase test coverage (#64)
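A minimal subsampling sketch, assuming the Coco.get_subsampled_coco helper from the project docs; the ratio is illustrative:

from sahi.utils.coco import Coco
from sahi.utils.file import save_json

# keep every 10th image of the dataset
coco = Coco.from_coco_dict_or_path("coco.json")
subsampled_coco = coco.get_subsampled_coco(subsample_ratio=10)
save_json(subsampled_coco.json, "subsampled_coco.json")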
sahi - v0.3.10

Published by fcakyon over 3 years ago

  • fix yolo export (#62)
sahi - v0.3.9

Published by fcakyon over 3 years ago

  • faster Coco merging and category updating (#59)
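A minimal merge sketch, assuming the Coco.merge API from the project docs; file names are placeholders:

from sahi.utils.coco import Coco
from sahi.utils.file import save_json

# merge the second coco dataset into the first one
coco_1 = Coco.from_coco_dict_or_path("coco1.json")
coco_2 = Coco.from_coco_dict_or_path("coco2.json")
coco_1.merge(coco_2)
save_json(coco_1.json, "merged_coco.json")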
sahi - v0.3.8

Published by fcakyon over 3 years ago

  • increase coco split and yolo export speeds (#58)
  • reduce json export size (#57)
  • make multiprocess call optional (#55)
sahi - v0.3.7

Published by fcakyon over 3 years ago

  • faster coco dataset indexing (#53)
  • category remapping feature for coco datasets (#53); see the remapping sketch after this list
  • fix export_as_yolov5 paths (#52)
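A remapping sketch, assuming the Coco.update_categories helper from the project docs; the desired_name2id mapping is illustrative:

from sahi.utils.coco import Coco

# remap category names and ids to a desired mapping
coco = Coco.from_coco_dict_or_path("coco.json")
coco.update_categories(desired_name2id={"human": 1, "vehicle": 2})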
sahi - v0.3.6

Published by fcakyon over 3 years ago

  • make some dependencies optional
  • fix some coco utils