Geometric Computer Vision Library for Spatial AI
APACHE-2.0 License
`viewcode` to github by @johnnv1 in https://github.com/kornia/kornia/pull/2727
`extract_tensor_patches` to work with partial patches cases by @johnnv1 in https://github.com/kornia/kornia/pull/2735
`2.1.2` by @johnnv1 in https://github.com/kornia/kornia/pull/2742
`test` -> `tests` by @johnnv1 in https://github.com/kornia/kornia/pull/2743
`kornia.testing` by @johnnv1 in https://github.com/kornia/kornia/pull/2745
`torch==2.2.0` by @johnnv1 in https://github.com/kornia/kornia/pull/2772
`benchmarks/` by @johnnv1 in https://github.com/kornia/kornia/pull/2777
`ruff 0.2.1` config by @johnnv1 in https://github.com/kornia/kornia/pull/2784
`AugmentationSequential` by @johnnv1 in https://github.com/kornia/kornia/pull/2740
`RandomLinearIllumination` and `RandomLinearCornerIll…` by @johnnv1 in https://github.com/kornia/kornia/pull/2827
Full Changelog: https://github.com/kornia/kornia/commits/v0.7.2
Published by edgarriba 10 months ago
`sphinx==7.0.1` by @johnnv1 in https://github.com/kornia/kornia/pull/2518
`grid_sample` from F without importing it by @antoinebrl in https://github.com/kornia/kornia/pull/2532
`importlib.util` import by @Avasam in https://github.com/kornia/kornia/pull/2558
`torchvision` from docs deps by @johnnv1 in https://github.com/kornia/kornia/pull/2591
`torch.jit.script` support for `warp_affine` by @balbok0 in https://github.com/kornia/kornia/pull/2588
`cv2` as dev dependency by @johnnv1 in https://github.com/kornia/kornia/pull/2593
`average_endpoint_error` metric by @johnnv1 in https://github.com/kornia/kornia/pull/2620
`scipy` from dev dependencies by @johnnv1 in https://github.com/kornia/kornia/pull/2634
`depth_to_3d` by @johnnv1 in https://github.com/kornia/kornia/pull/2636
`focal.py` by @omerferhatt in https://github.com/kornia/kornia/pull/2654
`kornia.nerf` improvements by @edgarriba in https://github.com/kornia/kornia/pull/2661
`x` requirements by @johnnv1 in https://github.com/kornia/kornia/pull/2670
`torch.ones` not needed for kernel_values by @kunaltyagi in https://github.com/kornia/kornia/pull/2678
Full Changelog: https://github.com/kornia/kornia/compare/v0.7.0...v0.7.1
Published by edgarriba about 1 year ago
In this release we have added a new `Image` API as a placeholder to support a more generic multi-backend API. You can export/import from files, numpy, and dlpack.
>>> # from a torch.tensor
>>> data = torch.randint(0, 255, (3, 4, 5), dtype=torch.uint8) # CxHxW
>>> pixel_format = PixelFormat(
... color_space=ColorSpace.RGB,
... bit_depth=8,
... )
>>> layout = ImageLayout(
... image_size=ImageSize(4, 5),
... channels=3,
... channels_order=ChannelsOrder.CHANNELS_FIRST,
... )
>>> img = Image(data, pixel_format, layout)
>>> assert img.channels == 3
We have added the `ObjectDetector`, which includes the RT-DETR model by default. The detection pipeline is fully configurable by supplying a pre-processor, a model, and a post-processor. Example usage is shown below.
from io import BytesIO
import cv2
import numpy as np
import requests
import torch
from PIL import Image
import matplotlib.pyplot as plt
from kornia.contrib.models.rt_detr import RTDETR, DETRPostProcessor, RTDETRConfig
from kornia.contrib.object_detection import ObjectDetector, ResizePreProcessor
model_type = "hgnetv2_x" # also available: resnet18d, resnet34d, resnet50d, resnet101d, hgnetv2_l
checkpoint = f"https://github.com/kornia/kornia/releases/download/v0.7.0/rtdetr_{model_type}.ckpt"
config = RTDETRConfig(model_type, 80, checkpoint=checkpoint)
model = RTDETR.from_config(config).eval()
detector = ObjectDetector(model, ResizePreProcessor(640), DETRPostProcessor(0.3))
url = "https://github.com/kornia/data/raw/main/soccer.jpg"
img = Image.open(BytesIO(requests.get(url).content))
img = np.asarray(img, dtype=np.float32) / 255
img_pt = torch.from_numpy(img).permute(2, 0, 1)
detection = detector.predict([img_pt])
for cls_score_xywh in detection[0].numpy():
    class_id = int(cls_score_xywh[0])
    score = cls_score_xywh[1]
    x, y, w, h = cls_score_xywh[2:].round().astype(int)
    cv2.rectangle(img, (x, y, w, h), (255, 0, 0), 3)
    text = f"{class_id}, {score:.2f}"
    font = cv2.FONT_HERSHEY_SIMPLEX
    (text_width, text_height), _ = cv2.getTextSize(text, font, 1, 2)
    cv2.rectangle(img, (x, y - text_height, text_width, text_height), (255, 0, 0), cv2.FILLED)
    cv2.putText(img, text, (x, y), font, 1, (255, 255, 255), 2)
plt.imshow(img)
plt.show()
As part of the `kornia.contrib` module, we started building a `models` module where Deep Learning models for Computer Vision (Semantic Segmentation, Object Detection, etc.) will live.
From an abstract base class `ModelBase`, we will implement and make available these deep learning models (e.g. Segment Anything). Similarly, we provide standard structures to be used with the results of these models, such as `SegmentationResults`.
The idea is that we can abstract and standardize how these models behave with our high-level APIs, for example when interacting with the Visual Prompter backend (today Segment Anything is available).
`ModelBase` provides methods for loading checkpoints (`load_checkpoint`) and compiling itself via the `torch.compile` API, and we plan to extend it according to the needs of the community.
Within this release, we are also making other models available to be used, like `RT_DETR` and `tiny_vit`.
Example of using these abstractions to implement a model:
# Each model should be a submodule inside `kornia.contrib.models`, and the model class itself
# will be exposed under this `models` module.
from dataclasses import dataclass
from enum import Enum

from kornia.contrib.models.base import ModelBase
from kornia.contrib.models.structures import SegmentationResults


class MyModelType(Enum):
    """Map the model types."""
    a = 0
    ...


@dataclass
class MyModelConfig:
    model_type: str | int | MyModelType | None = None
    checkpoint: str | None = None
    ...


class MyModel(ModelBase[MyModelConfig]):
    def __init__(...) -> None:
        ...

    @staticmethod
    def from_config(config: MyModelConfig) -> MyModel:
        """Build the model based on the config."""
        ...

    def forward(...) -> SegmentationResults:
        ...
In most object detection models, non-maximum suppression (NMS) is necessary to remove overlapping and near-duplicate bounding boxes. This post-processing algorithm has high latency, preventing object detectors from reaching real-time speed. DETR is a new class of detectors that eliminates the NMS step by using a transformer decoder to directly predict bounding boxes. RT-DETR builds on Deformable DETR to achieve real-time speed on server-class GPUs by using an efficient backbone. More details can be seen here.
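For intuition, the greedy NMS step that DETR-style detectors avoid can be sketched in a few lines of plain Python. This is an illustrative sketch of the classic algorithm, not kornia's or RT-DETR's implementation; the `(x1, y1, x2, y2)` box format and the `iou` helper are chosen here just for illustration:

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedily keep the highest-scoring box and drop boxes that overlap it."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        # O(N^2) serial pairwise suppression: this loop is the post-processing
        # latency that detectors predicting a fixed box set can skip entirely.
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

For example, two heavily overlapping boxes collapse to the higher-scoring one, while a distant box survives.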
TinyViT is an efficient and high-performing transformer model for images. It achieves a top-1 accuracy of 84.8% on ImageNet-1k with only 21M parameters. See TinyViT for more information.
MobileSAM replaces the heavy ViT-H backbone in the original SAM with TinyViT, which is more than 100 times smaller in terms of parameters and around 40 times faster in terms of inference speed. See MobileSAM for more details.
To use MobileSAM, simply specify `"mobile_sam"` in the `SamConfig`:
from kornia.contrib.visual_prompter import VisualPrompter
from kornia.contrib.models.sam import SamConfig
prompter = VisualPrompter(SamConfig("mobile_sam", pretrained=True))
Added the `LightGlue`-based matcher to the kornia API. This is based on the original code from the paper "LightGlue: Local Feature Matching at Light Speed". See [LSP23] for more details.
The LightGlue algorithm won a cash prize in the Image Matching Challenge 2023 @ CVPR23: https://www.kaggle.com/competitions/image-matching-challenge-2023/overview
See a working example integrating with COLMAP: https://github.com/kornia/kornia/discussions/2469
New `kornia.sensors` module to interface with sensors like Camera, IMU, GNSS, etc.
We added `CameraModel`, `PinholeModel`, and `CameraModelBase` for now.
Usage example:
Define a `CameraModel`:
>>> # Pinhole Camera Model
>>> cam = CameraModel(ImageSize(480, 640), CameraModelType.PINHOLE, torch.Tensor([328., 328., 320., 240.]))
>>> # Brown Conrady Camera Model
>>> cam = CameraModel(ImageSize(480, 640), CameraModelType.BROWN_CONRADY, torch.Tensor([1.0, 1.0, 1.0, 1.0,
... 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]))
>>> # Kannala Brandt K3 Camera Model
>>> cam = CameraModel(ImageSize(480, 640), CameraModelType.KANNALA_BRANDT_K3, torch.Tensor([1.0, 1.0, 1.0,
... 1.0, 1.0, 1.0, 1.0, 1.0]))
>>> # Orthographic Camera Model
>>> cam = CameraModel(ImageSize(480, 640), CameraModelType.ORTHOGRAPHIC, torch.Tensor([328., 328., 320., 240.]))
>>> cam.params
tensor([328., 328., 320., 240.])
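For reference, the PINHOLE parameters `[fx, fy, cx, cy]` relate a 3D camera-frame point to a pixel via the standard projection equations. A plain-Python sketch of that math (illustrative only, not the kornia API):

```python
def project_pinhole(point, params):
    """Project a 3D camera-frame point with pinhole intrinsics [fx, fy, cx, cy]."""
    fx, fy, cx, cy = params
    x, y, z = point
    # perspective division by depth, then scale by the focal lengths
    # and shift by the principal point (cx, cy)
    return (fx * x / z + cx, fy * y / z + cy)
```

A point on the optical axis lands on the principal point: `project_pinhole((0.0, 0.0, 1.0), [328., 328., 320., 240.])` gives `(320.0, 240.0)`.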
New `kornia.geometry.solvers` submodule for geometric vision solvers.
This is part of an upgrade of `find_fundamental` to support the `7POINT` algorithm.
Added `kornia.utils.print_image` API for printing any given image tensor or image path to the terminal.
>>> kornia.utils.print_image("panda.jpg")
`TestColorJiggleGen` by @johnnv1 in https://github.com/kornia/kornia/pull/2341
`geometry.conversions` by @johnnv1 in https://github.com/kornia/kornia/pull/2357
`kornia/tutorials` repo by @johnnv1 in https://github.com/kornia/kornia/pull/2366
`setup-python@v4` on env setup CI by @johnnv1 in https://github.com/kornia/kornia/pull/2380
`geometry.conversions` by @johnnv1 in https://github.com/kornia/kornia/pull/2424
`kornia.geometry.conversion` by @pri1311 in https://github.com/kornia/kornia/pull/2437
`alpha` of focal loss by @qingpeng9802 in https://github.com/kornia/kornia/pull/2393
`disallow_untyped_defs` on mypy by @johnnv1 in https://github.com/kornia/kornia/pull/2252
`solvers` Submodule by @anandhupvr in https://github.com/kornia/kornia/pull/2465
`kornia.sensors` docs update by @cjpurackal in https://github.com/kornia/kornia/pull/2477
`KORNIA_CHECK_SAME_DEVICE` cuda test by @johnnv1 in https://github.com/kornia/kornia/pull/2479
`from_matrix` for Se3 and Se2 by @cjpurackal in https://github.com/kornia/kornia/pull/2473
`PinholeModel` camera model by @edgarriba in https://github.com/kornia/kornia/pull/2492
Full Changelog: https://github.com/kornia/kornia/compare/v0.6.12...v0.7.0
Published by edgarriba over 1 year ago
In this release we have added a new `ImagePrompter` API that lays the foundation for the task of querying geometric information from images, inspired by LLMs. We leverage the ImagePrompter API via Segment Anything (SAM), making the model more accessible, packaged, and well maintained to industry standards.
Check the full tutorial: https://github.com/kornia/tutorials/blob/master/nbs/image_prompter.ipynb
import torch
from torch import Tensor

import kornia as K
from kornia.contrib.image_prompter import ImagePrompter
from kornia.geometry.keypoints import Keypoints
from kornia.geometry.boxes import Boxes

image: Tensor = K.io.load_image("soccer.jpg", K.io.ImageLoadType.RGB32, "cuda")

# Load the prompter
prompter = ImagePrompter(config, device="cuda")

# Set the image: this will preprocess the image and already generate its embeddings
prompter.set_image(image)

# Generate the prompts
keypoints = Keypoints(torch.tensor([[[500, 375]]], device="cuda"))  # BxNx2

# For the keypoints label: 1 indicates a foreground point; 0 indicates a background point
keypoints_labels = torch.tensor([[1]], device="cuda")  # BxN

boxes = Boxes(
    torch.tensor([[[[425, 600], [425, 875], [700, 600], [700, 875]]]], device="cuda"), mode='xyxy'
)

# Run the prediction with all prompts
prediction = prompter.predict(
    keypoints=keypoints,
    keypoints_labels=keypoints_labels,
    boxes=boxes,
    multimask_output=True,
)
Blur images by preserving edges via bilateral and guided blurring -> https://kornia.readthedocs.io/en/latest/filters.html#kornia.filters.guided_blur
`ImageLoadType` by @edgarriba in https://github.com/kornia/kornia/pull/2309
`kornia/data` by @johnnv1 in https://github.com/kornia/kornia/pull/2319
Full Changelog: https://github.com/kornia/kornia/compare/v0.6.11...v0.6.12
Published by ducha-aiki over 1 year ago
In this release we have added DISK, the best free local feature for 3D reconstruction (part of the winning solutions in IMC2021 together with SuperGlue).
Thanks to @jatentaki for the great work and relicensing the DISK to Apache 2!
import torch
import kornia.feature as KF

disk = KF.DISK.from_pretrained('depth').to(device)
with torch.inference_mode():
    inp = torch.cat([img1, img2], dim=0)
    features1, features2 = disk(inp, 2048, pad_if_not_divisible=True)
    kps1, descs1 = features1.keypoints, features1.descriptors
    kps2, descs2 = features2.keypoints, features2.descriptors
    dists, idxs = KF.match_smnn(descs1, descs2, 0.98)
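The matching step above relies on mutual nearest neighbors with a ratio test. The idea can be sketched in plain Python on small descriptor lists; this is an illustrative sketch of the concept, not kornia's `match_smnn` implementation, and the helper names are hypothetical:

```python
def mutual_nn_matches(descs1, descs2, ratio=0.98):
    """Match descriptors by mutual nearest neighbor with a ratio test."""
    def dist(a, b):
        # Euclidean distance between two descriptor vectors
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    matches = []
    for i, d1 in enumerate(descs1):
        # nearest and second-nearest neighbors of d1 in descs2
        order = sorted(range(len(descs2)), key=lambda j: dist(d1, descs2[j]))
        j = order[0]
        # ratio test: reject ambiguous matches whose best distance is not
        # clearly smaller than the second-best distance
        if len(order) > 1 and dist(d1, descs2[j]) > ratio * dist(d1, descs2[order[1]]):
            continue
        # mutual check: d1 must also be the nearest neighbor of descs2[j]
        back = min(range(len(descs1)), key=lambda k: dist(descs2[j], descs1[k]))
        if back == i:
            matches.append((i, j))
    return matches
```

Pairs that fail either the ratio test or the mutual check are discarded, which is what makes the matching robust to repeated textures.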
`core.check`, `Boxes`, and some others by @johnnv1 in https://github.com/kornia/kornia/pull/2219
`disallow_incomplete_defs` on mypy by @johnnv1 in https://github.com/kornia/kornia/pull/2094
`assert_close()` by @gau-nernst in https://github.com/kornia/kornia/pull/2233
`geometry.subpix` by @johnnv1 in https://github.com/kornia/kornia/pull/2253
Full Changelog: https://github.com/kornia/kornia/compare/v0.6.10...v0.6.11
Published by edgarriba over 1 year ago
`depth_from_disparity` function by @pri1311 in https://github.com/kornia/kornia/pull/2096
`PadTo` to docs by @johnnv1 in https://github.com/kornia/kornia/pull/2122
`apply_ColorMap` for integer tensor by @johnnv1 in https://github.com/kornia/kornia/pull/1996
`CenterCrop` docs example by @johnnv1 in https://github.com/kornia/kornia/pull/2124
`setup.py` by @johnnv1 in https://github.com/kornia/kornia/pull/2137
`upscale_double` by @vicsyl in https://github.com/kornia/kornia/pull/2105
`nightly` labeled condition by @johnnv1 in https://github.com/kornia/kornia/pull/2140
`TestUpscaleDouble` by @johnnv1 in https://github.com/kornia/kornia/pull/2147
`fail-fast:false` as default on tests workflow by @johnnv1 in https://github.com/kornia/kornia/pull/2146
`depth_from_disparity` to docs by @pri1311 in https://github.com/kornia/kornia/pull/2150
`LongestMaxSize` and `SmallestMaxSize` by @johnnv1 in https://github.com/kornia/kornia/pull/2131
`sphinx-autodoc-typehints==1.21.3` by @johnnv1 in https://github.com/kornia/kornia/pull/2159
`TestSSIM3d`, and `BaseTester.gradcheck` by @johnnv1 in https://github.com/kornia/kornia/pull/2152
`sphinx-autodoc-typehints` by @johnnv1 in https://github.com/kornia/kornia/pull/2166
`boxes`, `MultiResolutionDetector`, `apply colormap`, `AugmentationSequential` by @johnnv1 in https://github.com/kornia/kornia/pull/2167
`BaseTester` by @johnnv1 in https://github.com/kornia/kornia/pull/2120
`x` tests for `torch=1.12.1` and `accelerate` not available by @johnnv1 in https://github.com/kornia/kornia/pull/2178
`filters` module: Dropping JIT support by @johnnv1 in https://github.com/kornia/kornia/pull/2187
`integral_image` and `integral_tensor` by @AnimeshMaheshwari22 in https://github.com/kornia/kornia/pull/1779
`assert_allclose` by `assert_close` by @johnnv1 in https://github.com/kornia/kornia/pull/2210
`Augmentations` by @johnnv1 in https://github.com/kornia/kornia/pull/2215
Full Changelog: https://github.com/kornia/kornia/compare/v0.6.9...v0.6.10
Published by edgarriba almost 2 years ago
`kornia.geometry.liegroup` by @edgarriba in https://github.com/kornia/kornia/pull/1960
`Hyperplane` and `Ray` API by @edgarriba in https://github.com/kornia/kornia/pull/1963
`mypy` from running on tests by @johnnv1 in https://github.com/kornia/kornia/pull/1983
`# type: ignore` from `kornia.feature` by @johnnv1 in https://github.com/kornia/kornia/pull/1995
`kornia.geometry.linalg.euclidean_distance` by @edgarriba in https://github.com/kornia/kornia/pull/2000
`type: ignore` by @johnnv1 in https://github.com/kornia/kornia/pull/1998
`match_smnn` by @anstadnik in https://github.com/kornia/kornia/pull/2020
`kornia.augmentation` by @johnnv1 in https://github.com/kornia/kornia/pull/2028
`get` method by @johnnv1 in https://github.com/kornia/kornia/pull/2047
`RandomGaussianNoise` play nicely on GPU by @nitaifingerhut in https://github.com/kornia/kornia/pull/2050
`license_file` by @johnnv1 in https://github.com/kornia/kornia/pull/2057
`queued` and coverage upload by @johnnv1 in https://github.com/kornia/kornia/pull/2038
`fast_mode` on gradchecks by @johnnv1 in https://github.com/kornia/kornia/pull/2069
`RandomMotionBlur` is not deterministic when using `self._params` by @nitaifingerhut in https://github.com/kornia/kornia/pull/2068
`kornia.augmentation` by @johnnv1 in https://github.com/kornia/kornia/pull/2052
`fail-fast` on CI by @johnnv1 in https://github.com/kornia/kornia/pull/2085
`check_untyped_defs` on mypy by @johnnv1 in https://github.com/kornia/kornia/pull/2086
`disallow_any_generics` on mypy by @johnnv1 in https://github.com/kornia/kornia/pull/2092
`solve_cast` on torch 1.9 by @johnnv1 in https://github.com/kornia/kornia/pull/2066
`TensorWrapper`, `Vector3`, `Scalar` and improvements in `fit_plane` by @edgarriba in https://github.com/kornia/kornia/pull/1987
Full Changelog: https://github.com/kornia/kornia/compare/v0.6.8...v0.6.9
Published by edgarriba about 2 years ago
In this release we include an experimental `kornia.nerf` submodule with a high-level API that implements a vanilla Neural Radiance Field (NeRF). Read more about the roadmap of this project: https://github.com/kornia/kornia/issues/1936 // contribution by @YanivHollander
from kornia.nerf import NerfSolver
from kornia.geometry.camera import PinholeCamera
camera: PinholeCamera = create_one_camera(5, 9, device, dtype)
img = create_red_images_for_cameras(camera, device)
nerf_obj = NerfSolver(device=device, dtype=dtype)
num_img_rays = 15
nerf_obj.init_training(camera, 1.0, 3.0, False, img, num_img_rays, batch_size=5, num_ray_points=10, lr=1e-2)
nerf_obj.run(num_epochs=10)
img_rendered = nerf_obj.render_views(camera)[0].permute(2, 0, 1)
Improvements, docs and tutorials soon!
Added `kornia.contrib.EdgeDetector` API that implements DexiNed: https://github.com/xavysp/DexiNed
import torch
import kornia as K
from kornia.contrib import EdgeDetector

edge_detection = EdgeDetector().to(device)

# preprocess
img = K.image_to_tensor(frame, keepdim=False).to(device)
img = K.color.bgr_to_rgb(img.float())

# detect !
with torch.no_grad():
    edges = edge_detection(img)
img_vis = K.tensor_to_image(edges.byte())
After testing kornia LoFTR and AdaLAM under heavy load, our users and we have experienced some bugs in corner cases, such as big images or no input correspondences, which caused the pipeline to crash. Not anymore!
See demos in our HuggingFace space: https://huggingface.co/kornia
We have added a homography-from-line-segments solver, as well as various speed-ups. We are not yet at the OpenCV RANSAC quality level, and more improvements are to come :) But the line solver is pretty unique! We also have an example in our tutorials: https://kornia-tutorials.readthedocs.io/en/latest/line_detection_and_matching_sold2.html
We are slowly working on being able to run kornia on M1. So far we have added the possibility to test locally on M1, and we mostly report PyTorch MPS backend crashes in various use cases. Once this work is finished, we may provide some workarounds to get kornia running on M1.
Implemented Quaternion.slerp to interpolate between quaternions using quaternion arithmetic -- contributed by @cjpurackal
import torch
from kornia.geometry.quaternion import Quaternion
q0 = Quaternion.identity(batch_size=1)
q1 = Quaternion(torch.tensor([[1., .5, 0., 0.]]))
q2 = q0.slerp(q1, .3)
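Under the hood, slerp interpolates along the great arc between two unit quaternions. The math can be sketched in plain Python; this is a sketch of the standard formula (with a normalized-lerp fallback for nearly parallel inputs), not kornia's implementation:

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    # flip one quaternion to take the shorter arc
    if dot < 0.0:
        q1, dot = [-c for c in q1], -dot
    if dot > 0.9995:
        # nearly parallel: fall back to a linear blend, normalized below
        out = [a + t * (b - a) for a, b in zip(q0, q1)]
    else:
        theta = math.acos(dot)  # angle between the quaternions
        out = [
            (math.sin((1 - t) * theta) * a + math.sin(t * theta) * b) / math.sin(theta)
            for a, b in zip(q0, q1)
        ]
    # renormalize to stay on the unit sphere
    n = math.sqrt(sum(c * c for c in out))
    return [c / n for c in out]
```

By construction, `slerp(q0, q1, 0.0)` returns `q0`, `slerp(q0, q1, 1.0)` returns `q1`, and every intermediate result is again a unit quaternion.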
`add_weighted` to accept Tensors for `alpha`/`beta`/`gamma` by @nitaifingerhut in https://github.com/kornia/kornia/pull/1868
`EdgeDetection` api by @edgarriba in https://github.com/kornia/kornia/pull/1483
Full Changelog: https://github.com/kornia/kornia/compare/v0.6.7...v0.6.8
Published by edgarriba about 2 years ago
Contributed by SOLD2 original authors
AdaLAM works particularly well with `kornia.feature.KeyNetAffNetHardNet`. AdaLAM is adapted from the original author's implementation.
import matplotlib.pyplot as plt
import cv2
import kornia as K
import kornia.feature as KF
import numpy as np
import torch
from kornia_moons.feature import *

def load_torch_image(fname):
    img = K.image_to_tensor(cv2.imread(fname), False).float() / 255.
    img = K.color.bgr_to_rgb(img)
    return img

device = K.utils.get_cuda_device_if_available()
fname1 = 'kn_church-2.jpg'
fname2 = 'kn_church-8.jpg'
img1 = load_torch_image(fname1)
img2 = load_torch_image(fname2)
feature = KF.KeyNetAffNetHardNet(5000, True).eval().to(device)
hw1 = torch.tensor(img1.shape[2:])
hw2 = torch.tensor(img2.shape[2:])
adalam_config = {"device": device}
with torch.inference_mode():
    lafs1, resps1, descs1 = feature(K.color.rgb_to_grayscale(img1))
    lafs2, resps2, descs2 = feature(K.color.rgb_to_grayscale(img2))
    dists, idxs = KF.match_adalam(descs1.squeeze(0), descs2.squeeze(0),
                                  lafs1, lafs2,  # AdaLAM also takes geometric information into account
                                  config=adalam_config,
                                  hw1=hw1, hw2=hw2)  # AdaLAM also benefits from knowing the image size
More - in our Tutorials section
Converting a camera pose from (R, t) to the actual pose in world coordinates can be a pain. We are relieving you from it by implementing various conversion functions, such as `camtoworld_to_worldtocam_Rt`, `worldtocam_to_camtoworld_Rt`, `camtoworld_graphics_to_vision_4x4`, etc. The conversions come in two variants: for an (R, t) tensor tuple, or for a single extrinsics 4x4 matrix.
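The core of these conversions is simply inverting a rigid transform: if (R, t) maps camera to world, then world-to-camera is (R^T, -R^T t). A plain-Python sketch with 3x3 row-major lists (illustrative only; the function name here is hypothetical and not the kornia signature, which operates on batched tensors):

```python
def camtoworld_to_worldtocam(R, t):
    """Invert a rigid transform: (R, t) -> (R^T, -R^T t)."""
    # transpose of a rotation matrix is its inverse
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    # new translation: -R^T t
    t_new = [-sum(Rt[i][j] * t[j] for j in range(3)) for i in range(3)]
    return Rt, t_new
```

Applying the function twice round-trips back to the original (R, t), which is a handy sanity check for any pose-convention conversion.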
More geometry-related stuff! We have added a Quaternion API to make working with rotation representations easy. Check out the PR.
>>> q = Quaternion.identity(batch_size=4)
>>> q.data
Parameter containing:
tensor([[1., 0., 0., 0.],
[1., 0., 0., 0.],
[1., 0., 0., 0.],
[1., 0., 0., 0.]], requires_grad=True)
>>> q.real
tensor([[1.],
[1.],
[1.],
[1.]], grad_fn=<SliceBackward0>)
>>> q.vec
tensor([[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]], grad_fn=<SliceBackward0>)
We recently included `RandomMosaic` to apply mosaic image transforms and combine them into one output image. The output image is composed of parts from each sub-image.
The mosaic transform steps are as follows:
>>> mosaic = RandomMosaic((300, 300), data_keys=["input", "bbox_xyxy"])
>>> boxes = torch.tensor([[
... [70, 5, 150, 100],
... [60, 180, 175, 220],
... ]]).repeat(8, 1, 1)
>>> input = torch.randn(8, 3, 224, 224)
>>> out = mosaic(input, boxes)
>>> out[0].shape, out[1].shape
(torch.Size([8, 3, 300, 300]), torch.Size([8, 8, 4]))
Thanks to @nitaifingerhut
!wget https://github.com/kornia/data/raw/main/drslump.jpg
import torch
import numpy as np
import kornia
import cv2
import matplotlib.pyplot as plt

# read the image with OpenCV
img: np.ndarray = cv2.imread('./drslump.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# convert to torch tensor
data: torch.Tensor = kornia.image_to_tensor(img, keepdim=False) / 255.  # BxCxHxW
data -= 0.2 * torch.rand_like(data).abs()

plt.figure(figsize=(12, 8))
edge_blurred = kornia.filters.edge_aware_blur_pool2d(data, 19)
plt.imshow(kornia.tensor_to_image(torch.cat([data, edge_blurred], axis=3)))
`total_variation` + adding `reduction` by @nitaifingerhut in https://github.com/kornia/kornia/pull/1815
Full Changelog: https://github.com/kornia/kornia/compare/v0.6.6...v0.6.7
Published by edgarriba over 2 years ago
First of the integrations to revamp `kornia.geometry` to align with Eigen and Sophus.
Docs: https://kornia.readthedocs.io/en/latest/geometry.line.html?#kornia.geometry.line.ParametrizedLine
See: example: https://github.com/kornia/kornia/blob/master/examples/geometry/fit_line2.py
`load_image`
Automated the packaging infra in `kornia_rs` to handle multi-architecture builds. Arm64 soon :)
See: https://github.com/kornia/kornia-rs
# load the image using the rust backend
img: Tensor = K.io.load_image(file_name, K.io.ImageLoadType.RGB32)
img = img[None] # 1xCxHxW / fp32 / [0, 1]
Created the Kornia AI org under the HuggingFace platform.
We are starting to port the tutorials under the HuggingFace kornia org to rapidly show live docs and build community.
Link: https://huggingface.co/kornia
Demos:
`EarlyStopping` condition by @edgarriba in https://github.com/kornia/kornia/pull/1718
`meshgrid` needs `indexing` argument by @FavorMylikes in https://github.com/kornia/kornia/pull/1629
`project` and `unproject` in `PinholeCamera` by @YanivHollander in https://github.com/kornia/kornia/pull/1729
`filter2D`/`filter3D` api by @edgarriba in https://github.com/kornia/kornia/pull/1725
`rgb_to_y` by @nitaifingerhut in https://github.com/kornia/kornia/pull/1734
`get_perspective_transform` by @edgarriba in https://github.com/kornia/kornia/pull/1767
`KORNIA_CHECK_SAME_DEVICES` by @MrShevan in https://github.com/kornia/kornia/pull/1788
`sphinxcontrib.gtagjs` to track docs by @edgarriba in https://github.com/kornia/kornia/pull/1790
`ParametrizedLine` and `fit_line` by @edgarriba in https://github.com/kornia/kornia/pull/1794
Full Changelog: https://github.com/kornia/kornia/compare/v0.6.5...v0.6.6
Published by edgarriba over 2 years ago
`kornia.io` and implement `load_image` with rust (#1701)
`diamond_square` and plasma augmentations: `RandomPlasmaBrightness`, `RandomPlasmaContrast`, `RandomPlasmaShadow` (#1700)
`RandomRGBShift` augmentation (#1694)
`adjust_sigmoid` and `adjust_log` initial implementation (#1685)
`MS_SSIMLoss` (#1655)
`torch.uint8` (#1705)
👩‍💻 👨‍💻 We would like to thank all contributors for this new release!
@Jonas1312 @nitaifingerhut @qwertyforce @ashnair1 @ducha-aiki @z0gSh1u @simon-schaefer @shijianjian @edgarriba @HJoonKwon @ChristophReich1996 @Tanmay06 @dobosevych @miquelmarti @Oleksandra2020
If we forgot someone let us know!
Published by edgarriba over 2 years ago
`draw_convex_polygon` (#1636)
`return_transform`, enabled 3D augmentations in `AugmentationSequential` (#1590)
👩‍💻 👨‍💻 We would like to thank all contributors for this new release!
@ducha-aiki @edgarriba @shijianjian @juliendenize @ashnair1 @KhaledSharif @Parskatt @shazhou2015 @JoanFM @nrupatunga @kristijanbartol @miquelmarti @riegerfr @nitaifingerhut @dichen-cd @lamhoangtung @hasibzunair @wendy-xiaozong @rsomani95 @huuquan1994 @twsl
If we forgot someone let us know!
Published by edgarriba over 2 years ago
👩‍💻 👨‍💻 We would like to thank all contributors for this new release!
@ducha-aiki @edgarriba @shijianjian @julien-blanchon @lferraz @miquelmarti @twsl @nitaifingerhut @eungbean @aaroswings @huuquan1994 @rsomani95
If we forgot someone let us know!
Published by edgarriba almost 3 years ago
`ObjectDetectorTrainer` (#1414)
`OneOf` documentation (#1443)
`warp_perspective` (#1452)
`draw_line` image utility (#1456)
👩‍💻 👨‍💻 We would like to thank all contributors for this new release!
@ducha-aiki @edgarriba @chinhsuanwu @dobosevych @shijianjian @rvorias @fmiotello @hal-314 @trysomeway @miquelmarti @calmdown13 @twsl @Abdelrhman-Hosny
If we forgot someone let us know!
Published by edgarriba almost 3 years ago
(0.6.1)
Published by edgarriba almost 3 years ago
(0.6.0)
Release time: 2021-10-22
👩‍💻 👨‍💻 We would like to thank all contributors for this new release!
@AK391 @cclauss @edgarriba @ducha-aiki @isaaccorley @justanhduc @jatentaki @shijianjian @shiyangc-intusurg @SravanChittupalli @thatbrguy @nvshubhsharma @PWhiddy @oskarflordal @tacoelho @YanivHollander @jhacsonmeza
If we forgot someone let us know!
Published by edgarriba about 3 years ago
(0.5.11)
Release time: 2021-09-19
👩‍💻 👨‍💻 We would like to thank all contributors for this new release!
@Abdelrhman-Hosny @ducha-aiki @edgarriba @EStorm21 @lyhyl @shijianjian @thatbrguy
If we forgot someone let us know!
Published by edgarriba about 3 years ago
@bkntr @bsuleymanov @ducha-aiki @edgarriba @hal-314 @kingsj0405 @shijianjian
If we forgot someone let us know!
Published by shijianjian about 3 years ago
@bsuleymanov @dhernandez0 @ducha-aiki
If we forgot someone let us know!
Published by edgarriba over 3 years ago
@dkoguciuk @edgarriba @lferraz @shijianjian
If we forgot someone let us know!