Cross-platform Python playback library for Azure Kinect MKV files.
GPL-2.0 License
It is possible to play back Azure Kinect video files (MKV) without using the official SDK. This allows the software to run on systems where the depth engine is not available, such as macOS. The library currently only supports the playback of MKV files and does not provide direct access to the Azure Kinect device.
The following functions are currently supported:
```bash
pip install open-azure-kinect
```
To load an MKV file, create a new instance of the `OpenK4APlayback` class. If the `is_looping` flag is set, playback does not stop at the end of the file; instead, the file is automatically closed and reopened.
```python
from openk4a.playback import OpenK4APlayback

azure = OpenK4APlayback("my-file.mkv")
azure.is_looping = True  # set loop option if necessary
azure.open()
```
After that, it is possible to read the available stream information.
```python
for stream in azure.streams:
    print(stream)

# print clip duration
print(azure.duration_ms)
```
And read the actual capture information (image data).
```python
while capture := azure.read():
    # read color frame as numpy array
    color_image = capture.color

    # print current timestamp in ms (of the video timeline)
    print(azure.timestamp_ms)
```
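Since the color frame is a plain NumPy array, any array-based processing can be applied per capture. A minimal sketch (using a synthetic frame as a stand-in for `capture.color`, since no recording is bundled here) that computes the mean brightness one might log per frame:

```python
import numpy as np

# synthetic stand-in for capture.color: a 720p BGR frame,
# left half white, right half black
color_image = np.zeros((720, 1280, 3), dtype=np.uint8)
color_image[:, :640] = 255

# overall mean brightness of the frame
mean_brightness = float(color_image.mean())
print(mean_brightness)  # 127.5
```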
With `seek(timestamp_ms: int)` it is possible to jump to a specific position in the video. The current implementation is not very efficient: the library simply skips frames until the timestamp is reached. In the future, this should be replaced with an ffmpeg-controlled seek.
```python
# jump +1 second into the future
azure.seek(azure.timestamp_ms + 1000)
```
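Conceptually, the current seek behaviour boils down to repeatedly reading frames until the target is reached. The sketch below illustrates that loop; `naive_seek` and `FakePlayer` are hypothetical stand-ins, not part of the library:

```python
def naive_seek(player, target_ms: int) -> None:
    """Skip frames until the playback timestamp reaches target_ms."""
    while player.timestamp_ms < target_ms:
        if player.read() is None:  # reached EOF before the target
            break

class FakePlayer:
    """Minimal stand-in that advances 33 ms per read()."""
    def __init__(self, n_frames: int):
        self.timestamp_ms = 0
        self._remaining = n_frames

    def read(self):
        if self._remaining == 0:
            return None
        self._remaining -= 1
        self.timestamp_ms += 33
        return object()

player = FakePlayer(100)
naive_seek(player, 1000)
print(player.timestamp_ms)  # 1023 (first timestamp >= 1000)
```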
To access the calibration data of the two cameras (color and depth), use the parsed calibration properties.

```python
color_calib = azure.color_calibration
depth_calib = azure.depth_calibration
```
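For background, a camera calibration typically consists of an intrinsic matrix (plus distortion coefficients). A minimal pinhole projection with NumPy, independent of this library's calibration classes and using made-up intrinsics:

```python
import numpy as np

# hypothetical intrinsic matrix: fx = fy = 600, principal point (640, 360)
K = np.array([
    [600.0,   0.0, 640.0],
    [  0.0, 600.0, 360.0],
    [  0.0,   0.0,   1.0],
])

# a 3D point in camera coordinates (x, y, z)
point_3d = np.array([0.1, -0.05, 1.5])

# project onto the image plane and dehomogenize
uvw = K @ point_3d
u, v = uvw[:2] / uvw[2]
print(u, v)  # 680.0 340.0
```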
The `CameraTransform` class handles the transformations between the different cameras.

⚠️ Be aware that this part of the framework is still very much under development! The methods are not as accurate as the Azure Kinect SDK, because some optimisations have not yet been taken into account. Please open a PR if you would like to improve it.
```python
import numpy as np
from openk4a.transform import CameraTransform

estimated_depth_mm = 1500  # adjust this value to improve the calculation accuracy
transform = CameraTransform(azure.color_calibration, azure.depth_calibration, estimated_depth_mm)

# transform points from color to depth image
depth_points = transform.transform_2d_color_to_depth(np.array([[300, 400], [200, 200]]))

# transform color image into depth image
transformed_color = transform.align_image_depth_to_color(color_image)
```
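For intuition, the 2D color-to-depth mapping amounts to unprojecting a color pixel at the estimated depth, applying the extrinsic transform between the cameras, and reprojecting into the depth image. A standalone sketch with made-up intrinsics and extrinsics (not values from any real recording, and independent of the library's implementation):

```python
import numpy as np

# hypothetical intrinsics for the color and depth cameras
K_color = np.array([[600.0, 0.0, 640.0], [0.0, 600.0, 360.0], [0.0, 0.0, 1.0]])
K_depth = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 288.0], [0.0, 0.0, 1.0]])

# hypothetical extrinsics: rotation and translation (mm) from color to depth
R = np.eye(3)
t = np.array([-32.0, 0.0, 0.0])

def color_pixel_to_depth_pixel(uv, depth_mm):
    # unproject the color pixel to a 3D point at the estimated depth
    xyz_color = np.linalg.inv(K_color) @ np.array([uv[0], uv[1], 1.0]) * depth_mm
    # move the point into the depth camera's coordinate frame
    xyz_depth = R @ xyz_color + t
    # reproject into the depth image and dehomogenize
    uvw = K_depth @ xyz_depth
    return uvw[:2] / uvw[2]

print(color_pixel_to_depth_pixel((640, 360), 1500.0))
```

The accuracy of this mapping depends directly on how close the estimated depth is to the true depth, which is why `estimated_depth_mm` above influences the calculation accuracy.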
To run the examples or develop the library, please install both `dev-requirements.txt` and `requirements.txt`:

```bash
pip install -r dev-requirements.txt
pip install -r requirements.txt
```
The example script `demo.py` shows how to use the library.
Thanks to tikuma-lsuhsc for creating python-ffmpegio and helping me extract the Azure Kinect data.