📸 A powerful, high-performance React Native Camera library.
MIT License
Published by mrousavy about 1 year ago
Woohoo, another VisionCamera release! ✨ Highlights:
VisionCamera 3.4.0 features an all-new atomic locking `Core/` implementation on iOS.
Previously, the Camera Device would be locked, configured, and unlocked again for every prop you set (`device`, `format`, `hdr`, `fps`, ...), which would obviously cost a lot of time.
Now, VisionCamera batches all of those calls under a single `configure(...)` call, effectively locking the Camera Device only once, which makes VisionCamera up to 7x faster! 🥳 (see https://github.com/mrousavy/react-native-vision-camera/issues/1975 for more details)
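To make the batching idea concrete, here is a simplified sketch in plain TypeScript. This is NOT VisionCamera's actual `Core/` code: the `CameraSession` class, the lock counter, and the prop names are illustrative stand-ins for the expensive native device lock/unlock cycle described above.

```typescript
// A simplified sketch of the batching idea (NOT VisionCamera's actual Core/ code;
// CameraSession, the lock counter and the prop names are illustrative stand-ins).
type CameraConfig = { device?: string; format?: string; hdr?: boolean; fps?: number }

class CameraSession {
  private config: CameraConfig = {}
  private lockCount = 0 // stands in for the expensive device lock/unlock cycle

  // Old behaviour: every prop change locked and unlocked the device.
  setProp<K extends keyof CameraConfig>(key: K, value: CameraConfig[K]): void {
    this.lock()
    this.config[key] = value
    this.unlock()
  }

  // New behaviour: all changes are applied under a single lock.
  configure(changes: CameraConfig): void {
    this.lock()
    Object.assign(this.config, changes)
    this.unlock()
  }

  private lock(): void { this.lockCount++ }
  private unlock(): void {}

  get locks(): number { return this.lockCount }
}

const session = new CameraSession()
session.configure({ device: 'back', hdr: true, fps: 60 })
console.log(session.locks) // 1: one lock cycle for all three props
```

Setting the same three props through `setProp` would have cost three lock cycles; `configure` pays that price once, which is where the speedup comes from.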
On Android, the performance of the `CodeScanner` has also been significantly improved, and a stalling issue has been fixed.
Additionally, 3.4.0 fixes a few other small issues related to torch, orientation, memory cleanup and navigation.
[!IMPORTANT]
3.4.0 now requires 🔧 Xcode 15 or higher.
- `Core/` library 🥳 (#1975) (cd0b413)
- `CodeScanner` stalling on Android 🥳 (#2009) (e8ae11e)
- `outputs` in `destroy()` (10a44d5)
- `CaptureSession` fully synchronously under Mutex (#1972) (18b30cd)
- `CaptureSession` configuration (18e6926)
- `CameraSession` in `onDetachedFromWindow()` (#1962) (02726d4)
- `focus()` on iOS (#1943) (a4448c3)

Published by mrousavy about 1 year ago
- `enableCodeScanner` to Expo Config Plugin (ffd64fe)
- `string-hash-64` dependency (fab631d)
- `minSdkVersion` of 26 again (5969992)

Published by mrousavy about 1 year ago
The most requested feature is here: VisionCamera 3.3.0 finally contains a QR-code/Barcode scanner!
Check out the Code Scanner documentation.
- (`.cxx/`) on clean (4fc8cd2)
- `+load` not available in Xcode 15 error (#1908) (1cdc3d1)
- `CamcorderProfile` get crash on Samsung devices (#1907) (83c0cdb)
- `dng` PixelFormat (d465c37)
- `minSdkVersion` to 23 (#1911) (324e269)
- `runAsync` example (ce07750)

Published by mrousavy about 1 year ago
VisionCamera 3.2.0 is another big release, I spent another 150 hours getting this out!
Huge shoutout to all my sponsors, thank you for supporting me! ❤️
Three major changes:
- `videoBitRate` for recording videos! Check out the video bit rate documentation.
- `videoBitRate`) (#1882) (902dc63)
- `ImageWriter` into OpenGL pipeline (#1874) (954b448)
- `AHardwareBuffer*` for `frame.toArrayBuffer()` (#1888) (cf1952d)
- `getCameraDevice` to return `undefined` when no Devices are available (e.g. iOS Simulator) (#1848) (f7428f2)
- `preferredDevice` (fb6ebd9), closes #1870
Published by mrousavy about 1 year ago
VisionCamera 3.1.0 features a ton of changes, including an all-new devices API (`useCameraDevice(..)`), a new formats API (`useCameraFormat(..)`), USB Camera support, a `resizeMode` prop for the Preview, performance improvements, buffer compression, and an all-new rewritten documentation!
I spent around 100 hours on this release, so if you appreciate my work please consider sponsoring me on GitHub or 🍪 buy me a Ko-Fi :)
The new documentation is live at react-native-vision-camera.com.
Warning: There are breaking changes to the device selection APIs. The new Device APIs look like this:

```tsx
const device = useCameraDevice('back')
const format = useCameraFormat(device, [
  { fps: 60 },
  { videoResolution: 'max' }
])
return <Camera {...props} device={device} />
```
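The filter list passed to `useCameraFormat` is priority-ordered: earlier entries matter more than later ones. Here is a minimal sketch of how such priority-ordered filtering can work. This is purely illustrative and not the library's actual scoring algorithm; the `Format` type and the weighting scheme are assumptions for the example.

```typescript
// Illustrative priority-ordered format picker (not VisionCamera's real implementation).
type Format = { fps: number; videoWidth: number; videoHeight: number }

// Each filter returns 1 if the format satisfies the wish, 0 otherwise.
type Filter = (format: Format) => number

function selectFormat(formats: Format[], filters: Filter[]): Format | undefined {
  let best: Format | undefined
  let bestScore = -Infinity
  for (const format of formats) {
    // Earlier filters get exponentially higher weight, so a format matching
    // the first wish always beats one that only matches later wishes.
    let score = 0
    filters.forEach((filter, i) => {
      score += filter(format) * 2 ** (filters.length - i)
    })
    if (score > bestScore) {
      bestScore = score
      best = format
    }
  }
  return best
}

const formats: Format[] = [
  { fps: 30, videoWidth: 3840, videoHeight: 2160 },
  { fps: 60, videoWidth: 1920, videoHeight: 1080 },
]
const picked = selectFormat(formats, [
  (f) => (f.fps >= 60 ? 1 : 0),          // first wish: 60 FPS
  (f) => (f.videoWidth >= 3840 ? 1 : 0), // second wish: 4k video
])
console.log(picked) // the 60 FPS format wins, since fps was listed first
```

Reordering the two wishes would flip the result, which is exactly the "earlier filters win" behaviour the hook's filter array implies.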
- `useCameraDevice` and `useCameraFormat` (#1784) (977b859) / (#1841) (2d96381)
- `getAvailableCameraDevices()` synchronous (#1784) (977b859)
- `addCameraDevicesChanged(...)` listener (#1784) (977b859)
- `resizeMode` prop for Preview (`cover`/`contain`) (#1838) (3169444) / (#1817) (c0b80b3) with @blancham
- `useCameraPermission()` and `useMicrophonePermission()` hooks (#1823) (327aade)
- `enableBufferCompression`) (#1828) (fffefa9)
- `Templates` API for choosing Camera Formats (#1844) (706341f)
- `useCameraFormats` API (#1841) (2d96381)
- `any[]` -> `List<Object>` in FP Android (#1760) (b4b0e49)
- `h264` videoCodec type for RecordVideoOptions (#1808) (18c7034) by @iketiunn
- `userPreferredCameraDevice` on Android (aafffa6)
- `getAvailableCameraDevices()` synchronous/instant (no more await!) (#1784) (977b859)

Published by mrousavy about 1 year ago
This is the third major version for VisionCamera, VisionCamera V3, which features a full codebase rewrite on Android, and a huge refactor on iOS to make it more stable, more flexible, and more performant than ever!
[!NOTE]
If you want to keep using V2, I plan to provide limited support to V2 on the V2 branch here.
VisionCamera V3 has been an intense journey for me: I spent over 700 hours in total building VisionCamera V3 to make it as fast and powerful as possible. Lots of research went into this, and writing a custom OpenGL GPU pipeline in C++ is far from easy - there ain't no documentation about this online at all!
Here's the original V3 issue/discussion board: https://github.com/mrousavy/react-native-vision-camera/issues/1376
If you appreciate what I'm doing in VisionCamera, please consider sponsoring me on GitHub or 🍪 buy me a Ko-Fi 🍪 to show your support. Thank you!
These are some of the major features:
On Android, there are three APIs for using the Camera: Camera1 (deprecated), Camera2, and CameraX. Camera2 is known for being insanely hard to use, so Google built CameraX, a library that uses Camera2 under the hood but significantly simplifies it to make it easier to use.
This sounds great at first, so I used CameraX for VisionCamera V1 and V2. Unfortunately, due to its simplifications (and its immaturity), a lot of features that worked on iOS were simply broken or missing on Android.
In V3, I rewrote the entire Android codebase from CameraX to the lower-level Camera2 library, which allows for many great new features:
- `videoWidth`/`videoHeight` and `photoWidth`/`photoHeight` sizes! ✅
- `androidx.camera:+` 🔥

Because this is a full rewrite to a much lower-level library on Android, there might be some things that broke. Please make sure to report an issue if you spot such things.
Currently, these are the things that are not yet working on Android:
On Android, I built a custom OpenGL GPU video pipeline in C++ that handles rendering the input Frames to multiple output Surfaces (currently only Video Recordings and Frame Processing). This is roughly how it works:
```
Camera --> GL_TEXTURE_EXTERNAL_OES;
GL_TEXTURE_EXTERNAL_OES -- PassThroughShader --> FP[Frame Processor Output EGLSurface];
GL_TEXTURE_EXTERNAL_OES -- PassThroughShader --> VR[Video Recorder Output EGLSurface];
```
The FP and VR output surfaces can be swapped at any point, meaning adding or removing a Frame Processor is much more performant. Also, this implicitly handles resizing the buffers to match the output dimensions fully automatically on the GPU, making it insanely fast.
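To make that single-input / multi-output idea concrete, here is a tiny model in plain TypeScript. This is only a sketch: the real pipeline operates on OpenGL textures and EGLSurfaces, and the `VideoPipeline`/`Surface` names here are stand-ins mirroring the description above.

```typescript
// Tiny model of the single-input / multi-output VideoPipeline idea.
// Purely illustrative: the real pipeline copies an OES texture to EGLSurfaces
// with a pass-through shader; here "frames" are just strings.
type Surface = (frame: string) => void

class VideoPipeline {
  private outputs = new Map<string, Surface>()

  attach(name: string, surface: Surface): void {
    this.outputs.set(name, surface)
  }

  // Detaching is cheap: the camera itself is never reconfigured.
  detach(name: string): void {
    this.outputs.delete(name)
  }

  // Called for every camera frame; forwards it to all attached outputs.
  onFrame(frame: string): void {
    for (const surface of this.outputs.values()) surface(frame)
  }
}

const pipeline = new VideoPipeline()
const recorded: string[] = []
pipeline.attach('videoRecorder', (f) => recorded.push(f))
pipeline.onFrame('frame-1')
// Adding a Frame Processor later just attaches another output:
pipeline.attach('frameProcessor', (f) => console.log(`processing ${f}`))
pipeline.onFrame('frame-2')
pipeline.detach('frameProcessor') // swap outputs at any point
```

The camera only ever sees one output (the pipeline), so attaching or detaching a Frame Processor never triggers a camera reconfiguration.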
The main benefit of this pipeline is that we no longer need to attach two outputs to the Camera but only one - the OpenGL VideoPipeline itself. This introduces two new features on Android:
- `supportsParallelProcessing` prop is gone since this is now always supported! 🤩
- `pixelFormat` prop has been added which allows you to choose whether to stream `rgb`, `yuv` or `native` frames in a Frame Processor.
- `enableShutterSound` has been added to `takePhoto()` to play or mute the sound on photo capture
- `frame.toByteArray()`. This is pretty efficient and can be used to process the raw pixels using libraries like react-native-fast-tflite.
- `pixelFormat` (either `yuv`, `rgb` or `native`)
- `orientation`
- `isMirrored`
- `timestamp`
- `sensorOrientation` (the orientation that you have to rotate by to get to the device's neutral portrait orientation)
- `hardwareLevel`
- `h264` or `h265` (HEVC) 🔥
- `minFps`/`maxFps` in favour of `frameRateRanges`
Frame Processors now use react-native-worklets-core in favor of react-native-reanimated. With this refactor, there are a few changes:
```tsx
// `model` is a C++ JSI HostObject
const model = useTensorflowModel(require('assets/face-detector.tflite'))

const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  const pixels = frame.toArrayBuffer()
  // `model` can be used in this 'worklet' without copying anything!
  const faces = model.run(pixels)
}, [model])
```
This makes it much easier for general-purpose processing libraries to be used inside Frame Processors. For this example, TFLite can run any `.tflite` model with your Camera, all from JS while still being powered by C++/GPU.

`runAsync(..)`:
```tsx
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  console.log('New Frame')
  runAsync(frame, () => {
    'worklet'
    const faces = detectFaces(frame)
    const face = faces[0]
    console.log(`Detected a new face: ${face}`)
  })
})
```
New Frames can stream in (`'New Frame'` being logged) while the async context is still executing `detectFaces`, fully in parallel.

`runAtTargetFps(.., fps)`:
```tsx
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  console.log('New Frame')
  runAtTargetFps(5, () => {
    'worklet'
    const faces = detectFaces(frame)
    console.log(`Detected a new face: ${faces[0]}`)
  })
})
```
In this case, the face detector will only be called 5 times per second.

Frame Processor Plugins are now object-oriented and can be initialized from JS with custom options. This allows you to pass options (like which model to use, or fast vs. accurate mode) to a native FP plugin such as a face-detection or pose-detection algorithm.
Old syntax:
```tsx
export function examplePlugin(frame: Frame) {
  'worklet'
  return VisionCameraPlugins.__examplePlugin(frame)
}
```
New syntax:
```tsx
const plugin = VisionCameraProxy.getFrameProcessorPlugin('example_plugin')

export function examplePlugin(frame: Frame) {
  'worklet'
  return plugin.call(frame)
}
```
And `getFrameProcessorPlugin` can also accept options, which arrive as an `NSDictionary` on iOS (in the `init:` call) and a `Dictionary<>` on Android (in the constructor).
See the `ExampleFrameProcessorPlugin.m`/`ExampleFrameProcessorPlugin.java` for the native changes.
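For intuition, the look-up-once / call-per-frame pattern can be modeled in plain TypeScript like this. This is a self-contained sketch, not the real native bridge: in VisionCamera the plugin lookup happens in native code, and the `mode` option key below is made up for illustration.

```typescript
// Sketch of object-oriented Frame Processor Plugins: a plugin is initialized
// once with options, then its call() method runs per frame.
type Frame = { width: number; height: number }

interface FrameProcessorPlugin {
  call(frame: Frame): unknown
}

type PluginFactory = (options?: Record<string, unknown>) => FrameProcessorPlugin

// In VisionCamera this registry lives in native code; here it is a plain Map.
const registry = new Map<string, PluginFactory>()

registry.set('example_plugin', (options) => {
  const mode = String(options?.mode ?? 'default') // hypothetical option key
  return {
    call: (frame) => `processed ${frame.width}x${frame.height} (mode: ${mode})`,
  }
})

function getFrameProcessorPlugin(
  name: string,
  options?: Record<string, unknown>
): FrameProcessorPlugin {
  const factory = registry.get(name)
  if (factory == null) throw new Error(`Frame Processor Plugin "${name}" not found!`)
  return factory(options) // initialized once, with custom options
}

const plugin = getFrameProcessorPlugin('example_plugin', { mode: 'fast' })
console.log(plugin.call({ width: 1920, height: 1080 }))
// -> "processed 1920x1080 (mode: fast)"
```

The key design point is that the (potentially expensive) plugin initialization with options happens once, while the per-frame `call(..)` stays as cheap as possible.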
If you followed the V3 journey you might know that originally I planned to add Skia support for VisionCamera, allowing you to draw onto a Frame in realtime.
This was possible until VisionCamera V3 RC.9 with an amazingly simple API:
```tsx
const paint = Skia.Paint()
paint.setColor('red')

const frameProcessor = useSkiaFrameProcessor((frame) => {
  'worklet'
  const faces = detectFaces(frame)
  faces.forEach((face) => {
    const rect = Skia.Rect(face.x, face.y, face.width, face.height)
    frame.drawRect(rect, paint)
  })
}, [paint])
```
Or, to implement color filters (VHS filter, sepia, beauty, invert colors, ...) you could simply use Skia Shaders:
```tsx
const INVERTED_COLORS_SHADER = `
  uniform shader image;

  half4 main(vec2 pos) {
    vec4 color = image.eval(pos);
    return vec4(1.0 - color.rgb, 1.0);
  }
`
const imageFilter = Skia.ImageFilter.MakeRuntimeShader(INVERTED_COLORS_SHADER)
const paint = Skia.Paint()
paint.setImageFilter(imageFilter)

const frameProcessor = useSkiaFrameProcessor((frame) => {
  'worklet'
  frame.render(paint)
}, [])
```
...and the resulting texture that was rendered would also be written to a video or photo file if you started capturing.
Buuuuuuut I decided to remove Skia support from VisionCamera, as the codebase just got way too complex for me to maintain with the two pipelines (one with Skia and one without). See this PR for more information: https://github.com/mrousavy/react-native-vision-camera/pull/1740 (it even includes the entire code for that lol)
If you/your business wants this, reach out to me/us through our website margelo.io and we can build a customized Camera solution for you - it works, it's just not suitable for the VisionCamera repo, I want to keep that lean. On our website we even have a demo with a custom solution for one of our clients, Stori, which implements realtime face filters just like on Snapchat.
- `pixelFormat` property to Camera (df5718d)
- `*NativeMap` and `*NativeArray` with `Map<K,V>` and `List<T>` for faster JSI -> JNI calls (#1720) (dfb86e1)
- `ImageReader` and use YUV Image Buffers in Skia Context (#1689) (d38ba59)
- `runAsync` and `runAtTargetFps`) (#1472) (30b5615)
- `enableShutterSound` prop to `takePhoto()` (#1702) (a46839a)
- `enableZoomGesture` on Android (efe6556)
- `focus()` on Android (#1713) (23af74a)
- `ByteBuffer` for much faster `toArrayBuffer()` ⚡ (521d7c8)
- `Frame` as if it was a Skia Canvas (#1479) (12f850c), closes #1487
- `fpsGraph` prop to show a debug view of the current FPS the Camera is drawing at (#1479)
- `previewType` prop to switch between native OS preview and the Skia Canvas preview view (#1479)
- `toByteArray()`, `orientation`, `isMirrored` and `timestamp` to `Frame` (#1487)
- `CameraDevice` + `CameraFormat` detection using CameraX (#1495) (0d83a13)
- `namespace` in build.gradle) (7ae15af)
- `VisionCameraProxy` object, make `FrameProcessorPlugin`s object-oriented (#1660) (44ed42d)
- `VisionCameraProxy` + `JFrame`) (#1661) (86dd703)
- `<regex>` header (0635d4a)
- `global.FrameProcessorPlugins` TS error (1f7a2e0)
- `device == null` error (f227a3e)
- `runAtTargetFps` for multiple invocations per FP (af4e366)
- `.so` libraries in package (ad5d64b)
- `jsi::Runtime`'s lifecycle (#1488) (0c3cd66)
- `global.expo.modules` for JSI expo modules (a1af891)
- `.m` (bc9c157)
- `make_shared` not working on `FrameHostObject` (1197df7)
- `not-determined` on Android (debe751)
- `Orientation` (c7e4756)
- `GrMTLHandle` import (390f48d)
- `JByteBuffer` (3a0d7b3)
- `FrameProcessorPlugins.ts` (c88605e)
- `build.gradle` (07ba0e1)
- `ByteBuffer` (86468e3)
- `pixelFormat` property on iOS (dfee3b1)
- `node_modules/` directory detection (66c012f)
- `toArrayBuffer()` (e036b31)
- `package.json` (#1728) (3b04757)

Published by mrousavy about 1 year ago
- `*NativeMap` and `*NativeArray` with `Map<K,V>` and `List<T>` for faster JSI -> JNI calls (#1720) (dfb86e1)

Published by mrousavy about 1 year ago
- `enableShutterSound` prop to `takePhoto()` (#1702) (a46839a)
- `enableZoomGesture` on Android (efe6556)
- `focus()` on Android (#1713) (23af74a)
- `ByteBuffer` for much faster `toArrayBuffer()` ⚡ (521d7c8)
- `GrMTLHandle` import (390f48d)
- `JByteBuffer` (3a0d7b3)
- `make_shared` not working on `FrameHostObject` (1197df7)
- `FrameProcessorPlugins.ts` (c88605e)
- `build.gradle` (07ba0e1)
- `ByteBuffer` (86468e3)
- `Failed to parse camera Id` error by ignoring non-integer cameras (#1428) (8833ac1)
- `HostTimeClock` as fallback if `masterClock` is `nil` (#1302) (b8527d7)
- `og:image` (1bd21a8)

Published by mrousavy about 1 year ago
This is the first RC for the fully rewritten Android codebase which now uses Camera2 instead of CameraX.
There might be a few bugs here and there to iron out, but overall the rewrite allows for much more flexibility, better performance, and new features in VisionCamera.
- `pixelFormat` property to Camera (df5718d)
- `ImageReader` and use YUV Image Buffers in Skia Context (#1689) (d38ba59)
- `make_shared` not working on `FrameHostObject` (1197df7)
- `not-determined` on Android (debe751)
- `Orientation` (c7e4756)

Published by mrousavy about 1 year ago
- `useSkiaFrameProcessor`
Old syntax:
```tsx
export function examplePlugin(frame: Frame) {
  'worklet'
  return VisionCameraPlugins.__examplePlugin(frame)
}
```
New syntax:
```tsx
const plugin = VisionCameraProxy.getFrameProcessorPlugin('example_plugin')

export function examplePlugin(frame: Frame) {
  'worklet'
  return plugin.call(frame)
}
```
And `getFrameProcessorPlugin` can also accept options, which arrive as an `NSDictionary` on iOS in the `init:` call.
- `namespace` in build.gradle) (7ae15af)
- `VisionCameraProxy` object, make `FrameProcessorPlugin`s object-oriented (#1660) (44ed42d)
- `CameraDevice` + `CameraFormat` detection using CameraX (#1495) (0d83a13)
- `VisionCameraProxy` + `JFrame`) (#1661) (86dd703)
- `device == null` error (f227a3e)
- `runAtTargetFps` for multiple invocations per FP (af4e366)
- `.so` libraries in package (ad5d64b)
- `jsi::Runtime`'s lifecycle (#1488) (0c3cd66)
- `.m` (bc9c157)
- `global.expo.modules` for JSI expo modules (a1af891)

Published by mrousavy about 1 year ago
- `og:image` (1bd21a8)