Self-hosted, local-only NVR and AI computer vision software. With features such as object detection, motion detection, face recognition and more, it gives you the power to keep an eye on your home, office, or any other place you want to monitor.
MIT License
Published by roflcoopter about 4 years ago
Breaking changes
- Config validation is stricter: Viseron will no longer start if you use `type: edgetpu` and have supplied an unsupported configuration option, such as `suppression`.
- If you have set a custom `discovery_prefix` under `mqtt`, you now have to move this under `home_assistant`, like this:

```yaml
mqtt:
  broker: <ip address or hostname of broker>
  port: <port the broker listens on>
  home_assistant:
    discovery_prefix: <yourcustomprefix>
```
Changes and new Features
- Face recognition! Configure it as a post processor on the labels you want to run it for:

```yaml
object_detection:
  labels:
    - label: person
      confidence: 0.8
      post_processor: face_recognition

post_processors:
  face_recognition:
```

- Binary sensors now have a `count` attribute which tells you the number of detected objects.
- Example of pointing `object_detection` at a different Darknet model:

```yaml
object_detection:
  model_path: /detectors/models/darknet/yolov3-tiny.weights
  model_config: /detectors/models/darknet/yolov3-tiny.cfg
```

- The image published with `publish_image: true` is now clearer.
- New config option `log_all_objects` under `object_detection`:

```yaml
object_detection:
  log_all_objects: true
```

If set to true, all found objects will be logged if loglevel is DEBUG (this is the behaviour today). The default is false, which will only log objects that pass the labels filters.
- New config option `ffmpeg_loglevel` under `cameras`. Use this to debug your camera decoding command.
- New config option `ffmpeg_recoverable_errors` under `cameras`. See the README for a detailed explanation.
Fixes
- Fix for using `publish_image: true` together with `trigger_detector: false`.
- Fixed `interval` under `motion_detection` and `object_detection` not allowing floats.
Docker images are available on Docker Hub
roflcoopter/viseron:1.6.0
roflcoopter/viseron-cuda:1.6.0
roflcoopter/viseron-vaapi:1.6.0
roflcoopter/viseron-rpi:1.6.0
Published by roflcoopter about 4 years ago
Changes and new Features
- New config option `log_all_objects` under `object_detection`:

```yaml
object_detection:
  log_all_objects: true
```

If set to true, all found objects will be logged if loglevel is DEBUG (this is the behaviour today). The default is false, which will only log objects that pass the labels filters.
Fixes
Docker images are available on Docker Hub
roflcoopter/viseron:1.6.0b2
roflcoopter/viseron-cuda:1.6.0b2
roflcoopter/viseron-vaapi:1.6.0b2
roflcoopter/viseron-rpi:1.6.0b2
Published by roflcoopter about 4 years ago
Breaking changes
- Config validation is stricter: Viseron will no longer start if you use `type: edgetpu` and have supplied an unsupported configuration option, such as `suppression`.
- If you have set a custom `discovery_prefix` under `mqtt`, you now have to move this under `home_assistant`, like this:

```yaml
mqtt:
  broker: <ip address or hostname of broker>
  port: <port the broker listens on>
  home_assistant:
    discovery_prefix: <yourcustomprefix>
```
Changes and new Features
- Face recognition! Configure it as a post processor on the labels you want to run it for:

```yaml
object_detection:
  labels:
    - label: person
      confidence: 0.8
      post_processor: face_recognition

post_processors:
  face_recognition:
```

- Binary sensors now have a `count` attribute which tells you the number of detected objects.
- Example of pointing `object_detection` at a different Darknet model:

```yaml
object_detection:
  model_path: /detectors/models/darknet/yolov3-tiny.weights
  model_config: /detectors/models/darknet/yolov3-tiny.cfg
```

- The image published with `publish_image: true` is now clearer.
Fixes
- Fix for using `publish_image: true` together with `trigger_detector: false`.
- Fixed `interval` under `motion_detection` and `object_detection` not allowing floats.
Docker images are available on Docker Hub
roflcoopter/viseron:1.6.0b1
roflcoopter/viseron-cuda:1.6.0b1
roflcoopter/viseron-vaapi:1.6.0b1
roflcoopter/viseron-rpi:1.6.0b1
Published by roflcoopter about 4 years ago
Lots of goodies in this one!
Breaking changes
- `area` for `motion_detection` is now a percentage-based value and needs to be changed from an int to a float. Since it is now relative, changing `width` and/or `height` won't affect `area`.
Changes and new Features
Masks can now be configured under each camera's `motion_detection` block.
Masks are used to stop motion detection from running in a specified area.
The configuration is similar to how zones are configured; here is an example:
```yaml
cameras:
  - name: name
    host: ip
    port: port
    path: /Streaming/Channels/101/
    motion_detection:
      area: 0.07
      mask:
        - points:
            - x: 0
              y: 0
            - x: 250
              y: 0
            - x: 250
              y: 250
            - x: 0
              y: 250
        - points:
            - x: 500
              y: 500
            - x: 1000
              y: 500
            - x: 1000
              y: 750
            - x: 300
              y: 750
```
The masks are drawn on the image published over MQTT. They have an orange border and a black background with 70% opacity.
A switch entity is now created in Home Assistant which can be used to arm/disarm a camera.
When disarmed, the decoder stops completely, so no system load is incurred.
Motion contours are now drawn on the image that is being published over MQTT.
Dark purple contours are smaller than the configured `area`, while bright pink contours are larger than the configured `area`.
New config option `max_timeout` under `motion_detection`, which specifies the max number of seconds that motion alone is allowed to keep the recorder going.
This is used to prevent never-ending recordings when motion detection is too sensitive.
New config option `rtsp_transport` under `cameras`. Change this if your camera doesn't support TCP.
VA-API is now installed in the CUDA image for use with ffmpeg decoding/encoding
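A hedged sketch of how the two new options above could sit in a camera config (the values and the `udp` choice are illustrative, not defaults; check the README for the exact schema):

```yaml
cameras:
  - name: name
    host: ip
    port: port
    path: /Streaming/Channels/101/
    rtsp_transport: udp  # illustrative; change only if your camera doesn't support tcp
    motion_detection:
      max_timeout: 30  # illustrative; motion alone keeps the recorder going for at most 30s
```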
Fixes
Docker images are available on Docker Hub
roflcoopter/viseron:latest
roflcoopter/viseron-cuda:latest
roflcoopter/viseron-vaapi:latest
roflcoopter/viseron-rpi:latest
Published by roflcoopter about 4 years ago
Changes and new Features
Zones are here! You can now configure zones for each camera and specify labels to track per zone.
Here is an example:
```yaml
cameras:
  - name: name
    host: ip
    port: port
    path: /Streaming/Channels/101/
    zones:
      - name: zone1
        points:
          - x: 0
            y: 500
          - x: 1920
            y: 500
          - x: 1920
            y: 1080
          - x: 0
            y: 1080
        labels:
          - label: person
            confidence: 0.9
      - name: zone2
        points:
          - x: 0
            y: 0
          - x: 500
            y: 0
          - x: 500
            y: 500
          - x: 0
            y: 500
        labels:
          - label: cat
            confidence: 0.5
```
A polygon will be drawn on the image using each point. At least 3 points have to be supplied.
If you are using Home Assistant, Viseron will publish an image to the camera entity over MQTT
with zones and objects drawn upon it.
The drawing and publishing takes some processing power so it should only be used for debugging and tuning.
A boatload of new binary sensors are now created, tracking objects and zones. Check out the README for a detailed explanation.
A `logging` block can now be entered per camera.
Logging from motion detection is now named per camera.
The logger for the recorder is now named per camera.
You can now set log level individually for motion_detection and object_detection, either globally or for each camera.
New config option `publish_image`.
If enabled, Viseron will publish an image to the MQTT camera entity with objects and zones drawn upon it.
You can now specify `width`, `height` and `fps` individually in the camera config.
New config option `triggers_recording` for labels. If set to false, only the binary sensors in MQTT will update, but no recording will start. This works on all labels configs (global object detector, per camera, or per zone).
Recorded videos will now be saved under a specific folder per camera.
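Putting several of the new per-camera options above together, a sketch (the `level` key under `logging` and all values here are assumptions to be checked against the README):

```yaml
cameras:
  - name: name
    host: ip
    port: port
    path: /Streaming/Channels/101/
    width: 1920
    height: 1080
    fps: 10
    publish_image: true  # publishes an annotated image to the MQTT camera entity
    logging:
      level: debug  # assumed key name; per-camera log level
    object_detection:
      labels:
        - label: person
          confidence: 0.9
          triggers_recording: false  # binary sensors update, but no recording starts
```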
Fixes
- Fix related to `timeout`.
Docker images are available on Docker Hub
roflcoopter/viseron
roflcoopter/viseron-cuda
roflcoopter/viseron-vaapi
roflcoopter/viseron-rpi
Published by roflcoopter about 4 years ago
Changes and new Features
- A `logging` block can now be entered per camera.
- New config option `publish_image`.
- You can now specify `width`, `height` and `fps` individually in the camera config.
Fixes
Docker images are available on Docker Hub
roflcoopter/viseron:dev
roflcoopter/viseron-cuda:dev
roflcoopter/viseron-vaapi:dev
roflcoopter/viseron-rpi:dev
Published by roflcoopter about 4 years ago
Changes and new Features
Zones are here! Functionality is somewhat limited at the moment, but I need some testers on this as it's quite a big refactor.
You can now configure zones for each camera and specify labels to track per zone.
This is not reflected in the documentation just yet, but here is an example:
```yaml
cameras:
  - name: name
    host: ip
    port: port
    path: /Streaming/Channels/101/
    zones:
      - name: zone1
        points:
          - x: 0
            y: 500
          - x: 1920
            y: 500
          - x: 1920
            y: 1080
          - x: 0
            y: 1080
        labels:
          - label: person
            confidence: 0.9
      - name: zone2
        points:
          - x: 0
            y: 0
          - x: 500
            y: 0
          - x: 500
            y: 500
          - x: 0
            y: 500
        labels:
          - label: cat
            confidence: 0.5
```
A polygon will be drawn on the image using each point. At least 3 points have to be supplied.
If you are using Home Assistant, Viseron will publish an image to the camera entity over MQTT
with zones and objects drawn upon it.
The drawing and publishing takes some processing power so in the coming beta releases this will be configurable.
A few new binary sensors will also be created, one for each zone and one for each label in the zone.
The zone binary sensor will turn on when at least one tracked object is in the zone.
The label binary sensor will turn on when at least one matching object is in the zone.
Docker images are available on Docker Hub
roflcoopter/viseron:dev
roflcoopter/viseron-cuda:dev
roflcoopter/viseron-vaapi:dev
roflcoopter/viseron-rpi:dev
Published by roflcoopter about 4 years ago
Changes and new Features
- You can now use MJPEG streams by adding `stream_format: mjpeg` to your camera configuration.
Fixes
- Fixed `interval` for object detector and motion detector. It now allows floats.
- Fix for `lookback: 0`.
- Fixed `codec` being overwritten by default value.
Docker images are available on Docker Hub
roflcoopter/viseron
roflcoopter/viseron-cuda
roflcoopter/viseron-vaapi
roflcoopter/viseron-rpi
Published by roflcoopter about 4 years ago
Changes and new Features
Fixes
Docker images are available on Docker Hub
roflcoopter/viseron
roflcoopter/viseron-cuda
roflcoopter/viseron-vaapi
roflcoopter/viseron-rpi
Published by roflcoopter about 4 years ago
Breaking changes
Object detection config has changed significantly.
You now specify confidence and min/max height/width per label, for example:
```yaml
labels:
  - label: person
    confidence: 0.9
  - label: truck
```
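A sketch of the per-label min/max filters mentioned above; the option names used here (`height_min`, `height_max`, `width_min`, `width_max`) are assumptions, so verify them against the README:

```yaml
labels:
  - label: person
    confidence: 0.9
    height_min: 0.1  # assumed option name; fraction of frame height
    height_max: 0.9  # assumed option name
  - label: truck
    width_min: 0.2   # assumed option name; fraction of frame width
    width_max: 1.0   # assumed option name
```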
Changes
Fixes
Docker images are available on Docker Hub
roflcoopter/viseron
roflcoopter/viseron-cuda
roflcoopter/viseron-vaapi
roflcoopter/viseron-rpi
Published by roflcoopter about 4 years ago
First release!
This is the first release of Viseron.
More features will be added as I go along.
Docker images are available on Docker Hub
roflcoopter/viseron
roflcoopter/viseron-cuda
roflcoopter/viseron-vaapi
roflcoopter/viseron-rpi
I hope you find this useful.