Client program which retrieves frames from the camera, detects and tracks persons, and sends the extracted features to the MQTT broker.
GPL-3.0 License
After cloning the project, run the update script to keep the project up to date.
chmod +x update.sh
./update.sh
cd build/
./ClientCamera
The data are saved in a subfolder at the same level as the root folder (to ensure communication with the other programs). The path structure must look like this:
Moreover, the ClientCamera folder must contain a video.yml file indicating the URL of the camera or the video files to load.
Here is an example of video.yml:
%YAML:1.0
videoNames:
- '/home/user/Data/Recordings/00_Vid_1.mp4'
- '/home/user/Data/Recordings/00_Vid_2.mp4'
- 'http://login:[email protected]/axis-cgi/mjpg/video.cgi?;type=.mjpg'
trackingHog: 1
recordingVid: 0
recordingTrace: 1
hideGui: 0
lifeTime: 800
Use the hideGui option for the release version (record in the background without displaying anything). lifeTime represents the number of minutes before the program automatically shuts down. trackingHog indicates whether the program should use HOG to detect pedestrians instead of a simple background subtraction. recordingVid simply records the video as it is shown on screen.
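The `%YAML:1.0` header suggests video.yml is written in OpenCV's cv::FileStorage format, which the program presumably reads directly. Purely as an illustration of the fields above, here is a minimal plain-C++ reader sketch (the struct and function names are hypothetical, and the parsing is deliberately simplified to the exact layout shown above):

```cpp
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical settings holder mirroring the video.yml fields shown above.
struct Settings {
    std::vector<std::string> videoNames;  // camera URLs or video file paths
    std::map<std::string, int> options;   // trackingHog, recordingVid, lifeTime, ...
};

// Minimal sketch of a reader for the layout shown above (not a general YAML parser).
Settings loadSettings(std::istream& in) {
    Settings s;
    std::string line;
    while (std::getline(in, line)) {
        if (line.rfind("- '", 0) == 0) {
            // List entry of the form: - 'path or URL'
            s.videoNames.push_back(line.substr(3, line.size() - 4));
        } else {
            // Scalar entry of the form: key: value (skip the %YAML header)
            auto colon = line.find(':');
            if (colon == std::string::npos || line.rfind("%", 0) == 0) continue;
            std::string key = line.substr(0, colon);
            std::istringstream val(line.substr(colon + 1));
            int v;
            if (val >> v) s.options[key] = v;
        }
    }
    return s;
}
```

In the real program the equivalent would most likely be a few `fs["key"] >> var;` reads on a `cv::FileStorage`.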
Warning: if recordingTrace is enabled, all previously recorded content inside the Traces/ directory is removed when the program launches and replaced by the new recording.
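A sketch of what that launch-time cleanup might look like with C++17's std::filesystem (hypothetical helper name; the actual program may implement this differently):

```cpp
#include <filesystem>

// Sketch: drop any previous recordings and start with an empty Traces/ directory,
// as happens when recordingTrace is enabled.
void clearTraces(const std::filesystem::path& dir) {
    std::filesystem::remove_all(dir);        // remove the directory and its contents
    std::filesystem::create_directory(dir);  // recreate it empty for the new recording
}
```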
Each detected person is recorded if it has been tracked long enough (at least X frames). The list of all recorded sequences is contained in the traces.txt file in the Traces/ folder.
Each recorded sequence is identified by an id of the form XX_YY, where XX is the client number id (found in the ClientOnlineCamera/ directory) and YY is the sequence number for that particular client. This means that each sequence is guaranteed to have a unique id even if the cameras are divided among multiple clients. Each sequence contains the following elements:
You can manually label the sequences (in order to train or evaluate the re-identification algorithm) by adding a label (without spaces!) to the sequence id in the output traces.txt file. For example:
----- 0_1 -----
0_1_cam
0_1_pos
----- 0_9 -----
0_9_cam
0_9_pos
----- 0_11 -----
0_11_cam
0_11_pos
...
Could become:
----- 0_1:Sophia -----
0_1_cam
0_1_pos
----- 0_9:Kotono -----
0_9_cam
0_9_pos
----- 0_11:Sophia -----
0_11_cam
0_11_pos
...
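As a sketch, the header lines of traces.txt, labeled or not, can be parsed as follows (hypothetical helper, assuming the exact "----- id[:label] -----" layout shown above):

```cpp
#include <sstream>
#include <string>
#include <vector>

// One "----- 0_9:Kotono -----" header from traces.txt.
struct SequenceEntry {
    std::string id;     // e.g. "0_9" (client number, '_', sequence number)
    std::string label;  // e.g. "Kotono"; empty if the sequence is unlabeled
};

// Sketch: collect the sequence ids and their optional manual labels,
// skipping the per-sequence content lines (0_9_cam, 0_9_pos, ...).
std::vector<SequenceEntry> parseTraceHeaders(std::istream& in) {
    std::vector<SequenceEntry> entries;
    std::string line;
    while (std::getline(in, line)) {
        if (line.rfind("----- ", 0) != 0) continue;          // keep only header lines
        std::string token = line.substr(6, line.size() - 12); // strip "----- " and " -----"
        auto colon = token.find(':');
        if (colon == std::string::npos)
            entries.push_back({token, ""});
        else
            entries.push_back({token.substr(0, colon), token.substr(colon + 1)});
    }
    return entries;
}
```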
The labeling process can be computer-assisted by using the NetworkVisualizer.
There is an optional calibration step for tracking the path of the different sequences. Here is how to use it. It requires an additional calibration/ subfolder at the same level as the src/ and build/ directories. This folder has to contain the following images:
The program automatically computes the correspondence between positions in the camera view and positions on the map thanks to the calibration points. The result is saved (if recording is enabled) in the Traces/ folder as map.png. The 4 calibration points have to be exactly one pixel of these colors: red (255,0,0), magenta (255,0,255), yellow (255,255,0), green (0,255,0). Of course, corresponding positions on the map and in the camera view must have the same color.
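Locating a calibration point amounts to finding the single pixel with an exact RGB value. A minimal sketch, assuming a packed RGB buffer (the function name and image representation are hypothetical; the real program presumably works on a cv::Mat):

```cpp
#include <array>
#include <cstdint>
#include <optional>
#include <vector>

struct Pixel { int x, y; };

// Sketch: scan a packed 8-bit RGB image for the first pixel that exactly
// matches the given calibration color, e.g. {255, 0, 0} for red.
std::optional<Pixel> findCalibrationPoint(const std::vector<uint8_t>& rgb,
                                          int width, int height,
                                          std::array<uint8_t, 3> color) {
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const uint8_t* p = &rgb[3 * (y * width + x)];
            if (p[0] == color[0] && p[1] == color[1] && p[2] == color[2])
                return Pixel{x, y};
        }
    }
    return std::nullopt;  // color not present in the image
}
```

Running this once per color (red, magenta, yellow, green) on both the camera calibration image and the map image yields four point pairs, from which a perspective transform (e.g. via OpenCV's cv::getPerspectiveTransform) can map camera coordinates to map coordinates.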