Published by matiasdelellis 4 months ago
All notable changes to this project will be documented in this file.
Published by matiasdelellis 5 months ago
For more information about the model, see the official website:
Published by matiasdelellis 6 months ago
Published by matiasdelellis about 1 year ago
To the happiness of many (issues https://github.com/matiasdelellis/facerecognition/issues/690, https://github.com/matiasdelellis/facerecognition/issues/688, https://github.com/matiasdelellis/facerecognition/issues/687, https://github.com/matiasdelellis/facerecognition/issues/685, https://github.com/matiasdelellis/facerecognition/issues/649, https://github.com/matiasdelellis/facerecognition/issues/632, https://github.com/matiasdelellis/facerecognition/issues/627, https://github.com/matiasdelellis/facerecognition/issues/625, etc.), this release implements the Chinese Whispers clustering algorithm in native PHP. This simply means that we no longer depend on the pdlib extension, though it goes without saying that its use is still highly recommended.
So, the application can now be installed without pdlib or bzip2. But if you want to use models 1, 2, 3, or 4, you still have to rely on these extensions.
Do you insist on not installing them?
Then you must configure the external model and select model 5, and thus free yourself from these extensions.
Well, you will understand that it is slower; however, I must admit that with JIT enabled it is quite acceptable, and this is the only reason I decided to publish it.
I just added 2162 Big Bang Theory photos to my test server, resulting in 6059 faces, and clustered them with both implementations:
Dlib (reference):
User time (seconds): 10.53
Maximum resident set size (kbytes): 245412

PHP:
User time (seconds): 45.45
Maximum resident set size (kbytes): 266060
Time: 45.45 / 10.53 = 4.316239316
Memory: 266060 / 245412 = 1.084136065

PHP + JIT:
User time (seconds): 16.20
Maximum resident set size (kbytes): 283760
Time: 16.20 / 10.53 = 1.538461538
Memory: 283760 / 245412 = 1.156259678
So, as you can see, the PHP implementation takes about 4.3 times as long as dlib, but with JIT enabled it is only about 54 percent slower. I guess that's OK, and memory usage didn't increase much.
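The ratios quoted above can be reproduced with a few lines of Python; the figures are copied straight from the benchmark output:

```python
# Benchmark figures from the release notes: user time in seconds,
# maximum resident set size in kbytes.
dlib    = {"time": 10.53, "mem": 245412}  # reference (pdlib)
php     = {"time": 45.45, "mem": 266060}  # native PHP clustering
php_jit = {"time": 16.20, "mem": 283760}  # native PHP with JIT enabled

print(round(php["time"] / dlib["time"], 2))      # 4.32 -> ~4.3x the time
print(round(php["mem"] / dlib["mem"], 2))        # 1.08 -> ~8% more memory
print(round(php_jit["time"] / dlib["time"], 2))  # 1.54 -> ~54% slower
print(round(php_jit["mem"] / dlib["mem"], 2))    # 1.16 -> ~16% more memory
```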
Once again I insist on recommending the local models (with dlib), and I invite those who want to use the external model to give it a little love.
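For readers curious how Chinese Whispers groups faces, here is a minimal illustrative sketch in Python. This is not the app's code (the actual implementation is in PHP); it assumes the graph's edges connect pairs of face descriptors whose distance falls below a similarity threshold:

```python
import random

def chinese_whispers(edges, n_nodes, iterations=20, seed=0):
    """Cluster the nodes of an undirected graph.

    edges: list of (i, j) pairs meaning faces i and j are similar.
    Every node starts in its own class; on each pass, visited in
    random order, a node adopts the most frequent class among its
    neighbors. Connected groups quickly collapse to a single class.
    """
    rng = random.Random(seed)
    neighbors = [[] for _ in range(n_nodes)]
    for i, j in edges:
        neighbors[i].append(j)
        neighbors[j].append(i)
    labels = list(range(n_nodes))  # one class per node initially
    order = list(range(n_nodes))
    for _ in range(iterations):
        rng.shuffle(order)
        for node in order:
            if not neighbors[node]:
                continue  # isolated face stays in its own cluster
            counts = {}
            for nb in neighbors[node]:
                counts[labels[nb]] = counts.get(labels[nb], 0) + 1
            labels[node] = max(counts, key=counts.get)  # majority class
    return labels
```

Running it on two disconnected triangles yields two clusters: each triangle converges to one label, and the groups never merge.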
Published by matiasdelellis over 1 year ago
This is a version released with a bit of shame. I'm short on time and was hoping to do a little more before releasing it, but since Nextcloud published a new version before I had enabled the previous one, this release became necessary.
It is actually well tested on NC 26, though I'd like to improve some things soon. Not so on NC 27, where I hope to hear your reports.
Published by matiasdelellis over 1 year ago
About Imaginary: if it is installed correctly, it works automatically. However, you still have to select the types of files you want to read, so you must add this configuration to config/config.php:
'enabledFaceRecognitionMimetype' => array(
  0 => 'image/jpeg',
  1 => 'image/png',
  2 => 'image/heic',
  3 => 'image/tiff',
  4 => 'image/webp',
),
Finally, this release adds the --crawl-missing option to face:background_job, which forces a search for existing files of the newly allowed types.
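For example, after enabling new mimetypes you would run something like the following. The --crawl-missing flag comes from this release; the sudo user and working directory are assumptions that depend on your installation:

```shell
# From the Nextcloud root, as the web server user (often www-data):
sudo -u www-data php occ face:background_job --crawl-missing
```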
Published by matiasdelellis almost 2 years ago
Of course a photo is better than a thousand words.
Published by matiasdelellis almost 2 years ago
Published by matiasdelellis almost 2 years ago
Published by matiasdelellis about 2 years ago
Published by matiasdelellis over 2 years ago
Just a fix to support updating to NC 24...
Published by matiasdelellis almost 3 years ago
Absolutely all users must configure the new setting for the maximum memory used for image processing, for example:

occ face:setup --memory 2GB

See the documentation for occ face:setup --memory in the README.
Published by matiasdelellis almost 3 years ago
Published by matiasdelellis over 3 years ago
Published by matiasdelellis over 3 years ago
Published by matiasdelellis over 3 years ago
Note that this is a version with few features, made partially from contributions by our users. So, again: thank you very much for your contributions!
Published by matiasdelellis over 3 years ago
Published by matiasdelellis almost 4 years ago
You could read this as me calling this version a monster (Frankenstein?), but it is certainly a phrase of celebration: there was a lot of previous development that made it so easy to build an external model.
Then we can discuss privacy, since this model could theoretically run on Amazon or Google, losing some of the magic of the application (doing absolutely everything within our personal, secure, private Nextcloud instance). Maybe it is a monster in that sense, but throughout the development of the application I have met many people who keep their main storage outside their Nextcloud instance, already losing part of that grace. There are also other Nextcloud applications that work in a similar way, running external services to relieve the main server (e.g. LibreOffice Online, the Talk high-performance backend). And finally, many users run Nextcloud on small computers like the Raspberry Pi. Why prohibit them from using this application? Now they can run the analysis on their personal laptop or desktop quickly and safely.
On the other hand, this external model allows us to "open the game" (I'm not sure if that phrase is universally understood). Until now, I decided to trust the dlib models; I think they work very well, but they could obviously be improved. Would some people like to use TensorFlow, Darknet, OpenCV, etc.? OK, now they can implement their own model to improve quality, speed, and so on. I would love to see your results.
So, the benefits far outweigh any concerns, but be responsible.
[Screenshots: Person View | Photos of person | Person Integration | Assign Name]