https://bugs.kde.org/show_bug.cgi?id=392518

--- Comment #7 from caulier.gil...@gmail.com ---
Another problem: the data stored in the database can be completely
different from one algorithm to another...

So when the algorithm is changed, the user must be asked whether to erase all
DB contents or to make a backup first.

Otherwise, the ultimate solution to prevent a mess in the DB due to repeated
settings changes is to retain only one algorithm: the best one (fewest false
positives, and fastest).

In summer 2017, the student working on deep learning told me by private mail
that deep learning is very promising, and better than the other algorithms
implemented. Of course this needs to be confirmed with a collection of face
images processed in unit tests (this part is missing).

I'm also not sure that the DNN algorithm backported into the digiKam core for
face deep learning is the best way. After all, this algorithm comes from a
separate library written in C/C++ that we could add as an external dependency:

https://github.com/davisking/dlib


Even if I don't agree with adding external dependencies to digiKam again and
again, which would grow back the puzzle that I'm trying to reduce, Dlib can be
interesting for other cases. For example, I'm not very impressed by the quality
of the OpenCV library, which is a monster: look at the possible configuration
cases and you will understand what I mean. This library wants to do everything,
but that cannot be done in the best conditions.

To summarize: if we can port all the OpenCV-based code to Dlib, or another
solution, let's go...

Note: I'm not in favor of a TensorFlow dependency, as it's Python stuff, which
is far slower than C/C++. I don't understand why this kind of "algorithm" is
written in Python. We need performance here, and scripting is not the best
solution so far...

https://github.com/tensorflow/tensorflow

VoilĂ , my viewpoints for the moment about face recognition...

Gilles Caulier
