Hello,

Since I am currently using NMF for my robotics research, I have an 
implementation of some algorithms from [1]_ that could extend 
scikit-learn's current implementation.

More precisely, the current implementation covers the Frobenius norm 
case (for measuring the reconstruction error) and the sparsity 
enforcement method introduced in [2]_. [1]_ describes algorithms that 
generalize to all beta-divergences (the Frobenius norm corresponds to 
beta = 2), as well as adaptations of some of these algorithms to L_1 
regularization (amongst other things).
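To make the family concrete, here is a small NumPy sketch of the 
beta-divergence itself (the function name is mine, just for 
illustration); beta = 2 gives half the squared Frobenius error, beta = 1 
the Kullback-Leibler divergence, and beta = 0 the Itakura-Saito 
divergence:

```python
import numpy as np

def beta_divergence(x, y, beta):
    """Beta-divergence d_beta(x | y), summed over all entries.

    Assumes strictly positive entries where the logarithms and
    negative powers require it (beta <= 1).
    """
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    if beta == 1:  # Kullback-Leibler
        return np.sum(x * np.log(x / y) - x + y)
    if beta == 0:  # Itakura-Saito
        return np.sum(x / y - np.log(x / y) - 1)
    # generic case, continuous in beta; beta = 2 gives 0.5 * ||x - y||_F^2
    return np.sum((x ** beta + (beta - 1) * y ** beta
                   - beta * x * y ** (beta - 1)) / (beta * (beta - 1)))
```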

My implementation covers algorithms for beta-divergence minimization 
based on three kinds of approaches presented in [1]_:
* gradient descent
* majorization-minimization (which leads to multiplicative updates)
* heuristic updates (which generalize, to all values of beta, the 
multiplicative updates commonly used for NMF with the Frobenius norm 
(beta = 2), Kullback-Leibler (beta = 1), and Itakura-Saito (beta = 0) 
divergences)
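The heuristic multiplicative updates above can be sketched in a few 
lines of NumPy (this is my own minimal illustration, not the code I 
would propose for scikit-learn; the function name and signature are 
assumptions):

```python
import numpy as np

def beta_nmf_mu(V, n_components, beta=2.0, n_iter=200, eps=1e-9, seed=0):
    """Factorize V ~ W @ H (nonnegative) with multiplicative updates
    for the beta-divergence, in the style of Fevotte & Idier.

    H <- H * (W^T [(WH)^(beta-2) * V]) / (W^T (WH)^(beta-1))
    and symmetrically for W; eps guards against division by zero.
    """
    rng = np.random.RandomState(seed)
    n, m = V.shape
    W = rng.rand(n, n_components) + eps
    H = rng.rand(n_components, m) + eps
    for _ in range(n_iter):
        WH = W @ H
        H *= (W.T @ ((WH ** (beta - 2)) * V)) / (W.T @ (WH ** (beta - 1)) + eps)
        WH = W @ H
        W *= (((WH ** (beta - 2)) * V) @ H.T) / ((WH ** (beta - 1)) @ H.T + eps)
    return W, H
```

For beta = 2 this reduces to the familiar Lee-Seung updates 
H <- H * (W^T V) / (W^T W H) for the Frobenius case.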

I was planning to integrate my code as a BetaNMF class in 
'sklearn/decomposition/nmf.py', as an alternative to the existing 
ProjectedGradientNMF, and to expose the various algorithms through 
arguments to the fit and transform methods (I think the default should 
be the heuristic multiplicative updates).
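Roughly, the interface I have in mind would look like the sketch below 
(every name and parameter here is a tentative proposal, not existing 
scikit-learn API; only the multiplicative-update branch is shown):

```python
import numpy as np

class BetaNMF:
    """Tentative sketch of the proposed estimator interface."""

    def __init__(self, n_components=2, beta=2.0, update="mu",
                 n_iter=200, eps=1e-9, random_state=0):
        self.n_components = n_components
        self.beta = beta
        self.update = update  # placeholder for 'mu' / 'gradient' / 'heuristic'
        self.n_iter = n_iter
        self.eps = eps
        self.random_state = random_state

    def fit_transform(self, V):
        """Learn components_ (H) and return the activations (W)."""
        rng = np.random.RandomState(self.random_state)
        n, m = V.shape
        b, eps = self.beta, self.eps
        W = rng.rand(n, self.n_components) + eps
        H = rng.rand(self.n_components, m) + eps
        for _ in range(self.n_iter):
            WH = W @ H
            H *= (W.T @ ((WH ** (b - 2)) * V)) / (W.T @ (WH ** (b - 1)) + eps)
            WH = W @ H
            W *= (((WH ** (b - 2)) * V) @ H.T) / ((WH ** (b - 1)) @ H.T + eps)
        self.components_ = H
        return W
```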

Is this duplicated work? Is it the right way to do it?

Thanks,

Olivier Mangin


.. [1] Févotte, C., & Idier, J. (2011). Algorithms for nonnegative 
matrix factorization with the β-divergence. Neural Computation, 23(9), 
2421-2456. doi:http://dx.doi.org/10.1162/NECO_a_00168
.. [2] Hoyer, P. O. (2004). Non-negative Matrix Factorization with 
Sparseness Constraints. Journal of Machine Learning Research, 5, 1457-1469.
