The results I get from DPGMM are not what I expect. For example:
>>> import sklearn.mixture
>>> sklearn.__version__
'0.12-git'
>>> data = [[1.1],[0.9],[1.0],[1.2],[1.0], [6.0],[6.1],[6.1]]
>>> m = sklearn.mixture.DPGMM(n_components=5, n_iter=1000, alpha=1)
>>> m.fit(data)
DPGMM(alpha=1, covariance_type='diag', init_params='wmc', min_covar=None,
      n_components=5, n_iter=1000, params='wmc',
      random_state=<mtrand.RandomState object at 0x108a3f168>, thresh=0.01,
      verbose=False)
>>> m.converged_
True
>>> m.weights_
array([ 0.2, 0.2, 0.2, 0.2, 0.2])
>>> m.means_
array([[ 0.62019109],
       [ 1.16867356],
       [ 0.55713292],
       [ 0.36860511],
       [ 0.17886128]])
I expected the result to be more similar to the vanilla GMM; that is, two
Gaussians (with means near 1 and 6) and non-uniform weights (roughly
[0.625, 0.375]). I expected the "unused" components to have weights near zero.
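For reference, this is the vanilla fit I am comparing against (a sketch of
what I ran, output not pasted here):

import sklearn.mixture

data = [[1.1], [0.9], [1.0], [1.2], [1.0], [6.0], [6.1], [6.1]]

# Ordinary finite mixture with the "right" number of components;
# on this data it finds means near 1 and 6 and weights near
# [0.625, 0.375] (5 of the 8 points sit near 1, 3 sit near 6).
g = sklearn.mixture.GMM(n_components=2, n_iter=1000)
g.fit(data)
print(g.weights_)
print(g.means_)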
Am I using the model incorrectly?
I've also tried changing alpha, without any luck.
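Concretely, the alpha sweep looked roughly like this (a sketch; the
particular alpha values are just examples):

import sklearn.mixture

data = [[1.1], [0.9], [1.0], [1.2], [1.0], [6.0], [6.1], [6.1]]

# Try alpha across several orders of magnitude and inspect the
# mixing weights; in every case they come back near-uniform.
for alpha in [0.01, 0.1, 1.0, 10.0, 100.0]:
    m = sklearn.mixture.DPGMM(n_components=5, n_iter=1000, alpha=alpha)
    m.fit(data)
    print(alpha, m.weights_)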
I've also tried a different dataset in a smaller range, also with no luck:
[[0.1], [0.2], [0.15], [0.112], [0.13], [0.8], [0.85], [0.79]]
Thanks,
Aron