Hi!

I've been performing some tests with KFold cross validation and
encountered a strange behavior:

>>> from sklearn import cross_validation
>>> list(cross_validation.KFold(14, 5, indices=True, shuffle=True,
...                             random_state=32))
[(array([13,  2, 12,  9,  1, 10,  4,  3,  8,  6,  5, 11]), array([0, 7])),
 (array([ 0, 13, 12,  9,  1, 10,  4,  3,  8,  6,  5,  7]), array([ 2, 11])),
 (array([ 0,  2, 12,  9,  1, 10,  4,  3,  6,  5, 11,  7]), array([13,  8])),
 (array([ 0, 13,  2, 12,  1, 10,  4,  3,  8,  5, 11,  7]), array([9, 6])),
 (array([ 0, 13,  2,  9,  8,  6, 11,  7]), array([12,  1, 10,  4,  3,  5]))]

Since I was performing a 5-fold cross validation on 14 examples, I would
have expected the first 4 folds to have 11 training indices and 3
testing indices and the last fold to have 12 training indices and 2
testing indices.
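To make the expectation concrete, here is a small sketch of the fold
sizes I had in mind, assuming the usual rule that each of the k folds
gets n // k test samples and the first n % k folds get one extra
(`expected_fold_sizes` is just a helper name for this email, not an
sklearn function):

```python
def expected_fold_sizes(n, k):
    # Each fold gets the base share n // k; the remainder n % k
    # is distributed one extra sample to the first folds.
    base, extra = divmod(n, k)
    return [base + 1 if i < extra else base for i in range(k)]

print(expected_fold_sizes(14, 5))
```

By that rule the test-set sizes should be [3, 3, 3, 3, 2], whereas the
output above gives [2, 2, 2, 2, 6].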

Does anyone have an explanation for this? Is this expected behavior?

Thanks and best regards,
Tadej


_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general