>
>
> you can get perfect misclassification if you have no signal and the
> classifier is susceptible to class imbalance (e.g. statistics-based
> classifiers such as GNB, LDA etc. wouldn't care as much; SVM would),
> and you have, e.g., a perfectly balanced dataset and then do
> leave-one-sample-out cross-validation.  That way the training set has
> a slight imbalance toward one class, which the classifier chooses to
> assign to any testing data, while the testing sample carries the label
> of the opposite class -- perfect misclassification.
>
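
A minimal sketch of that effect, using scikit-learn rather than PyMVPA
(the sample counts, feature count, and random seed are arbitrary
assumptions): pure-noise features, perfectly balanced labels, a linear
SVM, and leave-one-sample-out cross-validation.

import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.RandomState(0)
n_per_class = 10
X = rng.randn(2 * n_per_class, 5)      # pure noise -- no signal at all
y = np.repeat([0, 1], n_per_class)     # perfectly balanced labels

# Leaving one sample out yields training folds of 10 vs 9 samples: the
# majority class is always the opposite of the held-out label, so an
# imbalance-sensitive classifier tends to predict that majority class
# and score below the 0.5 chance level.
scores = cross_val_score(SVC(kernel='linear'), X, y, cv=LeaveOneOut())
print("mean LOO accuracy: %.2f" % scores.mean())

With no signal in the features, the mean accuracy tends to land below
0.5 rather than at chance, which is exactly the bias described above.
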
> but there were CS papers about which special layouts of data points
> could lead to misclassification.  Someone would need to search the
> history of the list here ;)
>
> what we see in reality at times (as was also reported on the list) is
> some bias toward misclassification.  Sometimes it gets avoided by
> changing the partitioning or preprocessing, without clearly grasping
> what led to it in the first place ;)
>
> fun examples of misclassification are when samples look like XOR, or
> any tiling of that:
> XOX...
> OXO...
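
A quick sketch of such an XOR layout (again scikit-learn; the cluster
centers, noise level, and kernels are illustrative assumptions): no
single linear boundary can separate a checkerboard, while a nonlinear
kernel handles it easily.

import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
# four tight clusters on a checkerboard: opposite corners share a label
centers = np.array([[0., 0.], [1., 1.], [1., 0.], [0., 1.]])
labels = np.array([0, 0, 1, 1])
X = np.vstack([c + 0.05 * rng.randn(20, 2) for c in centers])
y = np.repeat(labels, 20)

# the linear SVM stays far from perfect on the XOR layout, whereas the
# RBF kernel separates the four clusters without trouble
print("linear SVM: %.2f" % SVC(kernel='linear').fit(X, y).score(X, y))
print("RBF SVM:    %.2f" % SVC(kernel='rbf').fit(X, y).score(X, y))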