Hi Gael,

On Sun, Apr 29, 2012 at 10:28 PM, Gael Varoquaux <[email protected]> wrote:


>  It turns out that, for l2 penalizations, theory tells us that for
>  prediction consistency (i.e. that under given hypotheses, the estimator
>  learned predicts as well as a model knowing the true distribution) the
>  penalty parameter should be kept constant as the number of samples
>  grows.


Have you got any pointers to this theory?

The tricky part in your reasoning, IMHO, is that I suspect keeping the
penalty parameter constant is only justified when the assumption of
Gaussian priors on the weights really holds. That is not something that
can be generalized. So, while it can be a reasonable default, calling C
scaling a "bug" doesn't really convince me.
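
For what it's worth, the two conventions are easy to relate: scikit-learn's
SVMs minimize ||w||^2 / 2 + C * sum_i loss_i, which, dividing through by
C*n, matches the averaged objective (1/n) * sum_i loss_i + lambda * ||w||^2
with lambda = 1 / (2*C*n). So "C constant" and "lambda constant" really are
different scalings. One quick empirical probe, a minimal sketch of mine
(import paths per recent scikit-learn; the synthetic dataset is just for
illustration), is to grid-search the best C on nested subsamples and see
whether it stays put as n grows:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

# Synthetic problem; any fixed dataset would do for this check.
X, y = make_classification(n_samples=4000, n_features=50, random_state=0)
param_grid = {"C": np.logspace(-3, 3, 13)}

# Cross-validate the best C on nested subsamples of growing size.
for n in (250, 1000, 4000):
    search = GridSearchCV(LinearSVC(dual=False), param_grid, cv=5)
    search.fit(X[:n], y[:n])
    print("n=%d  best C=%g" % (n, search.best_params_["C"]))

If constant C were only justified under the Gaussian-prior assumption, one
would expect the best C to drift with n on data where that assumption is
badly violated.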

Ciao!

Paolo