> Well, it's a tradeoff: a good reimplementation that would approach the
> original in terms of performance is a lot of work. For it to be
> sustainable, the team would have to grow a fair amount.
It is a lot of work but the bindings have caused us lots of problems
so far (memory leaks, sign switches, ...)
> I say this as someone who probably won't be responsible for doing the
> work, so feel free to ignore...
> Shouldn't reimplementation be a long term goal for such dependencies?
> This would make it the sklearn way, allowing easier/better changes in
> the future.
Well, it's a tradeoff: a good reimplementation that would approach the
original in terms of performance is a lot of work. For it to be
sustainable, the team would have to grow a fair amount.
On 16 February 2012 03:24, Olivier Grisel wrote:
> 2012/2/15 Ian Goodfellow :
> > Indeed, in Coates' code the bias term is not penalized.
> > Is there any way to turn off the bias penalty in liblinear?
>
> Nope. It has been debated before and apparently upstream finds intercept
> regularization a reasonable thing to do :)
2012/2/15 Ian Goodfellow :
> Indeed, in Coates' code the bias term is not penalized.
> Is there any way to turn off the bias penalty in liblinear?
Nope. It has been debated before and apparently upstream finds intercept
regularization a reasonable thing to do :) Forking liblinear in
scikit-learn to
Indeed, in Coates' code the bias term is not penalized.
Is there any way to turn off the bias penalty in liblinear?
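For readers following along, here is what "not penalizing the bias" means concretely. This is a toy subgradient-descent linear SVM, a sketch of the formulation rather than liblinear's actual solver: the L2 penalty is taken over the weight vector `w` only, and the intercept `b` is updated without shrinkage (the `penalize_bias` flag shows what an intercept penalty would add). All names are illustrative.

```python
# Sketch only: the L2 penalty covers w, not the intercept b, unless
# penalize_bias=True (which adds the intercept-regularization term
# discussed in the thread).
import numpy as np

def fit_linear_svm(X, y, C=1.0, n_iter=2000, penalize_bias=False):
    """Minimize 0.5*||w||^2 [+ 0.5*b^2 if penalize_bias] + C*sum(hinge)."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    w_sum, b_sum = np.zeros(d), 0.0
    for t in range(n_iter):
        step = 0.1 / np.sqrt(t + 1)            # diminishing step size
        active = y * (X @ w + b) < 1           # margin violators
        grad_w = w - C * (y[active, None] * X[active]).sum(axis=0)
        grad_b = -C * y[active].sum()          # no shrinkage on b by default
        if penalize_bias:                      # what an intercept penalty adds
            grad_b += b
        w -= step * grad_w
        b -= step * grad_b
        if t >= n_iter // 2:                   # average the later iterates
            w_sum += w
            b_sum += b
    k = n_iter - n_iter // 2
    return w_sum / k, b_sum / k

# Linearly separable toy data whose separating line is far from the origin,
# so a good fit genuinely needs a large (unpenalized) intercept.
X = np.array([[2., 2.], [3., 3.], [4., 5.],
              [8., 8.], [9., 9.], [10., 11.]])
y = np.array([1, 1, 1, -1, -1, -1])
w, b = fit_linear_svm(X, y)
```

With `penalize_bias=True` the fitted intercept is pulled toward zero, which is exactly the behavior being debated for data that is not centered.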
On Wed, Feb 15, 2012 at 3:57 AM, Paolo Losi wrote:
>
>
> On Wed, Feb 15, 2012 at 8:26 AM, Olivier Grisel
> wrote:
>>
>> 2012/2/15 Ian Goodfellow :
>> > Further update: I talked to Adam Coates and his code doesn't implement
>> > a standard SVM.
>>
>> sklearn.svm.enable_libsvm_stdout(True)
+1 too
A
On 2012-02-15, at 3:16 AM, Olivier Grisel wrote:
> sklearn.svm.enable_libsvm_stdout(True)
>
> WDYT?
I'm +1 for what it's worth.
David
On 2012-02-15, at 2:20 AM, Mathieu Blondel wrote:
> git bisect tells me that the regression was introduced in:
> https://github.com/scikit-learn/scikit-learn/commit/658897497399147a78fad5f7001fc62dd1e487ed
Wow, that was quick. Thanks, Mathieu!
On Wed, Feb 15, 2012 at 8:26 AM, Olivier Grisel wrote:
> 2012/2/15 Ian Goodfellow :
> > Further update: I talked to Adam Coates and his code doesn't implement
> > a standard SVM. Instead it's an "L2 SVM" which squares all the slack
> > variables. So this probably explains the difference in performance.
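The "L2 SVM" mentioned above is the squared-hinge formulation: squaring the slack variables amounts to squaring the hinge loss, so margin violations are penalized quadratically (and the loss becomes differentiable at the hinge point). A minimal illustration, with names of my own choosing rather than anything from Coates' code:

```python
# Standard hinge loss vs. the squared hinge ("L2 SVM") loss,
# evaluated on a few margin values m = y * f(x).
import numpy as np

def hinge(margins):
    return np.maximum(0.0, 1.0 - margins)

def squared_hinge(margins):            # the "L2 SVM" loss: slack squared
    return np.maximum(0.0, 1.0 - margins) ** 2

m = np.array([-1.0, 0.0, 0.5, 1.0, 2.0])
hinge(m).tolist()          # [2.0, 1.0, 0.5, 0.0, 0.0]
squared_hinge(m).tolist()  # [4.0, 1.0, 0.25, 0.0, 0.0]
```

Note how the squared hinge punishes a badly misclassified point (m = -1) twice as hard as the standard hinge, while being gentler on points just inside the margin, which can noticeably change the fitted model.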
2012/2/15 Alexandre Gramfort :
>> Fabian: any idea on how to do that?
>
> use :
>
>
> libsvm.set_verbosity_wrap(1)
> libsvm_sparse.set_verbosity_wrap(1)
Thanks.
> in svm/base.py
>
> maybe we could add a verbose flag to SVM estimators.
Unfortunately we cannot make it a per-estimator parameter as the libsvm
verbosity setting is global to the whole process.
> Fabian: any idea on how to do that?
use :
libsvm.set_verbosity_wrap(1)
libsvm_sparse.set_verbosity_wrap(1)
in svm/base.py
maybe we could add a verbose flag to SVM estimators.
Alex
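The reason a per-estimator flag is awkward is the pattern above: `libsvm.set_verbosity_wrap` flips one process-global switch, so every estimator sharing the wrapper sees the same setting. A hypothetical sketch of that pattern, mirroring the proposed `sklearn.svm.enable_libsvm_stdout` toggle (names below are illustrative, not the actual scikit-learn API):

```python
# Sketch of a process-global verbosity switch, like the one the libsvm
# wrapper exposes. A module-level toggle function is the honest interface;
# a per-estimator parameter would silently affect every other estimator.

_VERBOSE = False   # module-level state shared by all "estimators"

def enable_stdout(flag=True):
    """Flip the single global verbosity switch (cf. set_verbosity_wrap)."""
    global _VERBOSE
    _VERBOSE = flag

def svm_fit(n_iter=3):
    """Stand-in for an SVM fit; emits progress only if the switch is on."""
    lines = []
    for i in range(n_iter):
        if _VERBOSE:
            lines.append(f"iter {i}: optimizing...")
    return lines

enable_stdout(True)
out = svm_fit()    # three progress lines while the switch is on
```

Two estimators fit concurrently could not have different verbosity under this design, which is why a module-level `enable_...` function reflects the real semantics better than an estimator parameter would.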