On Fri, Feb 17, 2012 at 3:53 PM, Andreas wrote:
> ASSET: Approximate Stochastic Subgradient Estimation Training for
> Support Vector Machines, by Sangkyun Lee and Stephen J. Wright.
> Code is available online but I haven't tried it yet.
> I think I gave this reference before when we were talking about GSoC.
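Since not everyone may know the reference: below is a minimal, purely
illustrative Pegasos-style stochastic subgradient sketch for a linear SVM.
It is not ASSET itself (the paper adds approximation tricks on top of this
basic scheme); it only shows the kind of update such solvers perform.

    import numpy as np

    def pegasos_svm(X, y, lam=0.01, n_iter=10000, seed=0):
        """Pegasos-style stochastic subgradient descent for a linear SVM.

        Minimizes lam/2 * ||w||^2 + mean(hinge(y_i * w.x_i)); y in {-1, +1}.
        """
        rng = np.random.RandomState(seed)
        n_samples, n_features = X.shape
        w = np.zeros(n_features)
        for t in range(1, n_iter + 1):
            i = rng.randint(n_samples)     # sample one training point
            eta = 1.0 / (lam * t)          # standard Pegasos step size
            margin = y[i] * X[i].dot(w)
            w *= 1.0 - eta * lam           # shrinkage from the L2 penalty
            if margin < 1:                 # hinge subgradient is active
                w += eta * y[i] * X[i]
        return w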
On 02/17/2012 01:35 PM, Olivier Grisel wrote:
> 2012/2/17 Andreas:
>>> With regards to LaSVM, I would rather pitch a summer of code project as
>>> having a good on-line SVM solver, that would incorporate the core ideas
>>> from LaSVM, but that would also be useable in a real out-of-core setting.
On 02/15/2012 02:04 AM, Ian Goodfellow wrote:
> Further update: I talked to Adam Coates and his code doesn't implement
> a standard SVM. Instead it's an "L2 SVM" which squares all the slack
> variables. So this probably explains the difference in performance I
> observed prior to building this test case.
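For anyone who wants to reproduce the comparison inside scikit-learn
itself: LinearSVC can fit both objectives, with the loss parameter
selecting the plain hinge (linear slack penalty) versus the squared hinge
(Coates' "L2 SVM"). In the current API the values are spelled "hinge" and
"squared_hinge"; if I remember right they were "l1" and "l2" at the time
of this thread.

    from sklearn.datasets import make_classification
    from sklearn.svm import LinearSVC

    X, y = make_classification(n_samples=500, random_state=0)

    # Standard SVM: linear penalty on the slack variables (hinge loss).
    clf_hinge = LinearSVC(loss="hinge", C=1.0).fit(X, y)

    # "L2 SVM" in Coates' sense: squared slack variables (squared hinge).
    clf_sq = LinearSVC(loss="squared_hinge", C=1.0).fit(X, y)

    print(clf_hinge.score(X, y), clf_sq.score(X, y))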
2012/2/17 Andreas :
>
>> With regards to LaSVM, I would rather pitch a summer of code project as
>> having a good on-line SVM solver, that would incorporate the core ideas
>> from LaSVM, but that would also be useable in a real out-of-core setting.
>> I believe that doing this right, including worrying about the parameter
> With regards to LaSVM, I would rather pitch a summer of code project as
> having a good on-line SVM solver, that would incorporate the core ideas
> from LaSVM, but that would also be useable in a real out-of-core setting.
> I believe that doing this right, including worrying about the parameter
To pitch in on this conversation, I do agree with Olivier that
reimplementing libsvm or liblinear should not be our priority, as we have
them and they work reasonably well. I agree with David that we would
need to garner more credibility (and, I would add, more manpower) to
sustain such an effort.
I am
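As a concrete baseline for what "real out-of-core" can already mean with
what we ship: a minimal sketch with SGDClassifier, whose hinge loss gives
a linear SVM and whose partial_fit consumes minibatches. The minibatch
generator below is a made-up stand-in for streaming chunks from disk;
LaSVM's main advantage over this baseline would be kernel support.

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    def iter_minibatches(n_batches=100, batch_size=200, seed=0):
        """Stand-in batch source; in practice, stream chunks from disk."""
        rng = np.random.RandomState(seed)
        for _ in range(n_batches):
            X = rng.randn(batch_size, 20)
            y = (X[:, 0] + 0.1 * rng.randn(batch_size) > 0).astype(int)
            yield X, y

    clf = SGDClassifier(loss="hinge", alpha=1e-4)  # hinge ~ linear SVM
    classes = np.array([0, 1])  # all labels must be declared up front
    for X_batch, y_batch in iter_minibatches():
        clf.partial_fit(X_batch, y_batch, classes=classes)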
2012/2/16 Mathieu Blondel :
> On Thu, Feb 16, 2012 at 7:50 PM, Olivier Grisel
> wrote:
>
>> Agreed but personally I would rather see such a GSoC student spend
>> time on writing a cython version of the LaSVM algorithm which looks
>> much more scalable than libsvm:
>
> I definitely want LaSVM in scikit-learn but IMO it's too small of a
On Thu, Feb 16, 2012 at 7:50 PM, Olivier Grisel
wrote:
> Agreed but personally I would rather see such a GSoC student spend
> time on writing a cython version of the LaSVM algorithm which looks
> much more scalable than libsvm:
I definitely want LaSVM in scikit-learn but IMO it's too small of a
2012/2/16 Mathieu Blondel :
> On Thu, Feb 16, 2012 at 3:43 PM, David Warde-Farley
> wrote:
>
>> I would stress that such an implementation would have to be extremely well
>> tested and checked against libsvm/liblinear in all of the relevant cases
>> (obviously, can't check against a float32 case if liblinear doesn't
>> support float32)
On Thu, Feb 16, 2012 at 3:43 PM, David Warde-Farley
wrote:
> I would stress that such an implementation would have to be extremely well
> tested and checked against libsvm/liblinear in all of the relevant cases
> (obviously, can't check against a float32 case if liblinear doesn't support
> float32)
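Something like the following is the shape of test I would imagine, with
new_solver_fit as a purely hypothetical stand-in for the reimplementation;
the tolerances would need care, not least because liblinear also
regularizes the intercept.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import LinearSVC

    def new_solver_fit(X, y, C):
        """Hypothetical stand-in for the reimplemented solver under test."""
        raise NotImplementedError

    def test_matches_liblinear():
        X, y = make_classification(n_samples=300, n_features=10,
                                   random_state=0)
        for C in (0.01, 1.0, 100.0):
            # Reference fit with a very tight stopping criterion.
            ref = LinearSVC(C=C, tol=1e-8, max_iter=100000).fit(X, y)
            w, b = new_solver_fit(X, y, C=C)
            np.testing.assert_allclose(w, ref.coef_.ravel(), rtol=1e-3)
            np.testing.assert_allclose(b, ref.intercept_, rtol=1e-3)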
On 2012-02-16, at 2:03 AM, Mathieu Blondel wrote:
>> Well, it's a tradeoff: a good reimplementation that would approach the
>> original in terms of performance is a lot of work. For it to be
>> sustainable, the team would have to grow a fair amount.
>
> It is a lot of work but the bindings have caused us lots of problems
> so far (memory leaks, sign switching
> Well, it's a tradeoff: a good reimplementation that would approach the
> original in terms of performance is a lot of work. For it to be
> sustainable, the team would have to grow a fair amount.
It is a lot of work but the bindings have caused us lots of problems
so far (memory leaks, sign switching
> I say this as someone who probably won't be responsible for doing the
> work, so feel free to ignore...
> Shouldn't reimplementation be a long term goal for such dependencies?
> This would make it the sklearn way, allowing easier/better changes in
> the future.
Well, it's a tradeoff: a good reimplementation that would approach the
original in terms of performance is a lot of work. For it to be
sustainable, the team would have to grow a fair amount.
On 16 February 2012 03:24, Olivier Grisel wrote:
> 2012/2/15 Ian Goodfellow :
> > Indeed, in Coates' code the bias term is not penalized.
> > Is there any way to turn off the bias penalty in liblinear?
>
> Nope. It has been debated before and apparently upstream finds intercept
> regularization a reasonable thing to do :)
2012/2/15 Ian Goodfellow :
> Indeed, in Coates' code the bias term is not penalized.
> Is there any way to turn off the bias penalty in liblinear?
Nope. It has been debated before and apparently upstream finds intercept
regularization a reasonable thing to do :) Forking liblinear in
scikit-learn to
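For the record, there is a documented workaround in scikit-learn:
liblinear implements the intercept as one extra regularized coefficient on
a constant feature whose value is intercept_scaling, so raising that value
approximately unpenalizes the bias. A sketch:

    from sklearn.datasets import make_classification
    from sklearn.svm import LinearSVC

    X, y = make_classification(n_samples=500, random_state=0)

    # liblinear appends a constant feature equal to intercept_scaling and
    # regularizes its weight like any other.  A large value makes that
    # regularization nearly irrelevant, approximating an unpenalized bias.
    clf = LinearSVC(C=1.0, intercept_scaling=100.0).fit(X, y)
    print(clf.intercept_)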
Indeed, in Coates' code the bias term is not penalized.
Is there any way to turn off the bias penalty in liblinear?
On Wed, Feb 15, 2012 at 3:57 AM, Paolo Losi wrote:
>
>
> On Wed, Feb 15, 2012 at 8:26 AM, Olivier Grisel
> wrote:
>>
>> 2012/2/15 Ian Goodfellow :
>> > Further update: I talked to Adam Coates and his code doesn't implement
>> > a standard SVM. Instead it's an "L2 SVM" which squares all the slack
>> > variables.
On Wed, Feb 15, 2012 at 8:26 AM, Olivier Grisel wrote:
> 2012/2/15 Ian Goodfellow :
> > Further update: I talked to Adam Coates and his code doesn't implement
> > a standard SVM. Instead it's an "L2 SVM" which squares all the slack
> > variables. So this probably explains the difference in performance I
> > observed prior to building this test case.
2012/2/15 Ian Goodfellow :
> Further update: I talked to Adam Coates and his code doesn't implement
> a standard SVM. Instead it's an "L2 SVM" which squares all the slack
> variables. So this probably explains the difference in performance I
> observed prior to building this test case.
Good to know
Further update: I talked to Adam Coates and his code doesn't implement
a standard SVM. Instead it's an "L2 SVM" which squares all the slack
variables. So this probably explains the difference in performance I
observed prior to building this test case.
On Tue, Feb 14, 2012 at 7:31 PM, David Warde-Farley wrote:
On Tue, Feb 14, 2012 at 06:03:44PM -0500, Ian Goodfellow wrote:
> I've observed that SVMs fit with sklearn consistently get around 5
> percentage points lower accuracy than equivalent SVMs fit with Adam
> Coates' SVM implementation based on minFunc. Am I overlooking some
> basic usage issue (eg too loose of a default convergence criterion),
On 02/15/2012 12:03 AM, Ian Goodfellow wrote:
> I've observed that SVMs fit with sklearn consistently get around 5
> percentage points lower accuracy than equivalent SVMs fit with Adam
> Coates' SVM implementation based on minFunc. Am I overlooking some
> basic usage issue (eg too loose of a default convergence criterion),
I've observed that SVMs fit with sklearn consistently get around 5
percentage points lower accuracy than equivalent SVMs fit with Adam
Coates' SVM implementation based on minFunc. Am I overlooking some
basic usage issue (eg too loose of a default convergence criterion),
or is this likely to be a de
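Without knowing the exact setup, the two usual suspects are worth ruling
out first: an unscaled design matrix and a loose stopping criterion. A
sanity-check sketch along those lines (the dataset and tolerances here are
made up):

    from sklearn.datasets import make_classification
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC

    X, y = make_classification(n_samples=1000, n_features=50,
                               random_state=0)

    # Loose tolerance on raw features versus tight tolerance on
    # standardized features: if the gap closes, the usage (not the
    # solver) explains the accuracy difference.
    loose = LinearSVC(tol=1e-2).fit(X, y)
    tight = make_pipeline(StandardScaler(),
                          LinearSVC(tol=1e-6, max_iter=100000)).fit(X, y)
    print(loose.score(X, y), tight.score(X, y))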