Hi Gang,
No worries!
I agree that LBFGS would converge faster, and your test suite is more comprehensive.
I'd like to merge my branch with yours.
I also agree with your viewpoint on the redundancy issue. Different GLMs
usually differ only in the gradient calculation, but the regression.sca
Hey,
We're actually working on similar ideas in the AMPLab with Spark - for example,
we've got some image classification pipelines built on this idea:
http://www.eecs.berkeley.edu/~brecht/papers/07.rah.rec.nips.pdf
Approximating kernel methods via random projections hit with a nonlinearity.
Add
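Roughly, the Rahimi-Recht trick can be sketched like this (a minimal Scala sketch of random Fourier features for an RBF kernel, not the actual AMPLab pipeline code; the object name, the sigma bandwidth parameter, and the defaults are all illustrative):

import scala.util.Random

// Random Fourier features (Rahimi & Recht): z(x) = sqrt(2/D) * cos(Wx + b)
// approximately preserves the RBF kernel:
// z(x).z(y) ~= exp(-||x - y||^2 / (2 sigma^2))
object RandomFourierFeatures {
  // Sample the random projection: rows of W ~ N(0, sigma^-2 I), b ~ Uniform[0, 2 pi)
  def fit(dim: Int, numFeatures: Int, sigma: Double, seed: Long = 42L)
      : (Array[Array[Double]], Array[Double]) = {
    val rng = new Random(seed)
    val w = Array.fill(numFeatures, dim)(rng.nextGaussian() / sigma)
    val b = Array.fill(numFeatures)(rng.nextDouble() * 2 * math.Pi)
    (w, b)
  }

  // Map one input through the random projection and the cosine nonlinearity
  def transform(x: Array[Double], w: Array[Array[Double]], b: Array[Double]): Array[Double] = {
    val scale = math.sqrt(2.0 / w.length)
    Array.tabulate(w.length) { i =>
      val dot = w(i).zip(x).map { case (wi, xi) => wi * xi }.sum
      scale * math.cos(dot + b(i))
    }
  }
}

Once inputs are pushed through transform, any linear learner trained on z(x) approximates the corresponding kernel method.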
Thanks Tom for the pointers...
I have an IPM running on the JVM which uses an SOCP formulation for the
quadratic program I wrote above.
We are going to show the details of it at the Summit. IPM runtimes and
accuracy give a baseline for the problem that we are solving...
Now we are trying to see how
What is your general solver? IPM or simplex or something else? I have
seen a lot of attempts to apply iterative solvers to the subproblems in
those methods without much luck, because the conditioning of the linear
systems gets worse and worse near the optimum. IPOPT (an interior point
method) has an LBFGS option for the Hessian approximation
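To make the conditioning point concrete (this is the standard textbook sketch, not a description of IPOPT's internals): in a log-barrier IPM the Newton systems involve the Hessian of -mu * sum_i log(x_i), which is mu * diag(1/x_i^2). As the iterates approach the optimum, the active coordinates x_i -> 0, so those diagonal entries blow up while the inactive ones stay bounded, and the condition number of the KKT system diverges.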
Hi,
I am coming up with an iterative solver for equality- and bound-constrained
quadratic minimization...
I have the Cholesky versions running, but Cholesky does not scale to large
dimensions; it works fine for matrix factorization use cases where the ranks
are low...
Minimize 0.5 x'Px + q'x
s.t. Aeq x = beq
     lb <= x <= ub
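For the bound constraints alone, a projected-gradient sketch shows the basic iteration (just an illustration, not the solver described above; the equality constraint Aeq x = beq is omitted here and could be folded in via an augmented-Lagrangian term, and the step size is assumed to be below 1/lambda_max(P)):

object BoxQP {
  // Projected gradient for: minimize 0.5 x'Px + q'x  s.t.  lb <= x <= ub
  def solve(p: Array[Array[Double]], q: Array[Double],
            lb: Array[Double], ub: Array[Double],
            step: Double, iters: Int): Array[Double] = {
    val n = q.length
    var x = Array.fill(n)(0.0)
    for (_ <- 0 until iters) {
      // Gradient of the quadratic: g = P x + q
      val g = Array.tabulate(n) { i =>
        p(i).zip(x).map { case (pij, xj) => pij * xj }.sum + q(i)
      }
      // Gradient step, then clip back into the box [lb, ub]
      x = Array.tabulate(n)(i => math.min(ub(i), math.max(lb(i), x(i) - step * g(i))))
    }
    x
  }
}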
What flavor of SVM are you trying to support? LSSVM doesn't need a bound
constraint, but most other formulations do. There have been ideas for
bound-constrained CG, though bounded LBFGS is more common. I think code
for Nystrom approximations or kernel mappings would be more useful.
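To illustrate why LSSVM avoids the bound constraint: its dual (in the bias-free form, used here only to keep the sketch short) is the unconstrained SPD linear system (K + I/gamma) alpha = y, so plain conjugate gradient suffices; the kernel matrix k, regularization gamma, and labels y below are illustrative inputs:

object LsSvmCG {
  // LSSVM dual (bias term dropped for brevity): solve (K + I/gamma) alpha = y.
  // The system is symmetric positive definite, so CG applies directly --
  // no box constraints, unlike the C-SVM dual.
  def solve(k: Array[Array[Double]], y: Array[Double], gamma: Double,
            iters: Int = 100, tol: Double = 1e-10): Array[Double] = {
    val n = y.length
    def matVec(v: Array[Double]): Array[Double] =
      Array.tabulate(n)(i => k(i).zip(v).map { case (kij, vj) => kij * vj }.sum + v(i) / gamma)
    var alpha = Array.fill(n)(0.0)
    var r = y.clone()                      // residual: y - A * 0
    var p = r.clone()
    var rs = r.map(ri => ri * ri).sum
    var it = 0
    while (it < iters && rs > tol) {
      val ap = matVec(p)
      val step = rs / p.zip(ap).map { case (pi, api) => pi * api }.sum
      alpha = alpha.zip(p).map { case (ai, pi) => ai + step * pi }
      r = r.zip(ap).map { case (ri, api) => ri - step * api }
      val rsNew = r.map(ri => ri * ri).sum
      p = r.zip(p).map { case (ri, pi) => ri + (rsNew / rs) * pi }
      rs = rsNew
      it += 1
    }
    alpha
  }
}

For the other SVM flavors the dual keeps 0 <= alpha_i <= C, which is exactly where bound-constrained CG or a bounded LBFGS would come in.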
On Fri, Jun
Hi Deb,
Putting your code on GitHub will be much appreciated -- it will give us a
good starting point to adapt for our purposes.
Regards.
On Sat, Jun 28, 2014 at 10:57 AM, Debasish Das [via Apache Spark Developers
List] wrote:
Factorization problems are non-convex, so both ALS and DSGD will converge
to local minima, and it is not clear which minimum will be better than the
other until we run both algorithms and see...
So I will still say: get a DSGD version running in the test setup while you
experiment with the Spa
Hi Deb,
Thanks so much for your response! At this point, we hadn't determined
which of DSGD or ALS to go with and were waiting on guidance like yours to
tell us what the right option would be. It sounds like ALS is good enough
for our purposes.
Regards.
On Fri, Jun 27, 2014 at 12:47 PM, D