Github user dbtsai closed the pull request at:
https://github.com/apache/spark/pull/1518
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/1518#issuecomment-77406914
I'm looking at really old PRs -- this is obsolete now, right?
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/1518#discussion_r22173571
--- Diff: mllib/src/main/scala/org/apache/spark/mllib/optimization/Regularizer.scala ---
@@ -0,0 +1,140 @@
+/*
+ * Licensed to the Apache Software Foundation …
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/1518#discussion_r22171070
--- Diff: mllib/src/main/scala/org/apache/spark/mllib/optimization/Regularizer.scala ---
@@ -0,0 +1,140 @@
+/*
+ * Licensed to the Apache Software Foundation …
Github user dbtsai commented on the pull request:
https://github.com/apache/spark/pull/1518#issuecomment-51151346
It's too late to get this into 1.1, but I'll try to make it happen in 1.2. We'll
use this in our implementation at Alpine first.
Github user MLnick commented on the pull request:
https://github.com/apache/spark/pull/1518#issuecomment-51151194
This looks promising. FWIW, I support decoupling regularization from the
raw gradient update and believe it is a good way to go - it will allow various
update/learning rate …
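For context, a minimal sketch of the kind of decoupling being discussed: the
regularizer contributes its own value and gradient separately from the loss
gradient, so the update/learning-rate scheme can be chosen independently. Class
and method names below are hypothetical, not the PR's actual API.

    // Illustrative only: names and signatures are hypothetical, not the PR's API.
    object RegularizerDecouplingSketch {

      // A regularizer contributes its own value and gradient, independent of the loss.
      trait Regularizer {
        def valueAndGradient(weights: Array[Double]): (Double, Array[Double])
      }

      class L2Regularizer(lambda: Double) extends Regularizer {
        def valueAndGradient(weights: Array[Double]): (Double, Array[Double]) = {
          val value = 0.5 * lambda * weights.map(w => w * w).sum
          val gradient = weights.map(_ * lambda)
          (value, gradient)
        }
      }

      // The update step sums the loss gradient and the regularizer gradient, so any
      // learning-rate scheme can be applied to the combined direction.
      def step(weights: Array[Double], lossGradient: Array[Double],
               reg: Regularizer, stepSize: Double): Array[Double] = {
        val regGradient = reg.valueAndGradient(weights)._2
        weights.indices.map(i => weights(i) - stepSize * (lossGradient(i) + regGradient(i))).toArray
      }
    }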
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/1518#issuecomment-50691925
I think this is the approach LIBLINEAR uses. Yes, let's discuss tomorrow.
Github user dbtsai commented on the pull request:
https://github.com/apache/spark/pull/1518#issuecomment-50663418
I tried making the bias really big so that the intercept weight stays small and
is effectively not regularized. The result is still quite different from R, and
very sensitive to the strength …
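For intuition on the trick being described, under an assumed setup (not the
actual experiment): if the appended bias column has value `bias`, the fitted
intercept is `interceptWeight * bias`, so a large bias forces the intercept
weight, and hence its L2 penalty, toward zero.

    // Back-of-the-envelope sketch (assumed setup, not the actual experiment): the
    // model sees an appended bias column of value `bias`, so the fitted intercept
    // is interceptWeight * bias. Making `bias` large forces interceptWeight to be
    // small, which shrinks its L2 penalty.
    object BiasScalingSketch extends App {
      val targetIntercept = 2.0
      val lambda = 0.1

      for (bias <- Seq(1.0, 10.0, 1000.0)) {
        val interceptWeight = targetIntercept / bias  // weight needed to realize the intercept
        val l2Penalty = 0.5 * lambda * interceptWeight * interceptWeight
        println(f"bias=$bias%7.1f  weight=$interceptWeight%10.6f  penalty=$l2Penalty%12.3e")
      }
    }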
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/1518#issuecomment-50441485
@dbtsai I thought of another way to do this and want to know your opinion. We
can add an optional argument to `appendBias`: `appendBias(bias: Double = 1.0)`.
If this is used …
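A minimal sketch of what such an optional-bias helper might look like, written
against a plain Array[Double] rather than MLlib's Vector type; this version is
purely illustrative, not the real helper.

    object AppendBiasSketch extends App {
      // Sketch of the suggested helper with an optional bias value.
      def appendBias(features: Array[Double], bias: Double = 1.0): Array[Double] =
        features :+ bias

      val x = Array(0.5, -1.2)
      println(appendBias(x).mkString(", "))        // 0.5, -1.2, 1.0   (default keeps current behavior)
      println(appendBias(x, 100.0).mkString(", ")) // 0.5, -1.2, 100.0 (opt-in larger bias)
    }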
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1518#issuecomment-49670856
QA results for PR 1518:
- This patch FAILED unit tests.
- This patch merges cleanly.
- This patch adds the following public classes (experimental): abstract class Regularizer
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1518#issuecomment-49670761
QA tests have started for PR 1518. This patch merges cleanly. View
progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/16928/consoleFull
GitHub user dbtsai opened a pull request:
https://github.com/apache/spark/pull/1518
[SPARK-2505][MLlib] Weighted Regularizer for Generalized Linear Model
(Note: This is not ready to be merged. It still needs documentation, and we
need to make sure it's backward compatible with the Spark 1.0 APIs.)
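For readers skimming the archive, a rough sketch of the shape a weighted
regularizer for a generalized linear model could take: each coefficient gets its
own regularization weight. Names and signatures below are illustrative, not the
code from this PR.

    // Rough sketch of the idea only: each coefficient has its own regularization
    // weight, e.g. 0.0 for the intercept so it is never penalized.
    abstract class WeightedRegularizerSketch(regWeights: Array[Double]) {
      def valueAndGradient(coefficients: Array[Double]): (Double, Array[Double])
    }

    class WeightedL2Sketch(regWeights: Array[Double], lambda: Double)
      extends WeightedRegularizerSketch(regWeights) {

      def valueAndGradient(coefficients: Array[Double]): (Double, Array[Double]) = {
        val value = 0.5 * lambda *
          coefficients.zip(regWeights).map { case (c, w) => w * c * c }.sum
        val gradient = coefficients.zip(regWeights).map { case (c, w) => lambda * w * c }
        (value, gradient)
      }
    }

With per-coefficient weights, a caller can pass 0.0 for the intercept position
so it is never penalized, without resorting to tricks like inflating the
appended bias value.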