Hi,
I am a first-year Computer Science graduate student at the University of Florida
working on implementing KNN in MADlib. I have a first version of it ready,
but I don't know how to proceed with testing it and adding it to the MADlib platform.
Also, I am not clear on which standards I should follow.
Github user haying commented on the issue:
https://github.com/apache/incubator-madlib/pull/75
I suggest documenting these in the user doc.
Github user mktal commented on the issue:
https://github.com/apache/incubator-madlib/pull/75
- You definitely need to be careful with the `step size`, or learning rate,
to make sure that it is either small enough or decayed properly, as with other
variants of SGD. For example, we can decay…
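(For readers following along, a minimal sketch of one common decay schedule, geometric per-epoch decay; the name `decayedStepSize` and its arguments are illustrative, not MADlib's API.)

```cpp
#include <cmath>
#include <cstddef>

// Geometric per-epoch decay: eta_k = eta_0 * decay^k, with 0 < decay < 1.
// Illustrative only; the actual schedule used in the PR may differ.
double decayedStepSize(double initStepSize, double decayFactor, std::size_t epoch) {
    return initStepSize * std::pow(decayFactor, static_cast<double>(epoch));
}
```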
Github user haying commented on the issue:
https://github.com/apache/incubator-madlib/pull/75
Hi Xiaocheng,
Thanks for the explanation. It is good to know that it usually converges
without much tuning. I would definitely try it out.
Github user mktal commented on the issue:
https://github.com/apache/incubator-madlib/pull/75
This is a good point, Aaron. In terms of convergence behavior, it has both the
benefit of a mini-batch, which iterates fast, and of a large batch size, which
reduces the variance of the empirical objective. To…
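(An editor's sketch of the idea under discussion: average the per-example subgradients over a buffered batch and take one step, so each update stays cheap while the gradient estimate has lower variance than single-example SGD. The names `Example` and `miniBatchStep` are hypothetical, not the PR's code.)

```cpp
#include <cstddef>
#include <vector>

struct Example { std::vector<double> x; double y; };   // y in {-1, +1}

// One mini-batch step for a linear SVM: sum the hinge-loss subgradients
// over the batch, then apply a single averaged update to the weights.
void miniBatchStep(std::vector<double> &w,
                   const std::vector<Example> &batch,
                   double stepSize) {
    std::vector<double> grad(w.size(), 0.0);
    for (const Example &e : batch) {
        double margin = 0.0;
        for (std::size_t j = 0; j < w.size(); ++j) margin += w[j] * e.x[j];
        if (e.y * margin < 1.0) {                 // hinge loss is active
            for (std::size_t j = 0; j < w.size(); ++j)
                grad[j] -= e.y * e.x[j];          // subgradient of hinge loss
        }
    }
    // Averaging over the batch reduces the variance of the gradient
    // estimate; a larger buffer gives a smoother step.
    for (std::size_t j = 0; j < w.size(); ++j)
        w[j] -= stepSize * grad[j] / static_cast<double>(batch.size());
}
```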
Github user haying commented on the issue:
https://github.com/apache/incubator-madlib/pull/75
I am not so sure about the technical details of the convergence behavior of
this algorithm. But it seems to me that it could be beneficial to expose the
gradient so that users can do convergence…
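(One concrete use of an exposed gradient, sketched by the editor: a simple stopping rule on its Euclidean norm. `hasConverged` is a hypothetical helper, not MADlib's API.)

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Stop when the gradient's Euclidean norm falls below a tolerance;
// a small gradient norm indicates the iterate is near a stationary point.
bool hasConverged(const std::vector<double> &gradient, double tolerance) {
    double sq = 0.0;
    for (double g : gradient) sq += g * g;
    return std::sqrt(sq) < tolerance;
}
```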
Github user mktal commented on the issue:
https://github.com/apache/incubator-madlib/pull/75
The SQL code used to obtain the above results (note that we assume two
tables mnist_train and mnist_test already exist in the database):
```sql
-- training on mnist_train and store …
```
Github user mktal commented on a diff in the pull request:
https://github.com/apache/incubator-madlib/pull/75#discussion_r87922862
--- Diff: src/modules/convex/algo/igd.hpp ---
@@ -86,6 +101,32 @@ IGD<State, ConstState, Task>::merge(state_type &state,
template <class State, class ConstState, class Task>
void
+IGD<State, ConstState, Task>::mergeInPlace(st…
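(For context, an editor's sketch of what an in-place merge of two partial IGD states could look like: a weighted average by rows seen, written into the left-hand state. The struct and field names here are hypothetical; see the actual diff in igd.hpp for the PR's implementation.)

```cpp
#include <cstddef>
#include <vector>

struct PartialState {
    std::vector<double> model;   // partial model coefficients
    std::size_t numRows;         // rows consumed by this state
};

// Merge `other` into `state` as a row-count-weighted average,
// so each partial model contributes in proportion to its data.
void mergeInPlaceSketch(PartialState &state, const PartialState &other) {
    const double total = static_cast<double>(state.numRows + other.numRows);
    if (total == 0.0) return;    // neither state has seen any rows yet
    const double wSelf = static_cast<double>(state.numRows) / total;
    const double wOther = static_cast<double>(other.numRows) / total;
    for (std::size_t j = 0; j < state.model.size(); ++j)
        state.model[j] = wSelf * state.model[j] + wOther * other.model[j];
    state.numRows += other.numRows;
}
```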
Github user haying commented on a diff in the pull request:
https://github.com/apache/incubator-madlib/pull/75#discussion_r87922091
--- Diff: src/modules/convex/algo/igd.hpp ---
@@ -86,6 +101,32 @@ IGD<State, ConstState, Task>::merge(state_type &state,
template <class State, class ConstState, class Task>
void
+IGD<State, ConstState, Task>::mergeInPlace(s…
Github user mktal commented on a diff in the pull request:
https://github.com/apache/incubator-madlib/pull/75#discussion_r87918714
--- Diff: src/modules/convex/algo/igd.hpp ---
@@ -86,6 +101,32 @@ IGD<State, ConstState, Task>::merge(state_type &state,
template <class State, class ConstState, class Task>
void
+IGD<State, ConstState, Task>::mergeInPlace(st…
Github user haying commented on a diff in the pull request:
https://github.com/apache/incubator-madlib/pull/75#discussion_r87916186
--- Diff: src/modules/convex/algo/igd.hpp ---
@@ -86,6 +101,32 @@ IGD<State, ConstState, Task>::merge(state_type &state,
template <class State, class ConstState, class Task>
void
+IGD<State, ConstState, Task>::mergeInPlace(s…
GitHub user mktal opened a pull request:
https://github.com/apache/incubator-madlib/pull/75
SVM: Implement C++ functions for training multi-class SVM in mini-batch
This PR implements a multi-class support vector machine and a new training
mechanism using buffered mini-batch stochastic gradient descent…
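(A hedged illustration of the multi-class prediction side: with one weight vector per class, as in a one-vs-rest formulation, the predicted label is the class with the highest score. Whether the PR uses exactly this formulation is not shown in the snippet above; the names are illustrative.)

```cpp
#include <cstddef>
#include <vector>

// One-vs-rest style multi-class prediction: score the input against
// each class's weight vector and return the index of the best score.
std::size_t predictClass(const std::vector<std::vector<double> > &weights,
                         const std::vector<double> &x) {
    std::size_t best = 0;
    double bestScore = -1e300;
    for (std::size_t c = 0; c < weights.size(); ++c) {
        double score = 0.0;
        for (std::size_t j = 0; j < x.size(); ++j)
            score += weights[c][j] * x[j];
        if (score > bestScore) { bestScore = score; best = c; }
    }
    return best;   // class with the highest margin wins
}
```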