[https://issues.apache.org/jira/browse/MAHOUT-227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12832089#action_12832089]
Ted Dunning commented on MAHOUT-227:
Zhao,
My thought is that having a good
[https://issues.apache.org/jira/browse/MAHOUT-227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12831648#action_12831648]
Ted Dunning commented on MAHOUT-227:
Is this going to be complete this week or next?
[https://issues.apache.org/jira/browse/MAHOUT-227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12831837#action_12831837]
zhao zhendong commented on MAHOUT-227:
So far, I haven't worked on this parallel Binary
[https://issues.apache.org/jira/browse/MAHOUT-227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12793111#action_12793111]
zhao zhendong commented on MAHOUT-227:
Thanks for your comments.
Sure, actually, I
[https://issues.apache.org/jira/browse/MAHOUT-227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12793487#action_12793487]
Ted Dunning commented on MAHOUT-227:
{quote}
I understand this concern. Actually, if we
{quote}
Thanks.
On Tue, Dec 22, 2009 at 11:40 AM, Ted Dunning (JIRA) j...@apache.org wrote:
{quote}
k = 1
Otherwise as in the Pegasos article. No parallelism.
{quote}
I'm confused. As a consequence, what is the motivation behind integrating Pegasos into Mahout? Can you estimate in which situations this implementation can outperform the original Pegasos? Large-scale data set
Zhao,
Mahout is not just for Hadoop-based implementations. We are interested in scalable machine learning. We currently have *no* SVM implementations in Mahout, and would welcome an easy, simple, straightforward SVM, and would find something like the original Pegasos implemented in our APIs
Hi,
I see. Thanks for your explanation. I thought that everything in Mahout should be parallelized.
I agree with Ted that extending k may not yield any improvement, especially in the large-cluster case. Larger-scale learning, however, has at least two levels: one is for the algorithm and
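For reference, the k = 1 case discussed above is just the sequential sub-gradient update from the original Pegasos paper, with no parallelism: at each step, sample one example, shrink the weights by the regularizer, and take a hinge-loss step if the margin is violated. A minimal sketch follows; the class and method names are illustrative, not Mahout APIs.

```java
import java.util.Random;

// Sketch of the Pegasos sub-gradient update with k = 1 (one sampled
// example per step, no parallelism). Illustrative code, not Mahout API.
public class PegasosSketch {

    // Train a linear SVM weight vector on examples x with labels y in {-1, +1}.
    public static double[] train(double[][] x, int[] y,
                                 double lambda, int steps, long seed) {
        int dim = x[0].length;
        double[] w = new double[dim];
        Random rnd = new Random(seed);
        for (int t = 1; t <= steps; t++) {
            int i = rnd.nextInt(x.length);        // sample one example (k = 1)
            double eta = 1.0 / (lambda * t);      // Pegasos step size 1/(lambda*t)
            double margin = y[i] * dot(w, x[i]);
            for (int d = 0; d < dim; d++) {
                w[d] *= (1.0 - eta * lambda);     // shrink: regularization term
                if (margin < 1.0) {
                    w[d] += eta * y[i] * x[i][d]; // hinge-loss sub-gradient step
                }
            }
        }
        return w;
    }

    static double dot(double[] a, double[] b) {
        double s = 0.0;
        for (int d = 0; d < a.length; d++) s += a[d] * b[d];
        return s;
    }

    public static void main(String[] args) {
        // Trivially separable toy data: sign of the first coordinate.
        double[][] x = {{1.0, 0.0}, {-1.0, 0.0}};
        int[] y = {1, -1};
        double[] w = train(x, y, 0.1, 1000, 42L);
        // A separating w scores the positive example above zero
        // and the negative example below zero.
        System.out.println(dot(w, x[0]) > 0 && dot(w, x[1]) < 0);
    }
}
```

With k > 1 the paper averages the sub-gradient over a batch of k examples per step, which is where the (debated) opportunity for parallelism would come in.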
[https://issues.apache.org/jira/browse/MAHOUT-227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12793053#action_12793053]
David Hall commented on MAHOUT-227:
As Ted hints, a proposal should really be placed on
[https://issues.apache.org/jira/browse/MAHOUT-227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12793088#action_12793088]
Ted Dunning commented on MAHOUT-227:
Here are a few formatting suggestions:
a) when