Hi,
Thanks for your suggestion Sebastian.
But the fact is that I am working on SVM in my curriculum, and I want to
compare the results of different classification techniques on Hadoop/Mahout
in terms of timing and accuracy. That is the only reason I want a solution
for SVM with Mahout, i.e. MAHOUT-232.
Thanks Sebastian.
Although I got the FileDataModel updating correctly after following your
advice, everything seems to point to the conclusion that I will need a
database to back my DataModel.
On Mon, Mar 3, 2014 at 3:47 PM, Sebastian Schelter s...@apache.org wrote:
I think it depends on the difference
I think you should rather choose a different library that already offers
an SVM than trying to revive a 4 year old patch.
--sebastian
On 03/04/2014 08:51 AM, Amol Kakade wrote:
Hi,
I am a new user of Mahout and want to run a sample SVM algorithm with it.
Can you please list the steps to use
Hi
I ran the following command:
/usr/lib/hadoop-yarn/bin/yarn jar
mahout-distribution-0.9/mahout-examples-0.9.jar
org.apache.mahout.classifier.df.mapreduce.BuildForest -d
input/data666.noheader.data -ds input/data666.noheader.data.info -sl 5
-p -t 100 -o nsl-forest
When I used hadoop 1.x then it
Mahout 0.9 does not support Hadoop 2 dependencies.
You can use mahout-1.0-SNAPSHOT, or apply the patch from
https://issues.apache.org/jira/browse/MAHOUT-1329 to your Mahout build for
Hadoop 2 support.
On Tue, Mar 4, 2014 at 3:38 PM, Margusja mar...@roo.ee wrote:
Hi
I ran the following command:
Hi,
I'm trying to apply a PCA to reduce the dimension of a matrix with 1603
columns and 100,000 to 30,000,000 rows using ssvd with the pca option, and
I always get a StackOverflowError:
Here is my command line :
mahout ssvd -i /user/myUser/Echant100k -o /user/myUser/Echant/SVD100 -k 100
-pca
Sorry, I didn't see that you are trying to use mahout-1.0-SNAPSHOT.
You used /usr/lib/hadoop-yarn/bin/yarn, but you need to use
/usr/lib/hadoop/bin/hadoop; then your example will succeed.
On Tue, Mar 4, 2014 at 3:45 PM, Sergey Svinarchuk
ssvinarc...@hortonworks.com wrote:
Mahout 0.9 does not support Hadoop 2
Hi thanks for reply.
Here is my output:
[hduser@vm38 ~]$ /usr/lib/hadoop/bin/hadoop version
Hadoop 2.2.0.2.0.6.0-101
Subversion g...@github.com:hortonworks/hadoop.git -r
b07b2906c36defd389c8b5bd22bebc1bead8115b
Compiled by jenkins on 2014-01-09T05:18Z
Compiled with protoc 2.5.0
From source
I’d suggest a command line option if you want to submit a patch. Most people
will want that line executed, so the default should be the current behavior,
but a large minority will want it your way.
And please do submit a patch with the Jira; it will make your life easier
when new releases come.
Kevin, thanks for reporting this.
A stack overflow error has not been known to happen to date, but I will take
a look. It looks like a bug in the mean computation code, given your stack
trace, although it may have been induced by circumstances specific to
your deployment.
What version is it?
It doesn't look like -us has been removed. At least I see it on the head of
the trunk, SSVDCli.java, line 62:
addOption("uSigma", "us", "Compute U * Sigma", String.valueOf(false));
i.e. the short version (single dash) is -us true, and the long version
(double dash) is --uSigma true. Can you check again with 0.9?
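For readers unfamiliar with how the short and long forms map to the same option, here is a minimal self-contained sketch of that aliasing, assuming the semantics of the addOption line quoted above; OptionSketch and its methods are illustrative stand-ins, not Mahout's actual parser:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: a long name ("--uSigma") and a short name ("-us")
// are aliases for one option with a default value.
public class OptionSketch {
    private final Map<String, String> aliasToName = new HashMap<>();
    private final Map<String, String> values = new HashMap<>();

    void addOption(String longName, String shortName, String defaultValue) {
        aliasToName.put("--" + longName, longName);
        aliasToName.put("-" + shortName, longName);
        values.put(longName, defaultValue); // default until overridden
    }

    void parse(String... args) {
        // Consume "flag value" pairs; unknown flags are ignored here.
        for (int i = 0; i + 1 < args.length; i += 2) {
            String name = aliasToName.get(args[i]);
            if (name != null) {
                values.put(name, args[i + 1]);
            }
        }
    }

    String get(String longName) {
        return values.get(longName);
    }

    public static void main(String[] args) {
        OptionSketch opts = new OptionSketch();
        opts.addOption("uSigma", "us", String.valueOf(false));
        opts.parse("-us", "true");              // short form
        System.out.println(opts.get("uSigma")); // prints "true"
    }
}
```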
As for the stack trace, it doesn't agree with the current trunk, so again,
I need to know which version you are running.
From looking at the current trunk, I don't really see how that could be
happening at the moment.
On Tue, Mar 4, 2014 at 9:40 AM, Dmitriy Lyubimov dlie...@gmail.com
I have not seen the StackOverflowError, but this code has been fixed since 0.8.
Sent from my iPhone
On Mar 4, 2014, at 12:40 PM, Dmitriy Lyubimov dlie...@gmail.com wrote:
It doesn't look like -us has been removed. At least i see it on the head of
the trunk, SSVDCli.java, line 62:
The -us option was fixed for Mahout 0.8; it seems you are using Mahout 0.7,
which had this issue (from your stack trace it is apparent you are using
Mahout 0.7). Please upgrade to the latest Mahout version.
On Tuesday, March 4, 2014 8:54 AM, Kevin Moulart kevinmoul...@gmail.com wrote:
Hi,
I'm
I think we should introduce a new parameter for the recommend() method
in the Recommender interface that tells whether already known items
should be recommended or not.
What do you think?
Best,
Sebastian
On 03/04/2014 05:32 PM, Pat Ferrel wrote:
I’d suggest a command line option if you want
Sent from my iPhone
On Mar 4, 2014, at 22:13, Sebastian Schelter s...@apache.org wrote:
I think we should introduce a new parameter for the recommend() method
in the Recommender interface that tells whether already known items
should be recommended or not.
+1 for that
What do you think?
I think we should introduce a new parameter for the recommend() method in
the Recommender interface that tells whether already known items should be
recommended or not.
I agree (if the parameter is missing, then it defaults to the current
behavior, as Pat suggested).
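The proposal above (a flag on recommend() that defaults to the current filtering behavior) could be sketched roughly as follows; the interface and signatures here are simplified stand-ins for illustration, not Mahout's actual API:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of the proposed change: recommend() gains a flag
// controlling whether already known items may appear in the results.
interface Recommender {
    // Existing call keeps the current behavior: known items are filtered.
    default List<Long> recommend(long userID, int howMany) {
        return recommend(userID, howMany, false);
    }
    // New overload: the caller decides whether known items may appear.
    List<Long> recommend(long userID, int howMany, boolean includeKnownItems);
}

public class RecommendSketch implements Recommender {
    private final List<Long> candidates = Arrays.asList(1L, 2L, 3L, 4L);
    private final List<Long> known = Arrays.asList(2L, 3L);

    @Override
    public List<Long> recommend(long userID, int howMany, boolean includeKnownItems) {
        return candidates.stream()
                .filter(item -> includeKnownItems || !known.contains(item))
                .limit(howMany)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        RecommendSketch r = new RecommendSketch();
        System.out.println(r.recommend(42L, 10));        // prints [1, 4]
        System.out.println(r.recommend(42L, 10, true));  // prints [1, 2, 3, 4]
    }
}
```

The default method makes the new parameter optional, so existing callers are unaffected, which matches the backward-compatible default discussed in the thread.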
On 03/04/2014 05:32 PM, Pat
Margusja,
From trunk, can you build mahout using the following command and try again:
mvn clean package -DskipTests=true -Dhadoop2.version=2.2.0
Best
Gokhan
On Tue, Mar 4, 2014 at 4:25 PM, Margusja mar...@roo.ee wrote:
Hi thanks for reply.
Here is my output:
[hduser@vm38 ~]$
I have created a Jira issue already.
I only use the non-Hadoop part of the Mahout recommender algorithms.
Maybe I can create a patch for that part. However, I have not done it
before and don't know how to proceed.
On Wed, Mar 5, 2014 at 1:01 AM, Sebastian Schelter s...@apache.org wrote:
Would
That's fine, I was talking about the non-distributed part only.
This page has instructions on how to create patches:
https://mahout.apache.org/developers/how-to-contribute.html
Let me know if you need more info!
Best,
Sebastian
On 03/05/2014 12:27 AM, Mario Levitin wrote:
I have created a