One correction here: I used version 2.4.1 instead of 1.2.1, which should be the
case here.
Thanks, Raghuveer
On Wednesday, April 8, 2015 11:32 AM, Raghuveer
wrote:
I downloaded the trunk.tar.gz for mahout and have hadoop 2.4.1 installed, did a
build with command
mvn -Dhadoop.version=1.2.1 clean compile
mvn -Dhadoop.version=1.2.1 -DskipTests=true clean package
mvn -Dhadoop.version=1.2.1 clean install -DskipTests=true
Now at the command prompt I type
> mah
In fact, if I just type mahout I get the following error:
raghuveer@csstpdfc561:~/trunk$ mahout
MAHOUT_LOCAL is set, so we don't add HADOOP_CONF_DIR to classpath.
MAHOUT_LOCAL is set, running locally
Error occurred during initialization of VM
Could not reserve enough space for 3145728KB object heap
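The "Could not reserve enough space for 3145728KB object heap" error means the JVM asked for roughly 3 GB and the machine could not provide it. A minimal sketch of one workaround, assuming the stock trunk bin/mahout script, which reads the MAHOUT_HEAPSIZE environment variable (in megabytes) when building the JVM heap flag:

```shell
# Shrink the requested heap before re-running the launcher.
# Assumption: bin/mahout honors MAHOUT_HEAPSIZE, as the trunk script does.
export MAHOUT_HEAPSIZE=1024   # ask for ~1 GB instead of ~3 GB
# then re-run:
#   mahout
```

If 1 GB is still too large for the box, lower it further; this only sizes the driver JVM, not any Hadoop workers.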
Could you post the original issue more clearly formatted? It's hard to
discern from your earlier post what is wrong.
Seems like an installation issue on your end.
On Wed, Apr 8, 2015 at 1:37 AM, Raghuveer wrote:
> Same no change except
>
> Error: Could not find or load main class ..bin.mahout
On Wednesday, April 8, 2015 10:55 AM, Suneel Marthi
wrote:
From $MAHOUT_HOME try running ./bin/mahout
and see if that works.
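A quick sanity check along the lines of the suggestion above. The "Could not find or load main class ..bin.mahout" message often indicates the launcher ended up passing a path to the JVM as if it were a class name, so confirm the script itself exists and run it directly (the $HOME/trunk fallback below is an assumption, adjust to the actual checkout):

```shell
# Assumption: MAHOUT_HOME points at the built trunk checkout.
MAHOUT_HOME="${MAHOUT_HOME:-$HOME/trunk}"   # hypothetical default location
if [ -x "$MAHOUT_HOME/bin/mahout" ]; then
  echo "launcher found: $MAHOUT_HOME/bin/mahout"
  # "$MAHOUT_HOME/bin/mahout"   # invoke the script, not a bare class name
else
  echo "no executable bin/mahout under $MAHOUT_HOME"
fi
```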
On Wed, Apr 8, 2015 at 1:22 AM, Raghuveer wrote:
I am learning Mahout usage and, as suggested here, am trying to run my sample,
but I get the below error, kindly suggest:
Error: Could not find or load main class ..mahout
Note: I have set MAHOUT_HOME to trunk and $PATH has $MAHOUT_HOME/bin in
~/.bashrc. Also I am unable to run mahout from the ins
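A sketch of the ~/.bashrc lines the note above describes; the trunk path is an assumption, adjust to wherever the build actually lives:

```shell
# Assumed checkout location; change to the real trunk directory.
export MAHOUT_HOME="$HOME/trunk"
export PATH="$PATH:$MAHOUT_HOME/bin"
# Setting MAHOUT_LOCAL to any non-empty value makes bin/mahout run locally
# instead of adding HADOOP_CONF_DIR to the classpath (as the earlier
# "MAHOUT_LOCAL is set, running locally" output shows).
export MAHOUT_LOCAL=true
```

After editing, re-source the file (`source ~/.bashrc`) or open a new shell so the exports take effect.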
We are working on a release, which will be 0.10.0, so give it a try if you can.
It fixes one problem you may encounter with an out-of-range index in a
vector. You may not see it.
1) The search engine must be able to take one query with multiple fields and
apply each field in the query to se
Thanks, Pat.
We are only running an EMR cluster with 1 master and 1 core node right now and
were using EMR AMI 3.2.3, which has Hadoop 2.4.0. We are using the default
configuration for Spark (using the AWS script for Spark), which I believe sets
the number of instances to 2. Spark version 1.1.0h
(https://gi