Hi,
$ bin/hadoop jar hadoop-0.18.3-streaming.jar \
    -Dstream.shipped.hadoopstreaming= \
    -input \
    ...
Should work.
Check $ bin/hadoop jar hadoop-0.18.3-streaming.jar -info for more details.
Amogh
On 6/2/10 10:15 PM, "Mo Zhou" wrote:
Thank you Amogh.
I tried that, but it threw the following exceptions:
$ bin/hadoop jar hadoop-0.18.3-streaming.jar \
> -D stream.shipped.hadoopstreaming=fasta.jar \
> -input HumanSeqs.4 \
> -output output \
> -mapper "cat -" \
> -inputreader
> "org.apache.hadoop.streaming.StreamFast
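As an aside, the -mapper "cat -" part is just the identity mapper: streaming feeds each input record to the mapper process on stdin and reads its stdout back as output records. That contract can be simulated locally, with no cluster involved:

```shell
# Simulate how streaming drives a mapper: records in on stdin,
# records out on stdout. "cat -" is the identity mapper, so the
# output is exactly the input.
printf 'record1\nrecord2\n' | cat -
```

The same trick works for debugging custom mappers before submitting a job: pipe a sample of the input file through the mapper command and inspect what comes out.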
Hi,
You might need to add
-Dstream.shipped.hadoopstreaming=
Amogh
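In context, the flag would presumably go before the other streaming options, since -D is a generic option. Against the command quoted above, that would look something like this (the flag's value and the reader class are left as placeholders; verify both against your setup):

```shell
$ bin/hadoop jar hadoop-0.18.3-streaming.jar \
    -Dstream.shipped.hadoopstreaming= \
    -input HumanSeqs.4 \
    -output output \
    -mapper "cat -" \
    -inputreader "<your custom reader class>"
```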
On 6/2/10 5:10 PM, "Mo Zhou" wrote:
Thank you Amogh. Elastic MapReduce uses 0.18.3.
I tried the first way by downloading hadoop-0.18.3 to my local machine.
Then I got the following warning:
WARN mapred.JobClient: No job jar file set. User classes may not be
found. See JobConf(Class) or JobConf#setJar(String).
So the results were incorrect.
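A guess at the cause of that warning: the job client can't find a jar containing the user classes, so they are never shipped to the tasks. Packaging the compiled reader/formatter classes into a jar would look roughly like the sketch below (the package path is illustrative; fasta.jar matches the jar name used elsewhere in this thread):

```shell
# Compile the custom classes against the matching Hadoop jars (paths illustrative)
$ javac -classpath hadoop-0.18.3-core.jar:hadoop-0.18.3-streaming.jar \
    org/myorg/*.java
# Package them so the job client can find and ship them
$ jar cf fasta.jar org/
```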
Hi,
Depending on which Hadoop version EC2 uses (0.18.3?), you can try one of the
following:
1. Compile the streaming jar with your own custom classes and run it on EC2
using this custom jar (should work for 0.18.3; make sure you pick compatible
streaming classes).
2. Jar up your classes a
Hi,
I know this may not be suitable to post here since it relates to
EC2 more than Hadoop. However, I could not find a solution and hope
someone here could kindly help me out. Here is my question.
I created my own input reader and output formatter to split an input
file while using hadoop streaming
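If the goal is splitting multi-line sequence records (the HumanSeqs file name suggests FASTA-like data, which is an assumption on my part), the record-joining logic a custom input reader performs can be prototyped locally before wiring it into streaming. A minimal sketch with awk, assuming '>'-headed records:

```shell
# Collapse each multi-line FASTA-style record onto one line
# (header <TAB> sequence) -- the kind of one-record-per-line
# output a custom streaming input reader would emit.
printf '>seq1\nACGT\nTTAA\n>seq2\nGGCC\n' |
awk '/^>/  { if (h) print h "\t" s; h = $0; s = "" }
     !/^>/ { s = s $0 }
     END   { if (h) print h "\t" s }'
```

Each output line then maps cleanly onto a single streaming record, so a plain line-oriented mapper can process one sequence at a time.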