Hi,
I would try this:
export CLASSPATH=$(hadoop classpath)
Brock
On Mon, Apr 30, 2012 at 10:15 AM, Ryan Cole r...@rycole.com wrote:
Hello,
I'm trying to run an application, written in C++, that uses libhdfs. I have
compiled the code and get an error when I attempt to run the application.
The Apache MRUnit team is pleased to announce the release of MRUnit
0.8.0-incubating from the Apache Incubator.
This is the second release of Apache MRUnit, a Java library that helps
developers unit test Apache Hadoop map reduce jobs.
The release is available here:
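For anyone new to MRUnit, a minimal test looks roughly like the sketch below (written against the new-API MapDriver in org.apache.hadoop.mrunit.mapreduce; UpperCaseMapper is a made-up mapper used only for illustration):

```java
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mrunit.mapreduce.MapDriver;
import org.junit.Test;

public class UpperCaseMapperTest {

  // Trivial mapper used only for the example: upper-cases each input line.
  static class UpperCaseMapper
      extends Mapper<LongWritable, Text, LongWritable, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context ctx)
        throws java.io.IOException, InterruptedException {
      ctx.write(key, new Text(value.toString().toUpperCase()));
    }
  }

  @Test
  public void testMapper() {
    // MRUnit runs the mapper in-process: no cluster, no MiniMR needed.
    new MapDriver<LongWritable, Text, LongWritable, Text>()
        .withMapper(new UpperCaseMapper())
        .withInput(new LongWritable(1), new Text("hello"))
        .withOutput(new LongWritable(1), new Text("HELLO"))
        .runTest();
  }
}
```

Compiling this needs the hadoop-core, mrunit, and junit jars on the classpath.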
Hi,
tl;dr DUMMY should not be static.
On Tue, Jan 17, 2012 at 3:21 PM, Stan Rosenberg
srosenb...@proclivitysystems.com wrote:
class MyKey<T> implements WritableComparable<T> {
private String ip; // first part of the key
private final static Text DUMMY = new Text();
...
public void
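The quoted class is cut off above, but the reason DUMMY must not be static can be shown without any Hadoop classes at all. During the sort/merge phase Hadoop deserializes two key instances side by side in order to compare them; if every instance shares one static mutable buffer, reading the second key silently rewrites the first. A minimal plain-Java sketch of the failure mode (BrokenKey and its field names are invented for illustration; StringBuilder stands in for Text):

```java
import java.io.*;

// Plain-Java sketch of the static-DUMMY pitfall: all instances alias
// one shared mutable buffer, so deserializing key #2 clobbers key #1.
class BrokenKey {
    static StringBuilder DUMMY = new StringBuilder(); // shared by ALL instances

    CharSequence payload;

    void readFields(DataInput in) throws IOException {
        DUMMY.setLength(0);
        DUMMY.append(in.readUTF());
        payload = DUMMY; // every instance ends up pointing at the same buffer
    }
}
```

After deserializing "alpha" into one BrokenKey and "beta" into another, both keys report "beta". Making the field a non-static instance member gives each key its own buffer and the problem disappears.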
Hi,
Since you're using CDH2, I am moving this to CDH-USER. You can subscribe here:
http://groups.google.com/a/cloudera.org/group/cdh-user
BCC'd common-user
On Sat, Dec 17, 2011 at 2:01 AM, Meng Mao meng...@gmail.com wrote:
Maybe this is a bad sign -- the edits.new was created before the master
Hi,
ArrayWritable is a touch hard to use. Say you have an array of
IntWritable[]. The get() method of ArrayWritable, after
serialization/deserialization, does in fact return an array of type
Writable[]. As such you cannot cast it directly to IntWritable[]. Individual
elements are of type
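The usual workaround is to subclass ArrayWritable so the element type is known to Hadoop, and to copy elements out one by one rather than casting the whole array. A sketch, assuming the standard org.apache.hadoop.io classes (the helper method name is illustrative):

```java
import org.apache.hadoop.io.ArrayWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Writable;

// The subclass (with a no-arg constructor) tells Hadoop the element
// type, which it needs when deserializing the array.
public class IntArrayWritable extends ArrayWritable {

  public IntArrayWritable() {
    super(IntWritable.class);
  }

  // get() is declared to return Writable[]; casting the whole array to
  // IntWritable[] throws ClassCastException, so cast element by element.
  public IntWritable[] toIntWritableArray() {
    Writable[] raw = get();
    IntWritable[] out = new IntWritable[raw.length];
    for (int i = 0; i < raw.length; i++) {
      out[i] = (IntWritable) raw[i]; // each element IS an IntWritable
    }
    return out;
  }
}
```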
Does your job end with an error?
I am guessing what you want is:
-mapper bowtiestreaming.sh -file '/root/bowtiestreaming.sh'
First option says use your script as a mapper and second says ship
your script as part of the job.
Brock
On Tue, Dec 6, 2011 at 4:59 PM, Romeo Kienzler ro...@ormium.de wrote:
Hi,
This specific issue is probably more appropriate on the CDH-USER list.
(BCC common-user) It looks like the JRE detection mechanism recently
added to BIGTOP would have this same issue:
https://issues.apache.org/jira/browse/BIGTOP-25
To resolve the immediate issue I would set an environment
Hi,
Depending on the response you get here, you might also post the
question separately on avro-user.
On Sat, Nov 26, 2011 at 1:46 PM, Leonardo Urbina lurb...@mit.edu wrote:
Hey everyone,
First time posting to the list. I'm currently writing a hadoop job that
will run daily and whose output
Hi,
On Mon, Oct 31, 2011 at 12:59 AM, Ronen Itkin ro...@taykey.com wrote:
For instance, yesterday's daily log:
/var/log/hadoop/hadoop-hadoop-datanode-ip-10-10-10-4.log
on the problematic Node03 was 1.1 GB in size, while on the other Nodes
the same log was only 87 MB.
Again,
Hi,
On Sun, Oct 23, 2011 at 10:40 AM, Varun Thacker
varunthacker1...@gmail.com wrote:
I am having trouble using KeyValueInputFormat as a Input format. I used both
hadoop 0.20.1 and 0.21.0 and get an error while using it. This seems to be
because of this issue -
Hi,
Inline..
On Sun, Oct 16, 2011 at 9:40 PM, Keith Thompson kthom...@binghamton.edu wrote:
Thanks. I went back and changed to WritableComparable instead of just
Comparable. So, I added the readFields and write methods. I also took
care of the typo in the constructor. :P
Now I am
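For reference, a typical write()/readFields() pair for a two-field key looks like the sketch below. To keep it compilable without Hadoop on the classpath, this version implements only java.lang.Comparable; the method bodies are exactly what you would put in a real WritableComparable, since Hadoop's Writable uses the same java.io.DataInput/DataOutput signatures. MyKey and its fields are illustrative, not Keith's actual class:

```java
import java.io.*;

// Sketch of the serialization half of a custom key. The write() and
// readFields() signatures match org.apache.hadoop.io.Writable.
public class MyKey implements Comparable<MyKey> {
  private String ip = "";
  private long timestamp;

  public MyKey() {} // Hadoop instantiates keys via a no-arg constructor

  public MyKey(String ip, long timestamp) {
    this.ip = ip;
    this.timestamp = timestamp;
  }

  public void write(DataOutput out) throws IOException {
    out.writeUTF(ip);
    out.writeLong(timestamp);
  }

  public void readFields(DataInput in) throws IOException {
    // Must read fields in exactly the same order write() emitted them.
    ip = in.readUTF();
    timestamp = in.readLong();
  }

  @Override
  public int compareTo(MyKey o) {
    int c = ip.compareTo(o.ip);
    return c != 0 ? c : Long.compare(timestamp, o.timestamp);
  }

  public String getIp() { return ip; }
  public long getTimestamp() { return timestamp; }
}
```

A quick sanity check is to serialize a key into a byte array and read it back: the round-tripped copy should compare equal to the original.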
Hi,
Discussion, below.
On Sat, Oct 15, 2011 at 4:26 PM, Keith Thompson kthom...@binghamton.edu wrote:
Hello,
I am trying to write my very first MapReduce code. When I try to run the
jar, I get this error:
11/10/15 17:17:30 INFO mapred.JobClient: Task Id :
Hi,
On Wed, Oct 5, 2011 at 7:13 PM, Jignesh Patel jign...@websoft.com wrote:
I also found another problem: if I directly export from Eclipse as a jar
file, then trying javac -jar or hadoop -jar doesn't recognize that jar.
However, the same jar works well on Windows.
Can you please share
to get a listing of the jar and:
jar xf wordcountsmp/wordcount.jar
To extract it.
and got the error
Unable to access jar file xf
my jar file size is 5 KB. I have a feeling the Eclipse export on macOS is
not creating an appropriate jar.
On Oct 5, 2011, at 8:16 PM, Brock Noland wrote:
Hi
Hi,
On Tue, Sep 13, 2011 at 12:27 PM, Vivek K hadoop.v...@gmail.com wrote:
Hi all,
I am trying to build a Hadoop/MR application in C++ using hadoop-pipes. I
have been able to successfully work with my own mappers and reducers, but
now I need to generate output (from reducer) in a format
Hi,
On Mon, Sep 19, 2011 at 3:19 PM, Shi Yu sh...@uchicago.edu wrote:
I am stuck again on a probably very simple problem. I couldn't generate the
map output in sequence file format. I always get this error:
java.io.IOException: wrong key class: org.apache.hadoop.io.Text is not class
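That "wrong key class" IOException usually means the key class declared on the job (and recorded in the SequenceFile header) doesn't match what map() actually emits. A sketch of the relevant new-API job settings (the Text/IntWritable choices are illustrative; substitute whatever your mapper really writes):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class SeqFileJobSetup {

  public static Job configure() throws Exception {
    Job job = new Job(new Configuration(), "seqfile-output");
    // These declarations must match what the mapper/reducer actually
    // write; the SequenceFile writer checks each record against the
    // classes in its header and throws "wrong key class" on a mismatch.
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(IntWritable.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    job.setOutputFormatClass(SequenceFileOutputFormat.class);
    return job;
  }
}
```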
Hi,
This probably belongs on mapreduce-user as opposed to common-user. I
have BCC'ed the common-user group.
Generally it's a best practice to ship the scripts with the job. Like so:
hadoop jar
/usr/lib/hadoop-0.20/contrib/streaming/hadoop-streaming-0.20.2-cdh3u0.jar
-input
Hi,
On Tue, Sep 6, 2011 at 9:29 AM, Ralf Heyde ralf.he...@gmx.de wrote:
Hello,
I have found an HDFSClient which shows me how to access my HDFS from inside
the cluster (i.e. running on a Node).
My idea is that different processes may write 64 MB chunks to HDFS from
external
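The message is cut off here, but for writing to HDFS from outside the cluster the same FileSystem API applies; the external host just needs network access to the namenode and to the datanodes, since clients stream block data to datanodes directly. A sketch, with a hypothetical namenode address and destination path:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ExternalHdfsWriter {

  // Writes one chunk to HDFS from a host outside the cluster.
  public static void writeChunk(byte[] chunk, String dest) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical namenode address; point this at your real namenode.
    conf.set("fs.default.name", "hdfs://namenode:8020");
    FileSystem fs = FileSystem.get(conf);
    FSDataOutputStream out = fs.create(new Path(dest));
    try {
      out.write(chunk);
    } finally {
      out.close(); // data is only durable once the stream is closed
    }
  }
}
```

Note that each external writer opens its own file; HDFS does not support concurrent writers to a single file, so "different processes writing chunks" generally means one file per chunk or per writer.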
Hi,
On Thu, Sep 1, 2011 at 9:08 AM, Raimon Bosch raimon.bo...@gmail.com wrote:
Hi,
I'm trying to create a table similar to apache_log, but I'm trying to avoid
writing my own map-reduce task because I don't want to store my HDFS files
twice.
So if you're working with log lines like this: