Hello,
If you want to use the drive /dev/xvda4 only, then add a data directory
located on '/dev/xvda4' and remove the one located on '/dev/xvda2' under
dfs.datanode.data.dir.
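For example, the entry in hdfs-site.xml could look roughly like this
(/data4 below is only a placeholder for wherever /dev/xvda4 is actually
mounted):

<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data4/hdfs/data</value>
</property>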
After the changes, restart the Hadoop services and check the available
space using the command below.
# hadoop fs -df -h
Hi,
In MapReduce, with a reduce output format of NullWritable,
AvroValue<myavroclass>,
is it possible for me to get the reduce output without the key: none and
value wrapping, i.e. only the content of myavroclass itself?
Thanks!
What I get now:
{u'value': {u'MYCOLUMN1': 'Hello', u'MYCOLUMN2':
I have been working with MapReduce v1 (JobTracker and TaskTrackers).
For some of my jobs I want to set the number of reducers to the maximum
capacity of my cluster.
I did it with this:
int max = new JobClient(new JobConf(jConf)).getClusterStatus().getMaxReduceTasks();
Job job = new Job(jConf,
Avro output should be in its own binary format. Why did you get something
that looks like JSON?
Which output format class are you using?
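If the goal is just the bare records, one possible setup is a sketch along
these lines, assuming avro-mapred's AvroKeyOutputFormat (MyAvroClass below
stands in for your generated myavroclass):

import org.apache.avro.mapreduce.AvroJob;
import org.apache.avro.mapreduce.AvroKeyOutputFormat;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;

public class AvroOnlyOutputDriver {
    public static Job buildJob(Configuration conf) throws Exception {
        Job job = Job.getInstance(conf, "avro-only-output");
        // Emit the record itself as the Avro key and NullWritable as the value,
        // so the output is a plain binary .avro container of records, with no
        // textual key/value wrapping around them.
        job.setOutputFormatClass(AvroKeyOutputFormat.class);
        // MyAvroClass is a placeholder for the generated Avro specific record class.
        AvroJob.setOutputKeySchema(job, MyAvroClass.getClassSchema());
        job.setOutputValueClass(NullWritable.class);
        return job;
    }
}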
Yong
Date: Fri, 3 Oct 2014 17:34:35 +0800
Subject: avro mapreduce output
From: delim123...@gmail.com
To: user@hadoop.apache.org
Hi,
In mapreduce with reduce output format
In MR1, the maximum number of reducers is a static value set in mapred-site.xml.
That is the value you get from the API.
In YARN, there is no such static value any more, so you can set any value
you like; it is up to the RM to decide at runtime how many reducer tasks are
available or can be granted to
Hi.
(For Hadoop 2.2.0)
Nope. The number of mappers depends on the number of splits (line 392 of
JobSubmitter).
The number of reducers depends on the property mapreduce.job.reduces.
So you can set it up like this:
final Configuration configuration = new Configuration();
configuration.set("mapreduce.job.reduces",
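A fuller sketch of the same idea (the reducer count here is just an example
value):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class FixedReducerCount {
    public static Job buildJob(int reducers) throws Exception {
        final Configuration configuration = new Configuration();
        // The property read by the framework; equivalent to -D mapreduce.job.reduces=N.
        configuration.setInt("mapreduce.job.reduces", reducers);
        Job job = Job.getInstance(configuration, "fixed-reducer-count");
        // Or set it on the Job object directly, which is usually clearer.
        job.setNumReduceTasks(reducers);
        return job;
    }
}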
Jay,
I have not tried the Bigtop HCFS tests. Any tips on how to get started with
those?
Our configuration looks similar except for the Gluster-specific options and
both fs.default.name (and fs.defaultFS), as we
don't want OrangeFS to be the default fs for this Hadoop
I posted this on users@hbase, but got no response, so I thought I’d try here:
I’m trying to use ExportSnapshot to copy a snapshot from a Hadoop 1 to a Hadoop
2 cluster using the webhdfs protocol.
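For reference, the export is invoked roughly like this (snapshot name, host,
and port below are placeholders):

hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
  -snapshot my_snapshot \
  -copy-to webhdfs://nn2.example.com:50070/hbase \
  -mappers 16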
I’ve done this successfully before, though there are always mapper failures and
retries in the job
Hi people,
I'm doing some experiments with Hadoop 1.2.1, running the wordcount
sample on an 8-node cluster (master + 7 slaves). By tuning the task
configuration I've been able to make the map phase run in 22 minutes.
However, the reduce phase (which consists of a single job) gets stuck at some
Thank you very much. This is what I am trying to do.
This is the storage I have:
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda2      5.9G  5.3G  238M  96% /
/dev/xvda4      7.9G
Hi Experts,
I am trying to find a Java API equivalent to the Linux Kerberos kinit
command so we can programmatically generate the ticket cache file
(note: the Kerberos user password will be given in the first place) and
then use the ticket cache file to authenticate the user by calling
This is what I did:
svn checkout
http://svn.apache.org/repos/asf/directory/apacheds/trunk-with-dependencies
apacheds
cd apacheds/
find . -name Kinit.*
I found:
apacheds/kerberos-client/src/main/java/org/apache/directory/kerberos/client/Kinit.java
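Another option might be plain JAAS with the Krb5LoginModule; a rough sketch
follows (principal and password are placeholders, and note that this puts the
TGT into the JAAS Subject rather than writing the file-based ticket cache that
kinit produces):

import java.util.HashMap;
import java.util.Map;
import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.LoginContext;

public class JaasKinitSketch {
    public static void main(String[] args) throws Exception {
        // In-code JAAS configuration so no external jaas.conf file is needed.
        javax.security.auth.login.Configuration jaasConf =
                new javax.security.auth.login.Configuration() {
            @Override
            public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
                Map<String, String> options = new HashMap<String, String>();
                // Authenticate with the password rather than an existing ticket cache.
                options.put("useTicketCache", "false");
                return new AppConfigurationEntry[] {
                    new AppConfigurationEntry(
                        "com.sun.security.auth.module.Krb5LoginModule",
                        AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
                        options)
                };
            }
        };

        // Supplies the principal and password when the login module asks for them.
        CallbackHandler handler = new CallbackHandler() {
            @Override
            public void handle(Callback[] callbacks) {
                for (Callback cb : callbacks) {
                    if (cb instanceof NameCallback) {
                        ((NameCallback) cb).setName("someuser@EXAMPLE.COM");   // placeholder principal
                    } else if (cb instanceof PasswordCallback) {
                        ((PasswordCallback) cb).setPassword("secret".toCharArray()); // placeholder password
                    }
                }
            }
        };

        LoginContext login = new LoginContext("kinit-sketch", new Subject(), handler, jaasConf);
        login.login();
        Subject subject = login.getSubject();
        System.out.println("Kerberos principals: " + subject.getPrincipals());
    }
}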
Cheers
On Fri, Oct 3, 2014
We had a datanode go down, and our datacenter guy swapped out the disk. We
had moved to using UUIDs in /etc/fstab, but he wanted to use the
/dev/id format. He didn't back up the fstab; however, I'm not sure that's the
issue.
I am reading in the log below that the namenode has a lock on the
Looks like you're facing the same problem as this SO question.
http://stackoverflow.com/questions/10705140/hadoop-datanode-fails-to-start-throwing-org-apache-hadoop-hdfs-server-common-sto
Try the suggested fix.
On Fri, Oct 3, 2014 at 6:57 PM, Colin Kincaid Williams disc...@uw.edu
wrote:
We had a