Did you override the Partitioner as well?
2013/10/29 java8964 java8...@hotmail.com
Hi, I have a strange question related to my secondary sort implementation
in the MR job.
Currently I need to support secondary sort in one of my MR jobs. I implemented my
custom WritableComparable like
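The snippet is cut off here, but the usual composite-key pattern can be sketched in plain Java (class and field names below are hypothetical, and a real Hadoop key would implement WritableComparable and also provide write()/readFields()): the natural key orders first, and the secondary field only breaks ties.

```java
// Hypothetical composite key for secondary sort: the natural key dominates,
// the secondary value is only a tie-breaker. A real Hadoop WritableComparable
// would additionally serialize both fields in write()/readFields().
class CompositeKey implements Comparable<CompositeKey> {
    final String naturalKey;
    final long secondary;

    CompositeKey(String naturalKey, long secondary) {
        this.naturalKey = naturalKey;
        this.secondary = secondary;
    }

    @Override
    public int compareTo(CompositeKey o) {
        int c = naturalKey.compareTo(o.naturalKey); // primary order
        if (c != 0) return c;
        return Long.compare(secondary, o.secondary); // tie-breaker
    }

    public static void main(String[] args) {
        CompositeKey a = new CompositeKey("2013/10/29", 1L);
        CompositeKey b = new CompositeKey("2013/10/29", 2L);
        CompositeKey c = new CompositeKey("2013/10/30", 0L);
        System.out.println(a.compareTo(b) < 0); // same natural key, secondary decides
        System.out.println(b.compareTo(c) < 0); // different natural key dominates
    }
}
```

For secondary sort to behave, the Partitioner and the grouping comparator must both look only at the natural key, while the sort comparator looks at both fields.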
No, you do not require the BackupNode in a HA NN configuration.
On Tue, Oct 29, 2013 at 9:31 AM, ch huang justlo...@gmail.com wrote:
ATT
--
Harsh J
The setup and cleanup threads are run only once per task attempt.
On Mon, Oct 28, 2013 at 11:02 PM, Shashank Gupta
shashank91.b...@gmail.com wrote:
A little late to the party but I have a concluding question :-
What about the setup and cleanup functions? Will each thread invoke those
functions?
Yes.
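The lifecycle Harsh describes can be modeled in a few lines of plain Java (this mirrors the structure of Hadoop's Mapper.run(), not its actual code): within one task attempt, setup() runs once, map() runs once per record, and cleanup() runs once.

```java
// Toy model of the per-task-attempt lifecycle: setup once, map per record,
// cleanup once. Mirrors the shape of Mapper.run(), not Hadoop's real code.
class LifecycleDemo {
    int setupCalls, mapCalls, cleanupCalls;

    void run(String[] records) {
        setup();
        for (String record : records) map(record);
        cleanup();
    }

    void setup()            { setupCalls++; }
    void map(String record) { mapCalls++; }
    void cleanup()          { cleanupCalls++; }

    public static void main(String[] args) {
        LifecycleDemo d = new LifecycleDemo();
        d.run(new String[] {"a", "b", "c"});
        System.out.println(d.setupCalls);   // 1
        System.out.println(d.mapCalls);     // 3
        System.out.println(d.cleanupCalls); // 1
    }
}
```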
The Partitioner uses the same hashCode() on the String generated from the (type
+ /MM/DD).
I added logging in the GroupComparator and observed that there are only 11 unique
values being compared in the GroupComparator, but I don't know why the reducer's
input group number is much higher than
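As a sanity check, the partitioning side can be reproduced outside Hadoop. The sketch below uses the same arithmetic as Hadoop's default HashPartitioner (mask the sign bit of hashCode(), then modulo the reducer count); the key string is a hypothetical "type + date" value. Equal strings always land in the same partition, so a mismatch between what the Partitioner hashes and what the grouping comparator compares shows up as extra reducer input groups, not as misrouted keys.

```java
// Plain-Java sketch of HashPartitioner-style partition assignment.
// Not Hadoop's actual class, but the same formula it applies to key.hashCode().
class PartitionDemo {
    static int partition(String key, int numReduceTasks) {
        // Mask the sign bit so negative hash codes still yield a valid index.
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        String k1 = "typeA/2013/10/29"; // hypothetical "type + date" key
        String k2 = "typeA/2013/10/29";
        // Equal strings always map to the same partition:
        System.out.println(partition(k1, 11) == partition(k2, 11)); // true
    }
}
```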
I see that the Namenode always reports datanodes as having about 5% more
space than they actually do.
And I recently added some smaller datanodes to the cluster, and their drives
filled up to 100%, not respecting the 5GB I had reserved for MapReduce
with this property from mapred-site.xml:
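The property the poster pasted is cut off above. For reference, the standard way to keep DataNodes from filling a disk is the HDFS-side reservation below (set in hdfs-site.xml, not mapred-site.xml); whether this matches the poster's setting is an assumption.

```xml
<!-- hdfs-site.xml: reserve non-HDFS space per disk volume so the DataNode
     stops writing before the drive hits 100%. Shown as an assumption about
     the truncated property above. -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>5368709120</value> <!-- 5 GB, in bytes, reserved per volume -->
</property>
```

Note the value is per volume, so a node with several data directories reserves that amount on each.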
Hi,
Can someone help me? I am trying to build and compile Hadoop along with the
native library in debug mode, but after finishing I realized from the compiler
logs that the native libraries are built with a mix of both debug (-g) and
optimized (-O) flags. What I have done is that I
I have a strange use case and I'm looking for some debugging help.
Use Case:
If I run the hadoop mapreduce example wordcount program and write the output
to HDFS, the output directory has the correct ownership.
E.g.
hadoop jar
/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples-2.0.5-alpha.jar
You don't need it, if the wiki page is correct.
Best Regards,
Raymond Liu
From: ch huang [mailto:justlo...@gmail.com]
Sent: Tuesday, October 29, 2013 12:01 PM
To: user@hadoop.apache.org
Subject: if i configed NN HA,should i still need start backup node?
ATT
Hi
I am playing with YARN 2.2, trying to port some code from the pre-beta API
onto the stable API, while both the wiki doc and the API doc for 2.2.0 seem to
still stick with the old API. Though I could find some help from
Hi Raymond
You may get some help from https://github.com/hortonworks/simple-yarn-app.
It's a simple app written with the YARN beta API.
Thanks,
Jian
On Tue, Oct 29, 2013 at 6:46 PM, Liu, Raymond raymond@intel.com wrote:
Hi
I am playing with YARN 2.2, trying to port some code from
ATT
Yes, you are correct: using the fsck tool I found some files in my cluster
that expected more replicas than the value defined in dfs.replication. After I
set the expected replication of these files to a proper number, the
decommissioning process went smoothly and the datanode could be
decommissioned
For each task, the setup and cleanup run.
On Tue, Oct 29, 2013 at 5:42 PM, Harsh J ha...@cloudera.com wrote:
The setup and cleanup threads are run only once per task attempt.
On Mon, Oct 28, 2013 at 11:02 PM, Shashank Gupta
shashank91.b...@gmail.com wrote:
A little late to the party but I
I am emitting two 2D double arrays as key and value. I am in the middle of
constructing my WritableComparable class.
public class MF implements WritableComparable<MF> {
    private double[][] value;

    public MF() {
        // no-arg constructor required by Hadoop for deserialization
    }

    public MF(double[][] value) {
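The class above is cut off before its compareTo(). As a sketch of one way to give a double[][] payload a deterministic ordering (plain Java, outside Hadoop; requires Java 9+ for Arrays.compare on primitive arrays; the real MF would also need write()/readFields() that serialize the array dimensions first), rows can be compared lexicographically:

```java
import java.util.Arrays;

// Sketch of a deterministic ordering for a double[][] payload, the kind of
// logic a compareTo() over such a field would need: compare row by row,
// then by row count, so equal matrices compare as 0.
class MatrixCompare {
    static int compare(double[][] a, double[][] b) {
        int rows = Math.min(a.length, b.length);
        for (int i = 0; i < rows; i++) {
            int c = Arrays.compare(a[i], b[i]); // lexicographic row comparison
            if (c != 0) return c;
        }
        return Integer.compare(a.length, b.length); // shorter matrix sorts first
    }

    public static void main(String[] args) {
        double[][] m1 = {{1.0, 2.0}, {3.0, 4.0}};
        double[][] m2 = {{1.0, 2.0}, {3.0, 5.0}};
        System.out.println(compare(m1, m1) == 0); // equal matrices
        System.out.println(compare(m1, m2) < 0);  // 4.0 < 5.0 in the last row
    }
}
```

Whatever ordering is chosen, it must be consistent with equals()/hashCode() on the key, or the shuffle will group keys unpredictably.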