How will Hadoop handle it when a datanode server suffers a total hardware failure?

2014-12-16 Thread arthur.hk.c...@gmail.com
Hi, If each of my datanode servers has 8 hard disks (a 10-node cluster) and I use the default replication factor of 3, how will Hadoop handle it when a datanode suddenly suffers a total hardware failure? Regards Arthur
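A minimal sketch, not from the thread: with replication 3, the namenode marks the datanode dead after its timeout and re-replicates the lost blocks from the surviving copies onto the remaining nodes. Standard commands to watch this happen:

  # List live/dead datanodes and per-node capacity
  hdfs dfsadmin -report
  # Check for under-replicated or missing blocks while re-replication runs
  hdfs fsck / -blocks -locations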

To Generate Test Data in HDFS (PDGF)

2014-09-22 Thread arthur.hk.c...@gmail.com
Hi, I need to generate a large amount of test data (4 TB) in Hadoop. Has anyone used PDGF to do so? Could you share your cookbook for PDGF on Hadoop (or HBase)? Many Thanks Arthur
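If PDGF turns out to be awkward to wire in, teragen from the bundled examples jar is a common fallback for bulk test data; a sketch, with the jar path and output directory as assumptions:

  # teragen writes rows of 100 bytes each, so 4 TB is roughly 40,000,000,000 rows
  hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar \
    teragen 40000000000 /tmp/testdata-4tb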

Re: Hadoop 2.4.1 Compilation, How to specify HadoopBuildVersion and RMBuildVersion

2014-09-14 Thread arthur.hk.c...@gmail.com
} Regards Arthur On 14 Sep, 2014, at 4:17 pm, Liu, Yi A yi.a@intel.com wrote: Change Hadoop version : mvn versions:set -DnewVersion=NEWVERSION Regards, Yi Liu From: arthur.hk.c...@gmail.com [mailto:arthur.hk.c...@gmail.com] Sent: Sunday, September 14, 2014 1:51 PM To: user
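For example, to stamp a custom version before building (the version string here is illustrative):

  # Rewrite the version in every module pom, then build without tests
  mvn versions:set -DnewVersion=2.4.1-mybuild
  mvn clean install -DskipTests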

Re: Hadoop 2.4.1 Compilation, How to specify HadoopBuildVersion and RMBuildVersion

2014-09-14 Thread arthur.hk.c...@gmail.com
Hi, Is there any document that lists all possible -D parameters used in Hadoop compilation? Or any ideas about version-info.scm.commit? Regards Arthur On 14 Sep, 2014, at 7:07 pm, arthur.hk.c...@gmail.com arthur.hk.c...@gmail.com wrote: Hi, Thank you very much for your reply
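A sketch of how to hunt for these yourself, assuming a checked-out source tree: BUILDING.txt in the source root documents the common build options, and the rest can be grepped out of the poms.

  # Common build options are documented in the source tree
  less BUILDING.txt
  # Find where version-info properties such as version-info.scm.commit are wired up
  grep -rn "version-info" --include=pom.xml .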

Hadoop 2.4.1 Compilation, How to specify HadoopBuildVersion and RMBuildVersion

2014-09-13 Thread arthur.hk.c...@gmail.com
Hi, To compile Hadoop 2.4.1, any idea how to specify “hadoop.build.version”? By modifying pom.xml, by adding -Dhadoop.build.version=mybuild, or by specifying it on the compile command line? Regards Arthur
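A sketch of the command-line route the mail asks about; whether hadoop.build.version is actually honored depends on how the poms consume it, so treat the flag as an assumption to verify:

  # Standard distribution build with the version property passed on the command line
  mvn clean package -Pdist -DskipTests -Dtar -Dhadoop.build.version=mybuild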

Re: Hadoop 2.4.1 Compilation, How to specify HadoopBuildVersion and RMBuildVersion

2014-09-13 Thread arthur.hk.c...@gmail.com
(attached screenshot) On 14 Sep, 2014, at 1:25 pm, arthur.hk.c...@gmail.com arthur.hk.c...@gmail.com wrote: Hi, To compile Hadoop 2.4.1, any idea how to specify “hadoop.build.version”? By modifying pom.xml? or add -Dhadoop.build.version=mybuild? or specify it by compile

Hadoop Smoke Test

2014-09-11 Thread arthur.hk.c...@gmail.com
Hi, I am trying the smoke test for Hadoop, “terasort”. During the Map phase I found “Container killed by the ApplicationMaster”. Should I stop this job and try to run it again, or just let it continue? 14/09/11 21:27:53 INFO mapreduce.Job: map 22% reduce 0% 14/09/11 21:31:33 INFO
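One way to find out why the container was killed before deciding, assuming log aggregation is enabled (the application ID is illustrative):

  # Pull the aggregated logs of the application and look for the kill reason
  yarn logs -applicationId application_1410441234567_0001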

Hadoop Smoke Test: TERASORT

2014-09-10 Thread arthur.hk.c...@gmail.com
Hi, I am trying the smoke test for Hadoop (2.4.1). About “terasort”: below is my test command. The Map part completed very fast because it was split into many subtasks, but the Reduce part takes a very long time and there is only 1 running Reduce task. Is there a way to speed up the reduce phase by
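The usual remedy is raising the reducer count, since the job here is running with a single reducer; a sketch with illustrative paths and count:

  # Run terasort with 32 reducers instead of 1
  hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar \
    terasort -Dmapreduce.job.reduces=32 /teragen-out /terasort-out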

org.apache.hadoop.io.compress.SnappyCodec not found

2014-08-28 Thread arthur.hk.c...@gmail.com
Hi, I use Hadoop 2.4.1 and I got an “org.apache.hadoop.io.compress.SnappyCodec not found” error: hadoop checknative 14/08/29 02:54:51 WARN bzip2.Bzip2Factory: Failed to load/initialize native-bzip2 library system-native, will use pure-Java version 14/08/29 02:54:51 INFO zlib.ZlibFactory: Successfully
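A sketch of the usual diagnosis: checknative reports which native codecs loaded, and if snappy shows false, Hadoop cannot see libsnappy.so. The library path below is an assumption for a CentOS yum install:

  # Show which native libraries Hadoop can load (hadoop, zlib, snappy, lz4, bzip2)
  hadoop checknative -a
  # Make libsnappy visible to Hadoop, e.g. by copying it into the native lib dir
  cp /usr/lib64/libsnappy.so* $HADOOP_HOME/lib/native/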

Hadoop 2.4.1 How to clear usercache

2014-08-20 Thread arthur.hk.c...@gmail.com
Hi, I use Hadoop 2.4.1. In my cluster, Non DFS Used is 2.09 TB. I found that these files are all under tmp/nm-local-dir/usercache. Is there any Hadoop command to remove these unused user cache files under tmp/nm-local-dir/usercache? Regards Arthur
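There is no dedicated Hadoop command for this; a minimal sketch of the usual approach, reusing the local dir from the mail, is to stop the NodeManager on each node before deleting:

  # On each slave node: stop the NodeManager, clear the cache, restart
  $HADOOP_HOME/sbin/yarn-daemon.sh stop nodemanager
  rm -rf tmp/nm-local-dir/usercache/*
  $HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager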

Re: Hadoop 2.4.1 Snappy Smoke Test failed

2014-08-20 Thread arthur.hk.c...@gmail.com
by the fact that hadoop no longer ships with 64bit libs? https://issues.apache.org/jira/browse/HADOOP-9911 - André On Tue, Aug 19, 2014 at 5:40 PM, arthur.hk.c...@gmail.com arthur.hk.c...@gmail.com wrote: Hi, I am trying Snappy in Hadoop 2.4.1, here are my steps: (CentOS 64-bit) 1

Hadoop 2.4.1 Snappy Smoke Test failed

2014-08-19 Thread arthur.hk.c...@gmail.com
Hi, I am trying Snappy in Hadoop 2.4.1, here are my steps: (CentOS 64-bit) 1) yum install snappy snappy-devel 2) added the following (core-site.xml): <property> <name>io.compression.codecs</name>
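The archive flattened the XML above; the property presumably looked like the sketch below, with the codec list given here as the conventional one rather than Arthur's exact value:

  <property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
  </property>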

Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

2014-08-11 Thread arthur.hk.c...@gmail.com
Hi I am running Hadoop 2.4.1 with YARN HA enabled (two name nodes, NM1 and NM2). When verifying ResourceManager failover, I use “kill -9” to terminate the ResourceManager on name node 1 (NM1); if I run the test job, it seems that the failover of ResourceManager keeps trying NM1 and NM2
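A sketch of the kill-based test the mail describes, with the PID placeholder left unfilled and the jar path as an assumption:

  # On NM1: find the active ResourceManager process and kill it hard
  jps | grep ResourceManager
  kill -9 <RM_PID>
  # Then submit any small job and watch which RM picks it up
  hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar pi 4 1000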

Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

2014-08-11 Thread arthur.hk.c...@gmail.com
at a time. Regards Arthur On 11 Aug, 2014, at 11:04 pm, arthur.hk.c...@gmail.com arthur.hk.c...@gmail.com wrote: Hi I am running Hadoop 2.4.1 with YARN HA enabled (two name nodes, NM1 and NM2). When verifying ResourceManager failover, I use “kill -9” to terminate the ResourceManager

Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

2014-08-11 Thread arthur.hk.c...@gmail.com
. Thanks Xuan Gong On Mon, Aug 11, 2014 at 9:45 AM, arthur.hk.c...@gmail.com arthur.hk.c...@gmail.com wrote: Hi, If I have TWO nodes for ResourceManager HA, what should be the correct steps and commands to start and stop ResourceManager in a ResourceManager HA cluster ? Unlike ./sbin
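A minimal sketch of the per-node route, assuming each RM host is managed separately (start-yarn.sh starts only the RM on the node where it runs):

  # Run on each ResourceManager node individually
  $HADOOP_HOME/sbin/yarn-daemon.sh start resourcemanager
  $HADOOP_HOME/sbin/yarn-daemon.sh stop resourcemanager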

Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

2014-08-11 Thread arthur.hk.c...@gmail.com
if the failover happens. We can monitor the status of RMs by using the command-line (you did previously) or from webUI/webService (rm_address:portnumber/cluster/cluster). We can get the current status from there. Thanks Xuan Gong On Mon, Aug 11, 2014 at 5:12 PM, arthur.hk.c...@gmail.com
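The command-line check referred to above, assuming rm1 and rm2 are the IDs configured in yarn.resourcemanager.ha.rm-ids:

  # One of these should report active, the other standby
  yarn rmadmin -getServiceState rm1
  yarn rmadmin -getServiceState rm2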

Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager and JobHistoryServer do not auto-failover to Standby Node

2014-08-05 Thread arthur.hk.c...@gmail.com
Hi I have set up Hadoop 2.4.1 with HDFS High Availability using the Quorum Journal Manager. I am verifying Automatic Failover: I manually used the “kill -9” command to disable all running Hadoop services on the active node (NN-1), and I can see that the Standby node (NN-2) now becomes ACTIVE
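A sketch of how to confirm the new states from the command line, assuming nn1 and nn2 are the IDs configured under dfs.ha.namenodes:

  # After the kill, NN-2 should report active and NN-1 should be unreachable or standby
  hdfs haadmin -getServiceState nn1
  hdfs haadmin -getServiceState nn2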

Hadoop 2.4.1 Verifying Automatic Failover Failed: Unable to trigger a roll of the active NN

2014-08-04 Thread arthur.hk.c...@gmail.com
Hi, I have set up a Hadoop 2.4.1 HA cluster using the Quorum Journal Manager. I am verifying automatic failover: after killing the namenode process on the active node, the namenode did not fail over to the standby node. Please advise. Regards Arthur 2014-08-04 18:54:40,453 WARN

Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: Unable to trigger a roll of the active NN

2014-08-04 Thread arthur.hk.c...@gmail.com
is not transitioning to ACTIVE..? Please check the ZKFC logs. Mostly this might not happen from the logs you pasted Thanks Regards Brahma Reddy Battula From: arthur.hk.c...@gmail.com [arthur.hk.c...@gmail.com] Sent: Monday, August 04, 2014 4:38 PM To: user@hadoop.apache.org Cc: arthur.hk.c
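A sketch of the ZKFC check suggested above; the log file name pattern is typical rather than guaranteed, so verify it locally:

  # Is the failover controller running on each namenode host?
  jps | grep DFSZKFailoverController
  # Inspect its log under the standard log dir
  ls $HADOOP_HOME/logs/hadoop-*-zkfc-*.log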

Compile Hadoop 2.4.1 (with Tests and Without Tests)

2014-08-03 Thread arthur.hk.c...@gmail.com
Hi, I am trying to compile Hadoop 2.4.1. If I run “mvn clean install -DskipTests”, the compilation is GOOD. However, if I run “mvn clean install”, i.e. without skipping the tests, it returns “Failures”. Can anyone please advise what should be prepared before unit tests in compilation? From
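A sketch for narrowing the failures down, with the module path as an illustrative example:

  # Full build without unit tests (the case that works)
  mvn clean install -DskipTests
  # Re-run the tests of a single failing module to isolate the problem
  cd hadoop-common-project/hadoop-common && mvn test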

ResourceManager version and Hadoop version

2014-08-03 Thread arthur.hk.c...@gmail.com
Hi, I am running an Apache Hadoop 2.4.1 cluster. I have two questions about the Hadoop HTML link http://test_namenode:8088/cluster/cluster: 1) If I click “Server metrics” to go to the page http://test_namenode:8088/metrics, it is blank. Can anyone please advise if this is normal or I have not yet set up
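Whether /metrics is populated depends on the metrics configuration; a sketch of a commonly working alternative on the same port is the JMX servlet:

  # JSON dump of the ResourceManager's metrics beans
  curl http://test_namenode:8088/jmx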

Re: Hadoop 2.4.0 How to change Configured Capacity

2014-08-02 Thread arthur.hk.c...@gmail.com
, arthur.hk.c...@gmail.com arthur.hk.c...@gmail.com wrote: Hi, I have installed Hadoop 2.4.0 with 5 nodes; each node physically has a 4 TB hard disk. When checking the configured capacity, I found it is about 49.22 GB per node. Can anyone advise how to set a bigger “configured capacity”, e.g. 2T
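Configured capacity reflects the free space on the partitions behind dfs.datanode.data.dir, so 49.22 GB usually means the data dirs sit on the root partition rather than the 4 TB disk. A hedged sketch for hdfs-site.xml, with the mount path as an assumption:

  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/data/disk1/dfs/dn</value>
  </property>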