Re: HDFS multi-tenancy and federation

2014-02-05 Thread Shani Ranasinghe
Hi,

Any help on this please?


On Mon, Feb 3, 2014 at 12:14 PM, Shani Ranasinghe shanir...@gmail.com wrote:


 Hi,
 I would like to know the following.

 1) Can there be multiple namespaces in a single namenode? Is it
 recommended? (I have a multi-tenant environment in mind.)

 2) Let's say I have federated namespaces/namenodes: namenode A with
 namespace A1, namenode B with namespace B1, and 3 datanodes. Can someone
 from namespace A1 access, in any way (e.g. by hacking), the datanodes' data
 belonging to namespace B1? If not, how is that prevented?

 After going through a lot of references, my understanding of HDFS
 multi-tenancy and federation is that for multi-tenancy we could either use
 file/folder permissions (u, g, o) and ACLs, or dedicate a namespace per
 tenant. The issue with the latter is that a namenode (active namenode,
 passive namenode and secondary namenode) has to be assigned per tenant. Is
 there any other way that multi-tenancy can be achieved?
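The permissions/ACLs approach above usually amounts to one subtree per tenant with restrictive mode bits plus default ACLs. A hedged sketch of the commands involved (paths, users and groups are illustrative; HDFS ACLs require dfs.namenode.acls.enabled=true and shipped with HDFS-4685):

```shell
# One subtree per tenant, owned by that tenant's admin user and group
hdfs dfs -mkdir -p /tenants/teamA
hdfs dfs -chown teamA-admin:teamA /tenants/teamA
hdfs dfs -chmod 770 /tenants/teamA   # no access for other tenants

# Default ACL so files created later inherit the tenant group's access
hdfs dfs -setfacl -m default:group:teamA:rwx /tenants/teamA
hdfs dfs -getfacl /tenants/teamA     # verify the effective ACL
```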

 On federation, let's say I have a namenode for /marketing and another for
 /finance, and that marketing bears the most load. How can we load balance
 this? Is it possible?

 Appreciate any help on this.

 Regards,
 Shani.






Re: HDFS multi-tenancy and federation

2014-02-05 Thread praveenesh kumar
Hi Shani,

I haven't done any implementation on HDFS federation, but as far as I know,
1 namenode can handle only 1 namespace at this time. I hope that helps.

Regards
Prav
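On the /marketing vs /finance load question: a client can stitch federated namespaces into one view with a ViewFs mount table, and load is then spread by giving the busy subtree its own namenode. A hedged core-site.xml sketch (mount-table name, hosts and ports are illustrative):

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>viewfs://clusterX</value>
  </property>
  <!-- Each link maps a client-visible path to one federated namenode -->
  <property>
    <name>fs.viewfs.mounttable.clusterX.link./marketing</name>
    <value>hdfs://namenodeA:8020/marketing</value>
  </property>
  <property>
    <name>fs.viewfs.mounttable.clusterX.link./finance</name>
    <value>hdfs://namenodeB:8020/finance</value>
  </property>
</configuration>
```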


On Wed, Feb 5, 2014 at 8:05 AM, Shani Ranasinghe shanir...@gmail.com wrote:

 Hi,

 Any help on this please?





Where to find a list of most Hadoop2 config file parameters?

2014-02-05 Thread Bill Bruns
Hello Folks,

Is there a fairly complete list somewhere giving most of the parameters from 
the configuration files for Apache Hadoop 2 and MapReduce (including YARN)?

I am especially interested in parameters that affect the activities of the 
ResourceManager and Application Masters, such as the parameters found in 
capacity-scheduler.xml.
 
Bill 


 From: VJ Shalish vjshal...@gmail.com
To: user@hadoop.apache.org; Raj Hadoop hadoop...@yahoo.com 
Cc: SCM Users scm-us...@cloudera.org 
Sent: Tuesday, February 4, 2014 10:10 PM
Subject: Re: A Hadoop Cluster @Home using CentOS 6.5, Cloudera Manager 4.8.1 
and Cloudera Parcels
 



Hi Raj,
 
Thank you.
I think you would be able to set up virtual machines on a Mac as well.
 
Thanks
Shalish.

On Wed, Feb 5, 2014 at 2:48 AM, Raj Hadoop hadoop...@yahoo.com wrote:


Hi Shalish -


This is really wonderful work. Let me go through it.


And one more thing - can we use this setup on a Mac computer? Is it OS 
dependent? Please advise.


Thanks,
Raj




On Tuesday, February 4, 2014 3:47 PM, VJ Shalish vjshal...@gmail.com wrote:

Hi All,
 
    Based on my research and experience, I am starting a blog with 
detailed steps to help Big Data beginners and enthusiasts set up their own 
Hadoop cluster on their laptops using the latest Cloudera Manager and Cloudera 
Parcels. I have completed 2 episodes. Seeking your valuable feedback.
 
http://shalishvj.wordpress.com/2014/02/03/a-hadoop-cluster-home-creating-a-virtual-machine-with-centos-6-5-hadoop-cluster-installation-using-cloudera-manager-and-cloudera-parcels/

http://shalishvj.wordpress.com/2014/02/04/a-hadoop-cluster-home-using-centos-6-5-cloudera-manager-4-8-1-and-cloudera-parcels-episode-2/
 
Thanks
Shalish.



Re: Where to find a list of most Hadoop2 config file parameters?

2014-02-05 Thread Harsh J
Hey Bill,

Most of the documented parameters can be found in yarn-default.xml and
mapred-default.xml, with their descriptions:
http://hadoop.apache.org/docs/stable2/hadoop-yarn/hadoop-yarn-common/yarn-default.xml
and 
http://hadoop.apache.org/docs/stable2/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml.
Is this what you're looking for?
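For capacity-scheduler.xml specifically, the available properties are described in the CapacityScheduler documentation rather than in a *-default.xml. A hedged fragment with a few commonly tuned knobs (queue names and values are illustrative):

```xml
<configuration>
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default,analytics</value>
  </property>
  <!-- Per-queue capacities must sum to 100 at each level -->
  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>60</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.analytics.capacity</name>
    <value>40</value>
  </property>
  <!-- Fraction of cluster resources usable by ApplicationMasters -->
  <property>
    <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
    <value>0.1</value>
  </property>
</configuration>
```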

On Wed, Feb 5, 2014 at 2:02 PM, Bill Bruns billbr...@yahoo.com wrote:
 Hello Folks,

 Is there a fairly complete list somewhere, giving most of the parameters
 from the configuration files for Apache Hadoop2 and MapReduce?
 (Including Yarn)

 I am especially interested in parameters that affect the activities of the
 ResourceManager and Application Masters, such as the parameters that are
 found in capacity-scheduler.xml

 Bill
 

-- 
Harsh J


hadoop 2.2.0 - test failure in released src pack

2014-02-05 Thread Shani Ranasinghe
Hi,

 I am trying to build the source downloaded from
http://archive.apache.org/dist/hadoop/core/hadoop-2.2.0/ for Hadoop 2.2.0.
I encounter the following test failure when a Maven build is run with tests;
however, it builds fine without tests.

Any idea on how to resolve this?

Failed tests:
testLocalHostNameForNullOrWild(org.apache.hadoop.security.TestSecurityUtil):
expected:hdfs/shani-[ThinkPad-T]530@REALM but
was:hdfs/shani-[thinkpad-t]530@REALM

Tests run: 2015, Failures: 1, Errors: 0, Skipped: 64

[INFO]

[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main  SUCCESS [1.147s]
[INFO] Apache Hadoop Project POM . SUCCESS [0.314s]
[INFO] Apache Hadoop Annotations . SUCCESS [1.200s]
[INFO] Apache Hadoop Project Dist POM  SUCCESS [0.444s]
[INFO] Apache Hadoop Assemblies .. SUCCESS [0.241s]
[INFO] Apache Hadoop Maven Plugins ... SUCCESS [3.726s]
[INFO] Apache Hadoop Auth  SUCCESS [4.321s]
[INFO] Apache Hadoop Auth Examples ... SUCCESS [0.540s]
[INFO] Apache Hadoop Common .. FAILURE
[11:47.163s]
[INFO] Apache Hadoop NFS . SKIPPED
[INFO] Apache Hadoop Common Project .. SKIPPED
[INFO] Apache Hadoop HDFS  SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SKIPPED
[INFO] hadoop-yarn ... SKIPPED
[INFO] hadoop-yarn-api ... SKIPPED
[INFO] hadoop-yarn-common  SKIPPED
[INFO] hadoop-yarn-server  SKIPPED
[INFO] hadoop-yarn-server-common . SKIPPED
[INFO] hadoop-yarn-server-nodemanager  SKIPPED
[INFO] hadoop-yarn-server-web-proxy .. SKIPPED
[INFO] hadoop-yarn-server-resourcemanager  SKIPPED
[INFO] hadoop-yarn-server-tests .. SKIPPED
[INFO] hadoop-yarn-client  SKIPPED
[INFO] hadoop-yarn-applications .. SKIPPED
[INFO] hadoop-yarn-applications-distributedshell . SKIPPED
[INFO] hadoop-mapreduce-client ... SKIPPED
[INFO] hadoop-mapreduce-client-core .. SKIPPED
[INFO] hadoop-yarn-applications-unmanaged-am-launcher  SKIPPED
[INFO] hadoop-yarn-site .. SKIPPED
[INFO] hadoop-yarn-project ... SKIPPED
[INFO] hadoop-mapreduce-client-common  SKIPPED
[INFO] hadoop-mapreduce-client-shuffle ... SKIPPED
[INFO] hadoop-mapreduce-client-app ... SKIPPED
[INFO] hadoop-mapreduce-client-hs  SKIPPED
[INFO] hadoop-mapreduce-client-jobclient . SKIPPED
[INFO] hadoop-mapreduce-client-hs-plugins  SKIPPED
[INFO] Apache Hadoop MapReduce Examples .. SKIPPED
[INFO] hadoop-mapreduce .. SKIPPED
[INFO] Apache Hadoop MapReduce Streaming . SKIPPED
[INFO] Apache Hadoop Distributed Copy  SKIPPED
[INFO] Apache Hadoop Archives  SKIPPED
[INFO] Apache Hadoop Rumen ... SKIPPED
[INFO] Apache Hadoop Gridmix . SKIPPED
[INFO] Apache Hadoop Data Join ... SKIPPED
[INFO] Apache Hadoop Extras .. SKIPPED
[INFO] Apache Hadoop Pipes ... SKIPPED
[INFO] Apache Hadoop Tools Dist .. SKIPPED
[INFO] Apache Hadoop Tools ... SKIPPED
[INFO] Apache Hadoop Distribution  SKIPPED
[INFO] Apache Hadoop Client .. SKIPPED
[INFO] Apache Hadoop Mini-Cluster  SKIPPED
[INFO]

[INFO] BUILD FAILURE
[INFO]

[INFO] Total time: 12:01.394s
[INFO] Finished at: Wed Feb 05 15:43:09 IST 2014
[INFO] Final Memory: 45M/803M
[INFO]

[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test)
on project hadoop-common: There are test failures.
[ERROR]
[ERROR] Please refer to
/home/shani/sranasinghe/POC/hadoop-2.2.0-src/hadoop-common-project/hadoop-common/target/surefire-reports
for the 

Re: hadoop 2.2.0 - test failure in released src pack

2014-02-05 Thread Shani Ranasinghe
Hi,

I got it working after modifying the pom file as explained in
https://issues.apache.org/jira/secure/attachment/12614482/HADOOP-10110.patch.



Re: hadoop 2.2.0 - test failure in released src pack

2014-02-05 Thread Shani Ranasinghe
Hi,

However, I encountered the following issue. Appreciate any help.

Results :

Failed tests:
testLocalHostNameForNullOrWild(org.apache.hadoop.security.TestSecurityUtil):
expected:hdfs/shani-[ThinkPad-T]530@REALM but
was:hdfs/shani-[thinkpad-t]530@REALM

Tests run: 2015, Failures: 1, Errors: 0, Skipped: 64

[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test)
on project hadoop-common: There are test failures.
[ERROR]
[ERROR] Please refer to
/home/shani/sranasinghe/POC/hadoop-2.2.0-src/hadoop-common-project/hadoop-common/target/surefire-reports
for the individual test results.
[ERROR] - [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute
goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test
(default-test) on 

Re: what happens to a client attempting to get a new app when the resource manager is already down

2014-02-05 Thread Vinod Kumar Vavilapalli
Is this on trunk or a released version?

I think the default behavior (when RM HA is not enabled) shouldn't have the
client loop forever. Let me know, and we can see if this needs fixing.

Thanks,
+vinod


On Jan 31, 2014, at 7:52 AM, REYANE OUKPEDJO r.oukpe...@yahoo.com wrote:

 Hi there,
 
 I am trying to solve a problem. My client runs as a server, and I was trying 
 to make it aware that the ResourceManager is down, but I could not figure out 
 how. The reason is that the call yarnClient.createApplication() never returns 
 when the ResourceManager is down. Instead, it just stays in a loop, sleeps 
 after 10 iterations, and continues the same loop. Below you can find the 
 logs. Any idea how to leave this loop? Is there any parameter that controls 
 the number of seconds before giving up?
 
 Thanks
 
 Reyane OUKPEDJO
 
 
 
 
 
 
 
 logs
 14/01/31 10:48:05 INFO ipc.Client: Retrying connect to server: 
 isblade2/9.32.160.125:8032. Already tried 8 time(s); retry policy is 
 RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
 14/01/31 10:48:06 INFO ipc.Client: Retrying connect to server: 
 isblade2/9.32.160.125:8032. Already tried 9 time(s); retry policy is 
 RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
 14/01/31 10:48:37 INFO ipc.Client: Retrying connect to server: 
 isblade2/9.32.160.125:8032. Already tried 0 time(s); retry policy is 
 RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
 14/01/31 10:48:38 INFO ipc.Client: Retrying connect to server: 
 isblade2/9.32.160.125:8032. Already tried 1 time(s); retry policy is 
 RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
 14/01/31 10:48:39 INFO ipc.Client: Retrying connect to server: 
 isblade2/9.32.160.125:8032. Already tried 2 time(s); retry policy is 
 RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
 14/01/31 10:48:40 INFO ipc.Client: Retrying connect to server: 
 isblade2/9.32.160.125:8032. Already tried 3 time(s); retry policy is 
 RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
 14/01/31 10:48:41 INFO ipc.Client: Retrying connect to server: 
 isblade2/9.32.160.125:8032. Already tried 4 time(s); retry policy is 
 RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
 14/01/31 10:48:42 INFO ipc.Client: Retrying connect to server: 
 isblade2/9.32.160.125:8032. Already tried 5 time(s); retry policy is 
 RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
 14/01/31 10:48:43 INFO ipc.Client: Retrying connect to server: 
 isblade2/9.32.160.125:8032. Already tried 6 time(s); retry policy is 
 RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
 14/01/31 10:48:44 INFO ipc.Client: Retrying connect to server: 
 isblade2/9.32.160.125:8032. Already tried 7 time(s); retry policy is 
 RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
 14/01/31 10:48:45 INFO ipc.Client: Retrying connect to server: 
 isblade2/9.32.160.125:8032. Already tried 8 time(s); retry policy is 
 RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
 14/01/31 10:48:46 INFO ipc.Client: Retrying connect to server: 
 isblade2/9.32.160.125:8032. Already tried 9 time(s); retry policy is 
 RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
 14/01/31 10:49:17 INFO ipc.Client: Retrying connect to server: 
 isblade2/9.32.160.125:8032. Already tried 0 time(s); retry policy is 
 RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
 14/01/31 10:49:18 INFO ipc.Client: Retrying connect to server: 
 isblade2/9.32.160.125:8032. Already tried 1 time(s); retry policy is 
 RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
 14/01/31 10:49:19 INFO ipc.Client: Retrying connect to server: 
 isblade2/9.32.160.125:8032. Already tried 2 time(s); retry policy is 
 RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
 14/01/31 10:49:20 INFO ipc.Client: Retrying connect to server: 
 isblade2/9.32.160.125:8032. Already tried 3 time(s); retry policy is 
 RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
 14/01/31 10:49:21 INFO ipc.Client: Retrying connect to server: 
 isblade2/9.32.160.125:8032. Already tried 4 time(s); retry policy is 
 RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
 14/01/31 10:49:22 INFO ipc.Client: Retrying connect to server: 
 isblade2/9.32.160.125:8032. Already tried 5 time(s); retry policy is 
 RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
 


-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, 

Re: kerberos principals per node necessary?

2014-02-05 Thread Vinod Kumar Vavilapalli
For helping manage this, Hadoop lets you specify principals of the format 
hdfs/_HOST@SOME-REALM. Here _HOST is a special string that Hadoop interprets 
and replaces with the local hostname. You still need to create principals per 
host, though.
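The _HOST substitution can be pictured as below; a minimal sketch, assuming the hostname is lower-cased the way Kerberos hostnames conventionally are (SecurityUtil's exact rules may differ):

```python
import socket

def expand_principal(principal, hostname=None):
    # Replace Hadoop's _HOST marker with this machine's hostname,
    # lower-cased as Kerberos hostnames conventionally are.
    host = (hostname or socket.gethostname()).lower()
    return principal.replace("_HOST", host)

print(expand_principal("hdfs/_HOST@SOME-REALM", "node1.example.com"))
```

With this scheme one config file works on every host, but each expanded principal must still exist in the KDC, which is why per-host principals (and keytabs) are needed.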

+Vinod

On Feb 2, 2014, at 3:14 PM, Koert Kuipers ko...@tresata.com wrote:

 is it necessary to create a kerberos principal for hdfs on every node, as in 
 hdfs/some-host@SOME-REALM?
 Why not use one principal, hdfs@SOME-REALM? That way I could distribute the 
 same keytab file to all nodes, which makes things a lot easier.
 Thanks! Koert






Re: hadoop 2.2.0 - test failure in released src pack

2014-02-05 Thread Yogi Nerella
You might have already looked at this, but somehow your hostname is being
returned in both uppercase and lowercase. Can you check the config files and
see if it appears in two different cases?
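The mismatch in the assertion (ThinkPad-T530 vs thinkpad-t530) suggests one code path lower-cases the hostname while another does not; a hypothetical illustration (principal format only, not Hadoop's actual code) of why a case-sensitive comparison then fails:

```python
def principal_for(host, lowercase):
    # Build an hdfs/<host>@REALM service principal; one code path
    # lower-cases the hostname, the other uses it verbatim.
    h = host.lower() if lowercase else host
    return "hdfs/%s@REALM" % h

expected = principal_for("shani-ThinkPad-T530", lowercase=False)
actual = principal_for("shani-ThinkPad-T530", lowercase=True)
print(expected == actual)                  # case-sensitive compare fails
print(expected.lower() == actual.lower())  # case-insensitive compare passes
```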





RE: what happens to a client attempting to get a new app when the resource manager is already down

2014-02-05 Thread Rohith Sharma K S
The default retry period is 15 minutes. By setting the configuration property 
yarn.resourcemanager.connect.max-wait.ms to a lower value, the retry period 
can be reduced on the client side.
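A hedged yarn-site.xml sketch of the two client-side knobs involved (values are illustrative; the companion retry-interval property is included on the assumption it controls the gap between attempts):

```xml
<configuration>
  <property>
    <!-- total time the client keeps retrying the RM; default 900000 ms (15 min) -->
    <name>yarn.resourcemanager.connect.max-wait.ms</name>
    <value>30000</value>
  </property>
  <property>
    <!-- pause between rounds of connection attempts -->
    <name>yarn.resourcemanager.connect.retry-interval.ms</name>
    <value>10000</value>
  </property>
</configuration>
```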

Thanks & Regards
Rohith Sharma K S

From: Vinod Kumar Vavilapalli [mailto:vino...@hortonworks.com] On Behalf Of 
Vinod Kumar Vavilapalli
Sent: 05 February 2014 22:43
To: user@hadoop.apache.org; REYANE OUKPEDJO
Subject: Re: what happens to a client attempting to get a new app when the 
resource manager is already down

Is this on trunk or a released version?

I think the default behavior (when RM HA is not enabled) shouldn't have the
client loop forever. Let me know, and we can see if this needs fixing.

Thanks,
+vinod

