RE: unable to use hdfs haadmin -ns nameserviceId on HDP2.1 , HDFS2.4.0.2.1

2014-11-03 Thread Brahma Reddy Battula
Yes, you can get them, but you need to use two commands, as follows:

1) Get the nameservices (dfs.nameservices).
2) Get dfs.ha.namenodes.${dfs.nameservices}.


[root@linux156 bin]# ./hdfs getconf -confKey dfs.nameservices
hacluster
[root@linux156 bin]# ./hdfs getconf -confKey dfs.ha.namenodes.hacluster
100,101
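
Putting the two together, a small wrapper like the following (a rough
sketch on my side, untested) would list the HA state of every NameNode;
it relies only on the getconf output shown above and on
haadmin -getServiceState:

#!/usr/bin/env bash
# Walk every nameservice, then every NameNode ID inside it,
# and print the HA state reported by haadmin.
for ns in $(hdfs getconf -confKey dfs.nameservices | tr ',' ' '); do
  for nn in $(hdfs getconf -confKey "dfs.ha.namenodes.${ns}" | tr ',' ' '); do
    state=$(hdfs haadmin -getServiceState "${nn}")
    echo "${ns}/${nn}: ${state}"
  done
done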




Thanks & Regards
Brahma Reddy Battula


From: Devopam Mittra [devo...@gmail.com]
Sent: Monday, November 03, 2014 1:23 PM
To: common-dev@hadoop.apache.org
Subject: Re: unable to use hdfs haadmin -ns nameserviceId on HDP2.1 , 
HDFS2.4.0.2.1

hi Brahma,
Thanks a lot for the clarification.

This brings me to my next question: is there a way I can systematically
retrieve the serviceIds using a CLI command, instead of reading them from
the conf XMLs?


regards
Devopam


On Mon, Nov 3, 2014 at 12:51 PM, Brahma Reddy Battula <
brahmareddy.batt...@huawei.com> wrote:

 Hi Mittra


 As far as I know, hdfs haadmin -ns is not available (someone please
 correct me if I am wrong).


 Please check the following help output for haadmin:

 [root@linux bin]# ./hdfs haadmin
 No GC_PROFILE is given. Defaults to medium.
 Usage: DFSHAAdmin [-ns nameserviceId]
 [-transitionToActive serviceId [--forceactive]]
 [-transitionToStandby serviceId]
 [-failover [--forcefence] [--forceactive] serviceId serviceId]
 [-getServiceState serviceId]
 [-checkHealth serviceId]
 [-help command]


 Example:

 If your service IDs are nn1 and nn2, then you can try:
 hdfs haadmin -getServiceState nn1


 Thanks & Regards
 Brahma Reddy Battula
 
 From: Devopam Mittra [devo...@gmail.com]
 Sent: Monday, November 03, 2014 10:33 AM
 To: common-dev@hadoop.apache.org
 Subject: unable to use hdfs haadmin -ns nameserviceId on HDP2.1 ,
 HDFS2.4.0.2.1

 hi All,
 I wanted to systematically probe and list all my NameNode(s) via the CLI.
 An easier way would be to maintain a masters conf file, but I want to
 dynamically find the same by leveraging the system metadata.

 In the process of trying to find the same, I tried to use
 hdfs haadmin -ns ... but to my dismay the command always returns "Missing
 Command" on the console, as if I am doing something syntactically wrong.

 I know my nameserviceId, and I tried both all-capital and all-lowercase
 letters (just to be sure not to commit a silly mistake).

 Can you please help me figure out what I am doing wrong here?


 --
 Devopam Mittra
 Life and Relations are not binary




--
Devopam Mittra
Life and Relations are not binary


[jira] [Created] (HADOOP-11261) Set custom endpoint for S3A

2014-11-03 Thread Thomas Demoor (JIRA)
Thomas Demoor created HADOOP-11261:
--

 Summary: Set custom endpoint for S3A
 Key: HADOOP-11261
 URL: https://issues.apache.org/jira/browse/HADOOP-11261
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Thomas Demoor


Add a config setting to allow customizing the AWS region used.

It also enables using a custom URL pointing to an S3-compatible object store.
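
As an illustration, the intent is that something like the following would
work (the property name fs.s3a.endpoint is the proposal here, not yet a
committed key; the host and bucket are placeholders):

# Point S3A at a non-AWS, S3-compatible endpoint for a single command.
hadoop fs -D fs.s3a.endpoint=http://objectstore.example.com:9000 \
  -ls s3a://mybucket/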



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: unable to use hdfs haadmin -ns nameserviceId on HDP2.1 , HDFS2.4.0.2.1

2014-11-03 Thread Devopam Mittra
hi Brahma,
Thanks so much for the pointers.
I am able to achieve my goal by extending the help provided, and can
proceed with the automated script I am trying to build.
I shall share it for common benefit after initial testing.

warm regards
Devopam


On Mon, Nov 3, 2014 at 1:40 PM, Brahma Reddy Battula <
brahmareddy.batt...@huawei.com> wrote:

 Yes, you can get them, but you need to use two commands, as follows:

 1) Get the nameservices (dfs.nameservices).
 2) Get dfs.ha.namenodes.${dfs.nameservices}.


 [root@linux156 bin]# ./hdfs getconf -confKey dfs.nameservices
 hacluster
 [root@linux156 bin]# ./hdfs getconf -confKey dfs.ha.namenodes.hacluster
 100,101




 Thanks & Regards
 Brahma Reddy Battula

 
 From: Devopam Mittra [devo...@gmail.com]
 Sent: Monday, November 03, 2014 1:23 PM
 To: common-dev@hadoop.apache.org
 Subject: Re: unable to use hdfs haadmin -ns nameserviceId on HDP2.1 ,
 HDFS2.4.0.2.1

 hi Brahma,
 Thanks a lot for the clarification.

 This brings me to my next question: is there a way I can systematically
 retrieve the serviceIds using a CLI command, instead of reading them from
 the conf XMLs?


 regards
 Devopam


 On Mon, Nov 3, 2014 at 12:51 PM, Brahma Reddy Battula <
 brahmareddy.batt...@huawei.com> wrote:

  Hi Mittra
 
 
  As far as I know, hdfs haadmin -ns is not available (someone please
  correct me if I am wrong).
 
 
  Please check the following help output for haadmin:
 
  [root@linux bin]# ./hdfs haadmin
  No GC_PROFILE is given. Defaults to medium.
  Usage: DFSHAAdmin [-ns nameserviceId]
  [-transitionToActive serviceId [--forceactive]]
  [-transitionToStandby serviceId]
  [-failover [--forcefence] [--forceactive] serviceId serviceId]
  [-getServiceState serviceId]
  [-checkHealth serviceId]
  [-help command]
 
 
  Example:
 
  If your service IDs are nn1 and nn2, then you can try:
  hdfs haadmin -getServiceState nn1
 
 
  Thanks & Regards
  Brahma Reddy Battula
  
  From: Devopam Mittra [devo...@gmail.com]
  Sent: Monday, November 03, 2014 10:33 AM
  To: common-dev@hadoop.apache.org
  Subject: unable to use hdfs haadmin -ns nameserviceId on HDP2.1 ,
  HDFS2.4.0.2.1
 
  hi All,
  I wanted to systematically probe and list all my NameNode(s) via the CLI.
  An easier way would be to maintain a masters conf file, but I want to
  dynamically find the same by leveraging the system metadata.
 
  In the process of trying to find the same, I tried to use
  hdfs haadmin -ns ... but to my dismay the command always returns "Missing
  Command" on the console, as if I am doing something syntactically wrong.
 
  I know my nameserviceId, and I tried both all-capital and all-lowercase
  letters (just to be sure not to commit a silly mistake).
 
  Can you please help me figure out what I am doing wrong here?
 
 
  --
  Devopam Mittra
  Life and Relations are not binary
 



 --
 Devopam Mittra
 Life and Relations are not binary




-- 
Devopam Mittra
Life and Relations are not binary


[jira] [Created] (HADOOP-11262) Enable YARN to use S3A

2014-11-03 Thread Thomas Demoor (JIRA)
Thomas Demoor created HADOOP-11262:
--

 Summary: Enable YARN to use S3A 
 Key: HADOOP-11262
 URL: https://issues.apache.org/jira/browse/HADOOP-11262
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Thomas Demoor


Uses DelegateToFileSystem to expose S3A as an AbstractFileSystem.
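
For illustration, the wiring could look roughly like the sketch below (my
reading of the approach, not the committed patch; the class name S3A and
the no-default-port choice are assumptions):

// Expose the existing S3AFileSystem through the AbstractFileSystem API
// by delegating to it, so FileContext-based code (e.g. YARN) can use s3a://.
package org.apache.hadoop.fs.s3a;

import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.DelegateToFileSystem;

public class S3A extends DelegateToFileSystem {
  public S3A(URI theUri, Configuration conf)
      throws IOException, URISyntaxException {
    // "s3a" is the scheme; 'false' because an s3a URI carries no default port.
    super(theUri, new S3AFileSystem(), conf, "s3a", false);
  }
}

Clients would then bind it via a config key along the lines of
fs.AbstractFileSystem.s3a.impl (again an assumption on the final key name).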



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11227) error when building hadoop on windows

2014-11-03 Thread milq (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

milq resolved HADOOP-11227.
---
Resolution: Fixed
  Assignee: milq

I was using JDK 1.6; using JDK 1.7 will solve the issue.
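
For anyone hitting the same thing, the fix amounts to rebuilding against a
1.7 JDK (the path below is an example; adjust to your install):

# Point the build at a JDK 7 and verify before rebuilding.
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0
mvn -version          # should report Java version: 1.7.x
mvn clean install -DskipTests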

 error when building hadoop on windows  
 ---

 Key: HADOOP-11227
 URL: https://issues.apache.org/jira/browse/HADOOP-11227
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: milq
Assignee: milq

 [INFO] 
 
 [INFO] Building hadoop-mapreduce-client-app 2.2.0
 [INFO] 
 
 [INFO]
 [INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-mapreduce-client-app ---
 [INFO] Executing tasks
 main:
 [INFO] Executed tasks
 [INFO]
 [INFO] --- maven-resources-plugin:2.2:resources (default-resources) @ hadoop-mapreduce-client-app ---
 [INFO] Using default encoding to copy filtered resources.
 [INFO]
 [INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ hadoop-mapreduce-client-app ---
 [INFO] Nothing to compile - all classes are up to date
 [INFO]
 [INFO] --- maven-resources-plugin:2.2:testResources (default-testResources) @ hadoop-mapreduce-client-app ---
 [INFO] Using default encoding to copy filtered resources.
 [INFO]
 [INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ hadoop-mapreduce-client-app ---
 [INFO] Compiling 29 source files to D:\hdfs\hadoop-mapreduce-project\hadoop-mapreduce-client\hadoop-mapreduce-client-app\target\test-classes
 [INFO] -------------------------------------------------------------
 [ERROR] COMPILATION ERROR :
 [INFO] -------------------------------------------------------------
 [ERROR] D:\hdfs\hadoop-mapreduce-project\hadoop-mapreduce-client\hadoop-mapreduce-client-app\src\test\java\org\apache\hadoop\mapreduce\v2\app\TestRecovery.java:[1491,60] incomparable types: java.lang.Enum<capture#698 of ?> and org.apache.hadoop.mapreduce.JobCounter
 [ERROR] D:\hdfs\hadoop-mapreduce-project\hadoop-mapreduce-client\hadoop-mapreduce-client-app\src\test\java\org\apache\hadoop\mapreduce\v2\app\TestRecovery.java:[1495,67] incomparable types: java.lang.Enum<capture#215 of ?> and org.apache.hadoop.mapreduce.JobCounter
 [INFO] 2 errors
 [INFO] -------------------------------------------------------------
 [INFO] 
 
 [INFO] Reactor Summary:
 [INFO]
 [INFO] Apache Hadoop Main  SUCCESS [7.601s]
 [INFO] Apache Hadoop Project POM . SUCCESS [7.254s]
 [INFO] Apache Hadoop Annotations . SUCCESS [7.177s]
 [INFO] Apache Hadoop Assemblies .. SUCCESS [0.604s]
 [INFO] Apache Hadoop Project Dist POM  SUCCESS [6.864s]
 [INFO] Apache Hadoop Maven Plugins ... SUCCESS [8.371s]
 [INFO] Apache Hadoop Auth  SUCCESS [5.966s]
 [INFO] Apache Hadoop Auth Examples ... SUCCESS [4.492s]
 [INFO] Apache Hadoop Common .. SUCCESS [7:26.231s]
 [INFO] Apache Hadoop NFS . SUCCESS [20.858s]
 [INFO] Apache Hadoop Common Project .. SUCCESS [0.093s]
 [INFO] Apache Hadoop HDFS  SUCCESS [8:10.985s]
 [INFO] Apache Hadoop HttpFS .. SUCCESS [1:00.932s]
 [INFO] Apache Hadoop HDFS BookKeeper Journal . SUCCESS [17.207s]
 [INFO] Apache Hadoop HDFS-NFS  SUCCESS [12.950s]
 [INFO] Apache Hadoop HDFS Project  SUCCESS [0.104s]
 [INFO] hadoop-yarn ... SUCCESS [1.943s]
 [INFO] hadoop-yarn-api ... SUCCESS [2:39.214s]
 [INFO] hadoop-yarn-common  SUCCESS [1:15.391s]
 [INFO] hadoop-yarn-server  SUCCESS [0.278s]
 [INFO] hadoop-yarn-server-common . SUCCESS [14.293s]
 [INFO] hadoop-yarn-server-nodemanager  SUCCESS [25.848s]
 [INFO] hadoop-yarn-server-web-proxy .. SUCCESS [5.866s]
 [INFO] hadoop-yarn-server-resourcemanager  SUCCESS [39.821s]
 [INFO] hadoop-yarn-server-tests .. SUCCESS [0.645s]
 [INFO] hadoop-yarn-client  SUCCESS [6.714s]
 [INFO] hadoop-yarn-applications .. SUCCESS [0.454s]
 [INFO] hadoop-yarn-applications-distributedshell . SUCCESS [3.555s]
 [INFO] hadoop-mapreduce-client ... SUCCESS [0.292s]
 

Re: [jira] [Created] (HADOOP-11260) Patch up Jetty to disable SSLv3

2014-11-03 Thread Tsuyoshi OZAWA
Hi Bernd,

Could you write your comment on the JIRA? Discussion in separate
places can sometimes cause confusion.

Thanks,
- Tsuyoshi

On Mon, Nov 3, 2014 at 4:30 AM, Bernd Eckenfels <e...@zusammenkunft.net> wrote:
 On Sun, 2 Nov 2014 19:27:33 +0000 (UTC), Karthik Kambatla (JIRA)
 <j...@apache.org> wrote:

 Hadoop uses an older version of Jetty that allows SSLv3. We should
 fix it up.

 What about TLSv1.0? In a contained ecosystem it might be good to aim
 for "modern" compatibility:

 https://wiki.mozilla.org/Security/Server_Side_TLS

 It was only a near miss that BEAST did not compromise TLSv1.0.

 Greetings
 Bernd
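
As a side note, one way to check whether a given daemon still accepts
SSLv3 is the stock openssl CLI (host and port below are placeholders for
an HTTPS endpoint such as a NameNode's):

# The handshake should fail once SSLv3 is disabled on the server side.
openssl s_client -connect namenode.example.com:50470 -ssl3 < /dev/null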



-- 
- Tsuyoshi


[jira] [Created] (HADOOP-11263) NativeS3FileSystem doesn't work with hadoop-client

2014-11-03 Thread Sangjin Lee (JIRA)
Sangjin Lee created HADOOP-11263:


 Summary: NativeS3FileSystem doesn't work with hadoop-client
 Key: HADOOP-11263
 URL: https://issues.apache.org/jira/browse/HADOOP-11263
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Sangjin Lee


When you start using the NativeS3FileSystem (which is in hadoop-common) based 
on the hadoop-client set of jars, it fails with a ClassNotFoundException:

{noformat}
Caused by: java.lang.NoClassDefFoundError: org/jets3t/service/ServiceException
at 
org.apache.hadoop.fs.s3native.NativeS3FileSystem.createDefaultStore(NativeS3FileSystem.java:280)
at 
org.apache.hadoop.fs.s3native.NativeS3FileSystem.initialize(NativeS3FileSystem.java:270)
at 
com.twitter.twadoop.util.hadoop.NativeS3FileSystemWrapper.initialize(NativeS3FileSystemWrapper.java:34)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2438)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:90)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2472)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2454)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:384)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
{noformat}

NativeS3FileSystem depends on a library called jets3t, which is not found in 
the hadoop-client build. It turns out that this library was specifically 
excluded in the hadoop-client pom.xml:

{noformat}
<exclusion>
  <groupId>net.java.dev.jets3t</groupId>
  <artifactId>jets3t</artifactId>
</exclusion>
{noformat}

This strikes me as an issue, as a component that's part of hadoop-common cannot 
run with a hadoop-client build.
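
Until the pom is fixed, one interim workaround (my assumption, untested
here; the jets3t version is a guess at what matches Hadoop 2.4.x) is to
put the jets3t jar on the client classpath yourself:

# Supply jets3t manually so NativeS3FileSystem can load its dependency.
export HADOOP_CLASSPATH=/path/to/jets3t-0.9.0.jar:${HADOOP_CLASSPATH}
hadoop fs -ls s3n://mybucket/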



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HADOOP-11148) TestInMemoryNativeS3FileSystemContract fails

2014-11-03 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B reopened HADOOP-11148:


 TestInMemoryNativeS3FileSystemContract fails 
 -

 Key: HADOOP-11148
 URL: https://issues.apache.org/jira/browse/HADOOP-11148
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.6.0
Reporter: Rajat Jain
Priority: Minor

 Getting these errors. Ran on CentOS 6.5.
 {code}
 testCanonicalName(org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract)
   Time elapsed: 0.389 sec  <<< ERROR!
 java.lang.IllegalArgumentException: java.net.UnknownHostException: null
   at 
 org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:373)
   at 
 org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:258)
   at 
 org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:304)
   at 
 org.apache.hadoop.fs.s3native.NativeS3FileSystemContractBaseTest.testCanonicalName(NativeS3FileSystemContractBaseTest.java:51)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at junit.framework.TestCase.runTest(TestCase.java:176)
   at junit.framework.TestCase.runBare(TestCase.java:141)
   at junit.framework.TestResult$1.protect(TestResult.java:122)
   at junit.framework.TestResult.runProtected(TestResult.java:142)
   at junit.framework.TestResult.run(TestResult.java:125)
   at junit.framework.TestCase.run(TestCase.java:129)
   at junit.framework.TestSuite.runTest(TestSuite.java:255)
   at junit.framework.TestSuite.run(TestSuite.java:250)
   at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
 Caused by: java.net.UnknownHostException: null
   at 
 org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:373)
   at 
 org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:258)
   at 
 org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:304)
   at 
 org.apache.hadoop.fs.s3native.NativeS3FileSystemContractBaseTest.testCanonicalName(NativeS3FileSystemContractBaseTest.java:51)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at junit.framework.TestCase.runTest(TestCase.java:176)
   at junit.framework.TestCase.runBare(TestCase.java:141)
   at junit.framework.TestResult$1.protect(TestResult.java:122)
   at junit.framework.TestResult.runProtected(TestResult.java:142)
   at junit.framework.TestResult.run(TestResult.java:125)
   at junit.framework.TestCase.run(TestCase.java:129)
   at junit.framework.TestSuite.runTest(TestSuite.java:255)
   at junit.framework.TestSuite.run(TestSuite.java:250)
   at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
 testListStatusForRoot(org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract)
   Time elapsed: 0.084 sec  <<< ERROR!
 java.lang.NullPointerException: null
   at 
 

[jira] [Resolved] (HADOOP-11148) TestInMemoryNativeS3FileSystemContract fails

2014-11-03 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-11148.

Resolution: Not a Problem

Resolving as 'Not a Problem'

 TestInMemoryNativeS3FileSystemContract fails 
 -

 Key: HADOOP-11148
 URL: https://issues.apache.org/jira/browse/HADOOP-11148
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.6.0
Reporter: Rajat Jain
Priority: Minor

 Getting these errors. Ran on CentOS 6.5.
 {code}
 testCanonicalName(org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract)
   Time elapsed: 0.389 sec  <<< ERROR!
 java.lang.IllegalArgumentException: java.net.UnknownHostException: null
   at 
 org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:373)
   at 
 org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:258)
   at 
 org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:304)
   at 
 org.apache.hadoop.fs.s3native.NativeS3FileSystemContractBaseTest.testCanonicalName(NativeS3FileSystemContractBaseTest.java:51)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at junit.framework.TestCase.runTest(TestCase.java:176)
   at junit.framework.TestCase.runBare(TestCase.java:141)
   at junit.framework.TestResult$1.protect(TestResult.java:122)
   at junit.framework.TestResult.runProtected(TestResult.java:142)
   at junit.framework.TestResult.run(TestResult.java:125)
   at junit.framework.TestCase.run(TestCase.java:129)
   at junit.framework.TestSuite.runTest(TestSuite.java:255)
   at junit.framework.TestSuite.run(TestSuite.java:250)
   at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
 Caused by: java.net.UnknownHostException: null
   at 
 org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:373)
   at 
 org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:258)
   at 
 org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:304)
   at 
 org.apache.hadoop.fs.s3native.NativeS3FileSystemContractBaseTest.testCanonicalName(NativeS3FileSystemContractBaseTest.java:51)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at junit.framework.TestCase.runTest(TestCase.java:176)
   at junit.framework.TestCase.runBare(TestCase.java:141)
   at junit.framework.TestResult$1.protect(TestResult.java:122)
   at junit.framework.TestResult.runProtected(TestResult.java:142)
   at junit.framework.TestResult.run(TestResult.java:125)
   at junit.framework.TestCase.run(TestCase.java:129)
   at junit.framework.TestSuite.runTest(TestSuite.java:255)
   at junit.framework.TestSuite.run(TestSuite.java:250)
   at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
 testListStatusForRoot(org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract)
   Time elapsed: 0.084 sec  <<< ERROR!
 java.lang.NullPointerException: null
   at