Re: Exception with QJM HDFS HA

2013-03-31 Thread Suresh Srinivas
This does seem related to the inode ID change. I will follow up on HDFS-4654.

Sent from a mobile device

On Mar 31, 2013, at 10:12 PM, Harsh J  wrote:

> A JIRA was posted by Azuryy for this at
> https://issues.apache.org/jira/browse/HDFS-4654.
> 
> On Mon, Apr 1, 2013 at 10:40 AM, Todd Lipcon  wrote:
>> This looks like a bug with the new inode ID code in trunk, rather than a
>> bug with QJM or HA.
>> 
>> Suresh/Brandon, any thoughts?
>> 
>> -Todd
>> 
>> On Sun, Mar 31, 2013 at 6:43 PM, Azuryy Yu  wrote:
>> 
>>> Hi All,
>>> 
>>> I configured HDFS HA using source code from trunk r1463074.
>>> 
>>> I got the following exception when putting a file into HDFS.
>>> 
>>> 13/04/01 09:33:45 WARN retry.RetryInvocationHandler: Exception while
>>> invoking addBlock of class ClientNamenodeProtocolTranslatorPB. Trying to
>>> fail over immediately.
>>> 13/04/01 09:33:45 WARN hdfs.DFSClient: DataStreamer Exception
>>> java.io.FileNotFoundException: ID mismatch. Request id and saved id: 1073 ,
>>> 1050
>>>at
>>> org.apache.hadoop.hdfs.server.namenode.INodeId.checkId(INodeId.java:51)
>>>at
>>> 
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2501)
>>>at
>>> 
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2298)
>>>at
>>> 
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2212)
>>>at
>>> 
>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:498)
>>>at
>>> 
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:356)
>>>at
>>> 
>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40979)
>>>at
>>> 
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:526)
>>>at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1018)
>>>at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1818)
>>>at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1814)
>>>at java.security.AccessController.doPrivileged(Native Method)
>>>at javax.security.auth.Subject.doAs(Subject.java:415)
>>>at
>>> 
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1489)
>>>at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1812)
>>> 
>>> 
>>> To reproduce:
>>> 
>>> hdfs dfs -put test.data  /user/data/test.data
>>> After this command starts running, kill the active NameNode process.
>>> 
>>> 
>>> I have only three nodes (A, B, C) for testing.
>>> A and B are NameNodes.
>>> B and C are DataNodes.
>>> ZK is deployed on A, B and C.
>>> 
>>> A, B and C are all JournalNodes.
>>> 
>>> Thanks.
>> 
>> 
>> 
>> --
>> Todd Lipcon
>> Software Engineer, Cloudera
> 
> 
> 
> -- 
> Harsh J


Re: Exception with QJM HDFS HA

2013-03-31 Thread Harsh J
A JIRA was posted by Azuryy for this at
https://issues.apache.org/jira/browse/HDFS-4654.

On Mon, Apr 1, 2013 at 10:40 AM, Todd Lipcon  wrote:
> This looks like a bug with the new inode ID code in trunk, rather than a
> bug with QJM or HA.
>
> Suresh/Brandon, any thoughts?
>
> -Todd
>
> On Sun, Mar 31, 2013 at 6:43 PM, Azuryy Yu  wrote:
>
>> Hi All,
>>
>> I configured HDFS HA using source code from trunk r1463074.
>>
>> I got the following exception when putting a file into HDFS.
>>
>> 13/04/01 09:33:45 WARN retry.RetryInvocationHandler: Exception while
>> invoking addBlock of class ClientNamenodeProtocolTranslatorPB. Trying to
>> fail over immediately.
>> 13/04/01 09:33:45 WARN hdfs.DFSClient: DataStreamer Exception
>> java.io.FileNotFoundException: ID mismatch. Request id and saved id: 1073 ,
>> 1050
>> at
>> org.apache.hadoop.hdfs.server.namenode.INodeId.checkId(INodeId.java:51)
>> at
>>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2501)
>> at
>>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2298)
>> at
>>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2212)
>> at
>>
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:498)
>> at
>>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:356)
>> at
>>
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40979)
>> at
>>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:526)
>> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1018)
>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1818)
>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1814)
>> at java.security.AccessController.doPrivileged(Native Method)
>> at javax.security.auth.Subject.doAs(Subject.java:415)
>> at
>>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1489)
>> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1812)
>>
>>
>> To reproduce:
>>
>> hdfs dfs -put test.data  /user/data/test.data
>> After this command starts running, kill the active NameNode process.
>>
>>
>> I have only three nodes (A, B, C) for testing.
>> A and B are NameNodes.
>> B and C are DataNodes.
>> ZK is deployed on A, B and C.
>>
>> A, B and C are all JournalNodes.
>>
>> Thanks.
>>
>
>
>
> --
> Todd Lipcon
> Software Engineer, Cloudera



-- 
Harsh J


Re: Exception with QJM HDFS HA

2013-03-31 Thread Todd Lipcon
This looks like a bug with the new inode ID code in trunk, rather than a
bug with QJM or HA.

Suresh/Brandon, any thoughts?

-Todd

On Sun, Mar 31, 2013 at 6:43 PM, Azuryy Yu  wrote:

> Hi All,
>
> I configured HDFS HA using source code from trunk r1463074.
>
> I got the following exception when putting a file into HDFS.
>
> 13/04/01 09:33:45 WARN retry.RetryInvocationHandler: Exception while
> invoking addBlock of class ClientNamenodeProtocolTranslatorPB. Trying to
> fail over immediately.
> 13/04/01 09:33:45 WARN hdfs.DFSClient: DataStreamer Exception
> java.io.FileNotFoundException: ID mismatch. Request id and saved id: 1073 ,
> 1050
> at
> org.apache.hadoop.hdfs.server.namenode.INodeId.checkId(INodeId.java:51)
> at
>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2501)
> at
>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2298)
> at
>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2212)
> at
>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:498)
> at
>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:356)
> at
>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40979)
> at
>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:526)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1018)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1818)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1814)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at
>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1489)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1812)
>
>
> To reproduce:
>
> hdfs dfs -put test.data  /user/data/test.data
> After this command starts running, kill the active NameNode process.
>
>
> I have only three nodes (A, B, C) for testing.
> A and B are NameNodes.
> B and C are DataNodes.
> ZK is deployed on A, B and C.
>
> A, B and C are all JournalNodes.
>
> Thanks.
>



-- 
Todd Lipcon
Software Engineer, Cloudera


[jira] [Created] (HDFS-4654) FileNotFoundException: ID mismatch

2013-03-31 Thread Fengdong Yu (JIRA)
Fengdong Yu created HDFS-4654:
-

 Summary: FileNotFoundException: ID mismatch
 Key: HDFS-4654
 URL: https://issues.apache.org/jira/browse/HDFS-4654
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, namenode
Affects Versions: 3.0.0
Reporter: Fengdong Yu
 Fix For: 3.0.0


My cluster was built from trunk source code (r1463074).

I got the following exception when putting a file into HDFS.

13/04/01 09:33:45 WARN retry.RetryInvocationHandler: Exception while invoking 
addBlock of class ClientNamenodeProtocolTranslatorPB. Trying to fail over 
immediately.
13/04/01 09:33:45 WARN hdfs.DFSClient: DataStreamer Exception
java.io.FileNotFoundException: ID mismatch. Request id and saved id: 1073 , 1050
	at org.apache.hadoop.hdfs.server.namenode.INodeId.checkId(INodeId.java:51)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2501)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2298)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2212)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:498)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:356)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40979)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:526)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1018)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1818)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1814)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1489)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1812)
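For context, the check that throws here (INodeId.java:51) amounts to comparing the inode ID the client carried in its RPC against the ID currently saved for the file. The following is a simplified sketch of that comparison, not the actual Hadoop source; the class and method names are illustrative only:

```java
import java.io.FileNotFoundException;

// Simplified sketch of the inode-ID sanity check behind the
// "ID mismatch" error (modeled on INodeId.checkId; not the real source).
public class InodeIdCheck {

    // Throws when the ID the client sent with its addBlock RPC no longer
    // matches the ID saved for the file -- e.g. when a failover leaves the
    // file associated with a different inode ID than the client remembers.
    static void checkId(long requestId, long savedId) throws FileNotFoundException {
        if (requestId != savedId) {
            throw new FileNotFoundException(
                "ID mismatch. Request id and saved id: " + requestId + " , " + savedId);
        }
    }

    public static void main(String[] args) {
        try {
            checkId(1050, 1050);   // matching IDs pass silently
            checkId(1073, 1050);   // mismatch, as in the report above
        } catch (FileNotFoundException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

With the IDs from the log (request 1073, saved 1050), the mismatch branch fires, which matches the exception message in the stack trace.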


To reproduce:

hdfs dfs -put test.data  /user/data/test.data
After this command starts running, kill the active NameNode process.
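The reproduction above can be written out as a dry-run script. It only prints each command, so it is safe to execute anywhere; on a real HA cluster you would replace `run` with direct execution. The NameNode service ID `nn1` and the pid placeholder are assumptions for illustration, not values from the report:

```shell
#!/bin/sh
# Dry-run sketch of the reproduction steps. 'run' only echoes the command;
# swap it for direct execution on a real HA cluster.
run() { echo "+ $*"; }

run hdfs dfs -put test.data /user/data/test.data   # start the write
run hdfs haadmin -getServiceState nn1              # confirm which NameNode is active
run kill -9 '<active-namenode-pid>'                # kill the active NameNode mid-write
```

The client should then attempt failover to the standby, which is where the ID mismatch surfaces.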


I have only three nodes (A, B, C) for testing.
A and B are NameNodes.
B and C are DataNodes.
ZK is deployed on A, B and C.

A, B and C are all JournalNodes.

Thanks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4637) INodeDirectory#replaceSelf4Quota may convert a newly created directory (which is not included in the latest snapshot) to an INodeDirectoryWithSnapshot

2013-03-31 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsz Wo (Nicholas), SZE resolved HDFS-4637.
--

   Resolution: Fixed
Fix Version/s: Snapshot (HDFS-2802)

I have committed this.  Thanks, Jing!

> INodeDirectory#replaceSelf4Quota may convert a newly created directory (which 
> is not included in the latest snapshot) to an INodeDirectoryWithSnapshot
> --
>
> Key: HDFS-4637
> URL: https://issues.apache.org/jira/browse/HDFS-4637
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: HDFS-4637.001.patch, HDFS-4637.002.patch
>
>
> In INodeDirectory#replaceSelf4Quota, we convert the target node to 
> INodeDirectoryWithSnapshot when the latest snapshot is not null. This may 
> convert a directory, which was created after taking the latest snapshot, to 
> an INodeDirectoryWithSnapshot. We thus should use INode#isInLatestSnapshot 
> for checking here.
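The condition change described above can be illustrated with a toy model. The class below is not the actual Hadoop INodeDirectory code; it only mirrors the intent of using INode#isInLatestSnapshot, with simplified, hypothetical names:

```java
// Toy model of the HDFS-4637 fix (simplified names; not Hadoop source):
// only wrap a directory for snapshot tracking if it actually existed
// when the latest snapshot was taken.
public class SnapshotConvertSketch {

    static class Dir {
        final long createdAtTxn;          // transaction at which the dir was created
        boolean snapshotTracked = false;  // stand-in for INodeDirectoryWithSnapshot

        Dir(long createdAtTxn) { this.createdAtTxn = createdAtTxn; }

        // Mirrors the intent of INode#isInLatestSnapshot: the directory is in
        // the latest snapshot only if it existed when that snapshot was taken.
        boolean isInLatestSnapshot(long latestSnapshotTxn) {
            return createdAtTxn <= latestSnapshotTxn;
        }

        // Fixed behavior: convert only when the dir is in the latest snapshot.
        // (The bug converted whenever any latest snapshot existed at all.)
        void maybeConvert(long latestSnapshotTxn) {
            if (isInLatestSnapshot(latestSnapshotTxn)) {
                snapshotTracked = true;
            }
        }
    }

    public static void main(String[] args) {
        Dir oldDir = new Dir(10);    // existed before the snapshot at txn 50
        Dir newDir = new Dir(100);   // created after the snapshot at txn 50
        oldDir.maybeConvert(50);
        newDir.maybeConvert(50);
        System.out.println(oldDir.snapshotTracked + " " + newDir.snapshotTracked);
        // prints "true false": only the pre-snapshot directory is converted
    }
}
```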

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Build failed in Jenkins: Hadoop-Hdfs-trunk #1360

2013-03-31 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1360/

--
[...truncated 10069 lines...]
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.061 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.164 sec
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.348 sec
Running org.apache.hadoop.hdfs.TestBlockReaderLocal
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.569 sec
Running org.apache.hadoop.hdfs.TestHftpURLTimeouts
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.894 sec
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.477 sec
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.132 sec
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.849 sec
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.963 sec
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.366 sec
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.516 sec
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.306 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.83 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 39, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.479 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.063 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.254 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 39, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.984 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.108 sec
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.774 sec
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.075 sec
Running org.apache.hadoop.fs.TestFcHdfsSymlink
Tests run: 69, Failures: 4, Errors: 0, Skipped: 0, Time elapsed: 10.686 sec <<< FAILURE!
testCreateWithPartQualPathFails(org.apache.hadoop.fs.TestFcHdfsSymlink)  Time elapsed: 39 sec  <<< FAILURE!
java.lang.AssertionError: HDFS requires URIs with schemes have an authority
at org.junit.Assert.fail(Assert.java:91)
at 
org.apache.hadoop.fs.TestFcHdfsSymlink.testCreateWithPartQualPathFails(TestFcHdfsSymlink.java:240)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at 
org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at org.junit.runners.Paren

Hadoop-Hdfs-trunk - Build # 1360 - Still Failing

2013-03-31 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1360/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 10262 lines...]
at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at 
org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at 
org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.155 sec
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 62, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.365 sec
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.066 sec

Results :

Failed tests:
  testCreateWithPartQualPathFails(org.apache.hadoop.fs.TestFcHdfsSymlink): HDFS requires URIs with schemes have an authority
  testCreateLinkUsingRelPaths(org.apache.hadoop.fs.TestFcHdfsSymlink)
  testCreateLinkUsingAbsPaths(org.apache.hadoop.fs.TestFcHdfsSymlink)
  testCreateLinkUsingFullyQualPaths(org.apache.hadoop.fs.TestFcHdfsSymlink)

Tests run: 1735, Failures: 4, Errors: 0, Skipped: 6

[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE [1:20:36.225s]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS Project  SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:20:37.005s
[INFO] Finished at: Sun Mar 31 12:53:50 UTC 2013
[INFO] Final Memory: 24M/554M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.