[jira] [Created] (HDFS-4784) NPE in FSDirectory.resolvePath()

2013-04-30 Thread Brandon Li (JIRA)
Brandon Li created HDFS-4784:


 Summary: NPE in FSDirectory.resolvePath()
 Key: HDFS-4784
 URL: https://issues.apache.org/jira/browse/HDFS-4784
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li


The NameNode can hit an NPE when resolving an inode id path for a nonexistent file.
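
A minimal sketch of the kind of guard this implies (hypothetical; the helper
names are illustrative, not the committed patch):

{noformat}
// Hypothetical sketch for HDFS-4784, not the actual fix.  Assumes an
// id-based lookup that returns null when no inode has the given id.
static String resolveInodePath(FSDirectory fsd, long inodeId, String src)
    throws FileNotFoundException {
  INode inode = fsd.getInode(inodeId);   // assumed lookup helper
  if (inode == null) {
    // A missing id must surface as FileNotFoundException, not an NPE.
    throw new FileNotFoundException("No inode found for path: " + src);
  }
  return inode.getFullPathName();
}
{noformat}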

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4783) TestDelegationTokensWithHA#testHAUtilClonesDelegationTokens fails on Windows

2013-04-30 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-4783:
---

 Summary: 
TestDelegationTokensWithHA#testHAUtilClonesDelegationTokens fails on Windows
 Key: HDFS-4783
 URL: https://issues.apache.org/jira/browse/HDFS-4783
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0, 2.0.5-beta
Reporter: Chris Nauroth
Assignee: Chris Nauroth


This test asserts that delegation tokens previously associated with a host's
resolved IP address no longer match for selection when
hadoop.security.token.service.use_ip is set to false.  The test assumes that
127.0.0.1 resolves to the host name "localhost".  On Windows, this is not the
case; it instead resolves to "127.0.0.1".
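
A small illustration of the platform-dependent assumption, using only
standard java.net APIs (output depends on the OS resolver configuration):

{noformat}
import java.net.InetAddress;

public class ReverseLookupDemo {
  public static void main(String[] args) throws Exception {
    InetAddress addr = InetAddress.getByName("127.0.0.1");
    // On many Linux hosts this prints "localhost"; on Windows it can
    // print "127.0.0.1", which breaks the test's host-name assertion.
    System.out.println(addr.getHostName());
  }
}
{noformat}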



Re: Heads up - 2.0.5-beta

2013-04-30 Thread Konstantin Shvachko
Hi Arun,

I am agnostic about version numbers too, as long as the count goes up.
The discussion you are referring to is somewhat outdated; it was talking
about 2.0.4-beta, which we have already passed. It talks about producing a
series "not suitable for general consumption", which isn't correct for the
latest release, 2.0.4. That discussion clearly outlined general (or
specific) frustration about breaking compatibility from top-level projects.

You are not listing new features for MR and YARN,
so it will only be about the four HDFS features Suresh proposed for 2.0.5.
As I said earlier, my problem with them is that each is big enough to
destabilize the code base, and big enough to be targeted for a separate
release. The latter relates to the "streamlining" thread on general@.
I also think the proposed features will delay stable 2.x beyond the
time-frame you projected, because some of them are not implemented yet, and
the state of Windows support is unknown to me, as integration builds are
still not run for it.

If the next release has to be 2.0.5 I would like to make an alternative
proposal, which would include
- stabilization of current 2.0.4
- making all API changes to allow freezing them post 2.0.5
And nothing else.

We can add new features in subsequent releases. Potentially we can
end up in the same place as you proposed, but with more certainty along the
road.
The main reason I am asking for stabilization is to make it available for
large installations such as Yahoo sooner. And this will require a commitment
to compatibility, as Bobby mentioned on several occasions.

As a rule of thumb, compatibility for me means that I can do a rolling
upgrade on the cluster. More formal definitions, like Karthik's
Compatibility page, are better. BigTop's integration testing has proved to be
very productive.

Thanks,
--Konstantin


On Fri, Apr 26, 2013 at 6:06 PM, Arun C Murthy  wrote:

> Konstantin,
>
> On Apr 26, 2013, at 4:34 PM, Konstantin Shvachko wrote:
>
> > Do you think we can call the version you proposed to release
> > 2.1.0 or 2.1.0-beta?
> >
> > The proposed new features imho do not exactly conform to the idea
> > of a dot-dot release, but they definitely qualify for a major number change.
> > I am just trying to avoid the rather ugly 2.0.4.1 versions, which of course
> > are also possible.
>
> I'm agnostic to the schemes.
>
> During the long discussion we had just 2 months ago, I proposed that 2.1.x
> be the beta series initially.
>
> The feedback and consensus was that it wasn't the right numbering scheme:
> http://s.apache.org/1j4
>
> thanks,
> Arun
>


[jira] [Resolved] (HDFS-4758) Disallow nested snapshottable directories

2013-04-30 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-4758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsz Wo (Nicholas), SZE resolved HDFS-4758.
--

   Resolution: Fixed
Fix Version/s: Snapshot (HDFS-2802)
 Hadoop Flags: Reviewed

Thanks Jing for reviewing the patch.

I have committed this.

> Disallow nested snapshottable directories
> -
>
> Key: HDFS-4758
> URL: https://issues.apache.org/jira/browse/HDFS-4758
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: h4758_20130426.patch, h4758_20130429b.patch, 
> h4758_20130429.patch, h4758_20130430.patch
>
>
> Nested snapshottable directories are supported by the current implementation.
> However, there seem to be no good use cases for nested snapshottable
> directories, so we disallow them for now, until someone has a valid use case
> for them.
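
An illustrative sketch of the ancestor check such a restriction implies
(class and method names here are hypothetical, not the committed h4758 patch):

{noformat}
// Hypothetical sketch: before allowing snapshots on "dir", reject the
// request if any ancestor is already snapshottable.
static void checkNotNested(INodeDirectory dir) throws SnapshotException {
  for (INodeDirectory ancestor = dir.getParent(); ancestor != null;
       ancestor = ancestor.getParent()) {
    if (ancestor.isSnapshottable()) {
      throw new SnapshotException("Nested snapshottable directories are not"
          + " allowed: " + dir.getFullPathName() + " is inside "
          + ancestor.getFullPathName());
    }
  }
  // A symmetric walk over descendants would reject the opposite nesting case.
}
{noformat}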



[jira] [Resolved] (HDFS-4760) Update inodeMap after node replacement

2013-04-30 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-4760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsz Wo (Nicholas), SZE resolved HDFS-4760.
--

   Resolution: Fixed
Fix Version/s: Snapshot (HDFS-2802)

I have committed this.  Thanks, Jing!

> Update inodeMap after node replacement
> --
>
> Key: HDFS-4760
> URL: https://issues.apache.org/jira/browse/HDFS-4760
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: HDFS-4760.001.patch, HDFS-4760.002.patch, 
> HDFS-4760.003.patch, HDFS-4760.004.patch, HDFS-4760.005.patch
>
>
> Similar to HDFS-4757, we need to update the inodeMap after node 
> replacement. Because a lot of node replacement happens in the snapshot branch 
> (e.g., INodeDirectory => INodeDirectoryWithSnapshot, INodeDirectory <=> 
> INodeDirectorySnapshottable, INodeFile => INodeFileWithSnapshot ...), this 
> becomes a non-trivial issue.
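
A sketch of the invariant being maintained (hypothetical helper; assumes a
map keyed by inode id with put/remove operations, not the actual patch):

{noformat}
// Hypothetical sketch for HDFS-4760: when an inode object is replaced by
// a new representation (same id, different class), re-register it so that
// id-based lookups stop returning the stale object.
void replaceINode(INode oldNode, INode newNode) {
  assert oldNode.getId() == newNode.getId();
  inodeMap.remove(oldNode);   // drop the stale entry
  inodeMap.put(newNode);      // map the id to the replacement object
}
{noformat}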



[jira] [Created] (HDFS-4782) backport edit log toleration to branch-1-win

2013-04-30 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-4782:
---

 Summary: backport edit log toleration to branch-1-win
 Key: HDFS-4782
 URL: https://issues.apache.org/jira/browse/HDFS-4782
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 1-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth


HDFS-3521 made changes to management of the edits log to prevent certain cases 
of corruption.  This issue tracks backporting those changes to branch-1-win.



[jira] [Created] (HDFS-4781) File listing of .snapshot under a non-existing dir throws NullPointer

2013-04-30 Thread Ramya Sunil (JIRA)
Ramya Sunil created HDFS-4781:
-

 Summary: File listing of .snapshot under a non-existing dir throws 
NullPointer
 Key: HDFS-4781
 URL: https://issues.apache.org/jira/browse/HDFS-4781
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ramya Sunil
 Fix For: Snapshot (HDFS-2802)


$ hadoop dfs -ls /invalidDir/.snapshot
ls: java.io.IOException: java.lang.NullPointerException

{noformat}
INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 8020, call getFileInfo(/invalidDir/.snapshot) from .* : error: java.io.IOException: java.lang.NullPointerException
java.io.IOException: java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.getFileInfo4DotSnapshot(FSDirectory.java:1208)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.getFileInfo(FSDirectory.java:1189)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:2545)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getFileInfo(NameNode.java:949)
    at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1405)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1401)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1195)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1399)
{noformat}
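
A minimal sketch of the null guard this calls for (illustrative helper and
method names; not the committed fix):

{noformat}
// Hypothetical guard: when resolving "<dir>/.snapshot", a nonexistent
// parent directory must yield "not found" rather than an NPE.
static HdfsFileStatus getFileInfo4DotSnapshot(FSDirectory fsd, String dirPath) {
  INode node = fsd.getINode(dirPath);    // assumed lookup helper
  if (node == null || !node.isDirectory()) {
    return null;                         // caller maps null to "not found"
  }
  // ... only then consult the directory's snapshot information ...
  return createFileStatus(node);         // illustrative
}
{noformat}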



[jira] [Resolved] (HDFS-4776) Backport SecondaryNameNode web ui to branch-1

2013-04-30 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-4776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsz Wo (Nicholas), SZE resolved HDFS-4776.
--

   Resolution: Fixed
Fix Version/s: 1.2.0
 Hadoop Flags: Reviewed

I have committed this.

Thanks, Chris and Suresh for reviewing this.

> Backport SecondaryNameNode web ui to branch-1
> -
>
> Key: HDFS-4776
> URL: https://issues.apache.org/jira/browse/HDFS-4776
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>Priority: Minor
> Fix For: 1.2.0
>
> Attachments: h4776_20130429.patch
>
>
> The related JIRAs are
> - HADOOP-3741: SecondaryNameNode has http server on 
> dfs.secondary.http.address but without any contents 
> - HDFS-1728: SecondaryNameNode.checkpointSize is in byte but not MB.



[jira] [Created] (HDFS-4780) Use the correct relogin method for services

2013-04-30 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-4780:


 Summary: Use the correct relogin method for services
 Key: HDFS-4780
 URL: https://issues.apache.org/jira/browse/HDFS-4780
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.8, 3.0.0, 2.0.5-beta
Reporter: Kihwal Lee


A number of components call reloginFromKeytab() before making requests. For 
StandbyCheckpointer and SecondaryNameNode, where this can be called frequently, 
it generates many WARN messages like this:

WARN security.UserGroupInformation: Not attempting to re-login since the last 
re-login was attempted less than 600 seconds before.

Other than these messages, it doesn't do anything wrong. But it would be nice
if it were changed to call checkTGTAndReloginFromKeytab() to avoid the
potentially misleading WARN messages.
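
A sketch of the suggested call pattern (UserGroupInformation is in
hadoop-common; the exact call sites vary by service):

{noformat}
import org.apache.hadoop.security.UserGroupInformation;

// Before issuing a request from a long-running service:
UserGroupInformation ugi = UserGroupInformation.getLoginUser();
// Instead of ugi.reloginFromKeytab(), which logs the WARN above when
// invoked inside the minimum relogin window, check the TGT first and
// relogin only if it is close to expiring:
ugi.checkTGTAndReloginFromKeytab();
{noformat}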



Build failed in Jenkins: Hadoop-Hdfs-trunk #1388

2013-04-30 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1388/
Changes:

[atm] Move the CHANGES.txt entry for HDFS-4305 to the incompatible changes section.

[vinodkv] YARN-599. Refactoring submitApplication in ClientRMService and RMAppManager to separate out various validation checks depending on whether they rely on RM configuration or not. Contributed by Zhijie Shen.

[harsh] HADOOP-9322. LdapGroupsMapping doesn't seem to set a timeout for its directory search. Contributed by Harsh J. (harsh)

[suresh] HDFS-4610. Use common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute. Contributed by Ivan Mitic.

[suresh] YARN-506. Move to common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute. Contributed by Ivan Mitic.

[suresh] MAPREDUCE-5177. Use common utils FileUtil#setReadable/Writable/Executable & FileUtil#canRead/Write/Execute. Contributed by Ivan Mitic.

[suresh] HDFS-4610. Reverting the patch, as the Jenkins build was not run.

[suresh] HDFS-4610. Use common utils FileUtil#setReadable/Writable/Executable & FileUtil#canRead/Write/Execute. Contributed by Ivan Mitic.

[suresh] HADOOP-9413. Add common utils for File#setReadable/Writable/Executable & File#canRead/Write/Execute that work cross-platform. Contributed by Ivan Mitic.

[atm] HDFS-4305. Add a configurable limit on number of blocks per file, and min block size. Contributed by Andrew Wang.

[atm] HDFS-4687. TestDelegationTokenForProxyUser#testWebHdfsDoAs is flaky with JDK7. Contributed by Andrew Wang.

[atm] HDFS-4733. Make HttpFS username pattern configurable. Contributed by Alejandro Abdelnur.

--
[...truncated 10059 lines...]
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.585 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDiskError
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.597 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.604 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.884 sec
Running org.apache.hadoop.hdfs.server.common.TestGetUriFromString
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.141 sec
Running org.apache.hadoop.hdfs.server.common.TestJspHelper
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.104 sec
Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.193 sec
Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.126 sec
Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.687 sec
Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 58.353 sec
Running org.apache.hadoop.hdfs.server.balancer.TestBalancer
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.254 sec
Running org.apache.hadoop.hdfs.server.journalservice.TestJournalService
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.308 sec
Running org.apache.hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.879 sec
Running org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeDescriptor
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.143 sec
Running org.apache.hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.19 sec
Running org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks
Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 46.904 sec <<< FAILURE!
testReduceReplFactorDueToRejoinRespectsRackPolicy(org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks)  Time elapsed: 3673 sec  <<< FAILURE!
java.lang.AssertionError: Test resulted in an unexpected exit
    at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1416)
    at org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testReduceReplFactorDueToRejoinRespectsRackPolicy(TestBlocksWithNotEnoughRacks.java:373)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    at org.junit.internal.runners.model.Reflect

Hadoop-Hdfs-trunk - Build # 1388 - Still Failing

2013-04-30 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1388/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 10252 lines...]
Running org.apache.hadoop.hdfs.TestHftpFileSystem
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.299 sec
Running org.apache.hadoop.hdfs.TestFSInputChecker
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.487 sec
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.789 sec
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.565 sec
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.184 sec
Running org.apache.hadoop.tools.TestDelegationTokenFetcher
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.133 sec
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.228 sec

Results :

Failed tests:   
  testReduceReplFactorDueToRejoinRespectsRackPolicy(org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks): Test resulted in an unexpected exit

Tests run: 1791, Failures: 1, Errors: 0, Skipped: 34

[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE [1:21:53.339s]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS Project  SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:21:54.105s
[INFO] Finished at: Tue Apr 30 12:56:09 UTC 2013
[INFO] Final Memory: 17M/387M
[INFO] 
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating MAPREDUCE-5177
Updating HDFS-4733
Updating HADOOP-9413
Updating HDFS-4305
Updating HADOOP-9322
Updating YARN-599
Updating HDFS-4687
Updating HDFS-4610
Updating YARN-506
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.

Jenkins build became unstable: Hadoop-Hdfs-0.23-Build #597

2013-04-30 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/597/



Hadoop-Hdfs-0.23-Build - Build # 597 - Unstable

2013-04-30 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/597/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 10069 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-install-plugin:2.3.1:install (default-install) @ hadoop-hdfs-project ---
[INFO] Installing /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/pom.xml to /home/jenkins/.m2/repository/org/apache/hadoop/hadoop-hdfs-project/0.23.8-SNAPSHOT/hadoop-hdfs-project-0.23.8-SNAPSHOT.pom
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-dependency-plugin:2.1:build-classpath (build-classpath) @ hadoop-hdfs-project ---
[INFO] No dependencies found.
[INFO] Skipped writing classpath file '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/target/classes/mrapp-generated-classpath'.  No changes found.
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  SUCCESS [5:00.358s]
[INFO] Apache Hadoop HttpFS .. SUCCESS [48.910s]
[INFO] Apache Hadoop HDFS Project  SUCCESS [0.115s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 5:50.005s
[INFO] Finished at: Tue Apr 30 11:39:15 UTC 2013
[INFO] Final Memory: 52M/745M
[INFO] 
+ /home/jenkins/tools/maven/latest/bin/mvn test -Dmaven.test.failure.ignore=true -Pclover -DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Archiving artifacts
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Publishing Javadoc
Recording fingerprints
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Unstable
Sending email for trigger: Unstable



###
## FAILED TESTS (if any) 
##
2 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestDatanodeBlockScanner.testBlockCorruptionRecoveryPolicy2

Error Message:
Timed out waiting for corrupt replicas. Waiting for 2, but only found 0

Stack Trace:
java.util.concurrent.TimeoutException: Timed out waiting for corrupt replicas. Waiting for 2, but only found 0
    at org.apache.hadoop.hdfs.DFSTestUtil.waitCorruptReplicas(DFSTestUtil.java:330)
    at org.apache.hadoop.hdfs.TestDatanodeBlockScanner.blockCorruptionRecoveryPolicy(TestDatanodeBlockScanner.java:288)
    at org.apache.hadoop.hdfs.TestDatanodeBlockScanner.__CLR3_0_2t1dvac10c7(TestDatanodeBlockScanner.java:242)
    at org.apache.hadoop.hdfs.TestDatanodeBlockScanner.testBlockCorruptionRecoveryPolicy2(TestDatanodeBlockScanner.java:239)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at junit.framework.TestCase.runTest(TestCase.java:168)
    at junit.framework.TestCase.runBare(TestCase.java:134)
    at junit.framework.TestResult$1.protect(TestResult.java:110)
    at juni