Jenkins build became unstable: Hadoop-Hdfs-0.23-Build #690

2013-08-05 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/690/

Hadoop-Hdfs-0.23-Build - Build # 690 - Unstable

2013-08-05 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/690/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 11848 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-install-plugin:2.3.1:install (default-install) @ 
hadoop-hdfs-project ---
[INFO] Installing 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/pom.xml
 to 
/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-hdfs-project/0.23.10-SNAPSHOT/hadoop-hdfs-project-0.23.10-SNAPSHOT.pom
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-dependency-plugin:2.1:build-classpath (build-classpath) @ 
hadoop-hdfs-project ---
[INFO] No dependencies found.
[INFO] Skipped writing classpath file 
'/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/target/classes/mrapp-generated-classpath'.
  No changes found.
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ SUCCESS [4:55.526s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [48.048s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.063s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 5:44.228s
[INFO] Finished at: Mon Aug 05 11:38:50 UTC 2013
[INFO] Final Memory: 51M/736M
[INFO] 
+ /home/jenkins/tools/maven/latest/bin/mvn test 
-Dmaven.test.failure.ignore=true -Pclover 
-DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Archiving artifacts
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Publishing Javadoc
Recording fingerprints
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Unstable
Sending email for trigger: Unstable



###
## FAILED TESTS (if any) 
###
3 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.TestDatanodeBlockScanner.testBlockCorruptionRecoveryPolicy1

Error Message:
Timed out waiting for corrupt replicas. Waiting for 1, but only found 0

Stack Trace:
java.util.concurrent.TimeoutException: Timed out waiting for corrupt replicas. 
Waiting for 1, but only found 0
at 
org.apache.hadoop.hdfs.DFSTestUtil.waitCorruptReplicas(DFSTestUtil.java:330)
at 
org.apache.hadoop.hdfs.TestDatanodeBlockScanner.blockCorruptionRecoveryPolicy(TestDatanodeBlockScanner.java:288)
at 
org.apache.hadoop.hdfs.TestDatanodeBlockScanner.__CLR3_0_2wadu2t10ie(TestDatanodeBlockScanner.java:236)
at 
org.apache.hadoop.hdfs.TestDatanodeBlockScanner.testBlockCorruptionRecoveryPolicy1(TestDatanodeBlockScanner.java:233)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at ju
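
The timeout above comes from DFSTestUtil.waitCorruptReplicas, which polls the 
namenode until the expected corrupt-replica count shows up or a deadline 
passes. As a rough illustration of that poll-until-timeout shape (a simplified 
sketch only, not the actual DFSTestUtil code; the helper name and signature 
are invented):

    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;

    public final class WaitSketch {
      /** Simplified stand-in for a count supplier, e.g. the number of
       *  corrupt replicas the namenode currently reports. */
      public interface CountSupplier {
        int get();
      }

      /** Polls until the count reaches expected or timeoutMs elapses,
       *  then fails the same way the test above did. */
      public static void waitForCount(CountSupplier count, int expected,
          long timeoutMs, long pollMs)
          throws TimeoutException, InterruptedException {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
        int seen = count.get();
        while (seen < expected) {
          if (System.nanoTime() > deadline) {
            throw new TimeoutException("Timed out waiting for corrupt replicas. "
                + "Waiting for " + expected + ", but only found " + seen);
          }
          Thread.sleep(pollMs);
          seen = count.get();
        }
      }
    }

On a busy Jenkins slave a wait like this can simply run out of budget before 
the block scanner flags the replica, so a timeout here does not necessarily 
indicate a product bug.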

Re: [VOTE] Release Apache Hadoop 2.1.0-beta

2013-08-05 Thread Tsuyoshi OZAWA
+1 (non-binding)

* verified md5 of the source code.
* built the package from the source code.
* ran basic hdfs commands.
* ran MapReduce example jobs (random text writer, wordcount, and pi)
over YARN on a single node.

- Tsuyoshi

On Tue, Jul 30, 2013 at 10:29 PM, Arun C Murthy  wrote:
> Folks,
>
> I've created another release candidate (rc1) for hadoop-2.1.0-beta that I 
> would like to get released. This RC fixes a number of issues reported on the 
> previous candidate.
>
> This release represents a *huge* amount of work done by the community (~650 
> fixes) which includes several major advances including:
> # HDFS Snapshots
> # Windows support
> # YARN API stabilization
> # MapReduce Binary Compatibility with hadoop-1.x
> # Substantial amount of integration testing with the rest of the projects in 
> the ecosystem
>
> The RC is available at: 
> http://people.apache.org/~acmurthy/hadoop-2.1.0-beta-rc1/
> The RC tag in svn is here: 
> http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.1.0-beta-rc1
>
> The maven artifacts are available via repository.apache.org.
>
> Please try the release and vote; the vote will run for the usual 7 days.
>
> thanks,
> Arun
>
> --
> Arun C. Murthy
> Hortonworks Inc.
> http://hortonworks.com/
>
>


Build failed in Jenkins: Hadoop-Hdfs-trunk #1482

2013-08-05 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1482/

--
[...truncated 15216 lines...]
[INFO] 
[INFO] --- maven-shade-plugin:1.5:shade (default) @ hadoop-hdfs-bkjournal ---
[INFO] Excluding commons-logging:commons-logging:jar:1.1.1 from the shaded jar.
[INFO] Excluding commons-cli:commons-cli:jar:1.2 from the shaded jar.
[INFO] Excluding log4j:log4j:jar:1.2.17 from the shaded jar.
[INFO] Excluding commons-lang:commons-lang:jar:2.5 from the shaded jar.
[INFO] Excluding commons-configuration:commons-configuration:jar:1.6 from the 
shaded jar.
[INFO] Excluding commons-collections:commons-collections:jar:3.2.1 from the 
shaded jar.
[INFO] Excluding commons-digester:commons-digester:jar:1.8 from the shaded jar.
[INFO] Excluding commons-beanutils:commons-beanutils:jar:1.7.0 from the shaded 
jar.
[INFO] Excluding commons-beanutils:commons-beanutils-core:jar:1.8.0 from the 
shaded jar.
[INFO] Excluding org.slf4j:slf4j-api:jar:1.6.1 from the shaded jar.
[INFO] Excluding org.slf4j:slf4j-log4j12:jar:1.6.1 from the shaded jar.
[INFO] Including org.apache.bookkeeper:bookkeeper-server:jar:4.0.0 in the 
shaded jar.
[INFO] Including org.jboss.netty:netty:jar:3.2.4.Final in the shaded jar.
[INFO] Including org.apache.zookeeper:zookeeper:jar:3.4.2 in the shaded jar.
[INFO] Excluding jline:jline:jar:0.9.94 from the shaded jar.
[INFO] Excluding com.google.guava:guava:jar:11.0.2 from the shaded jar.
[INFO] Excluding com.google.code.findbugs:jsr305:jar:1.3.9 from the shaded jar.
[INFO] Replacing original artifact with shaded artifact.
[INFO] Replacing 

 with 

[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-bkjournal ---
[INFO] 
[INFO] There are 324 checkstyle errors.
[WARNING] Unable to locate Source XRef to link to - DISABLED
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-bkjournal ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is true
[INFO] ** FindBugsMojo executeFindbugs ***
[INFO] Temp File is 

[INFO] Fork Value is true
[INFO] xmlOutput is false
[INFO] 
[INFO] 
[INFO] Building Apache Hadoop HDFS-NFS 3.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-nfs ---
[INFO] Deleting 

[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-nfs ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 

[mkdir] Created dir: 

[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.2:resources (default-resources) @ 
hadoop-hdfs-nfs ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ 
hadoop-hdfs-nfs ---
[INFO] Compiling 12 source files to 

[INFO] 
[INFO] --- maven-resources-plugin:2.2:testResources (default-testResources) @ 
hadoop-hdfs-nfs ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ 
hadoop-hdfs-nfs ---
[INFO] Compiling 7 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.12.3:test (default-test) @ hadoop-hdfs-nfs 
---
[INFO] Surefire report directory: 


---
 T E S T S
---

---
 T E S T S
---
Running org.apache.hadoop.hdfs.nfs.nfs3.TestOffsetRange
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.057 sec
Running org.apache.hadoop.hdfs.nfs.nfs3.TestRpcProgramNfs3
Te

Hadoop-Hdfs-trunk - Build # 1482 - Still Failing

2013-08-05 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1482/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 15409 lines...]
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, 
no dependency information available
[WARNING] Failed to retrieve plugin descriptor for 
org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin 
org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be 
resolved: Failed to read artifact descriptor for 
org.eclipse.m2e:lifecycle-mapping:jar:1.0.0
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.0:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ SUCCESS [1:41:04.797s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [2:31.661s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [1:21.824s]
[INFO] Apache Hadoop HDFS-NFS ............................ FAILURE [25.524s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.037s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:45:24.684s
[INFO] Finished at: Mon Aug 05 13:18:51 UTC 2013
[INFO] Final Memory: 54M/822M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-checkstyle-plugin:2.6:checkstyle (default-cli) 
on project hadoop-hdfs-nfs: An error has occurred in Checkstyle report 
generation. Failed during checkstyle execution: Unable to find configuration 
file at location 
file:///home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/dev-support/checkstyle.xml:
 Could not find resource 
'file:///home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/dev-support/checkstyle.xml'.
 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs-nfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
###
No tests ran.

[jira] [Created] (HDFS-5066) Inode tree with snapshot information visualization

2013-08-05 Thread Binglin Chang (JIRA)
Binglin Chang created HDFS-5066:
---

 Summary: Inode tree with snapshot information visualization 
 Key: HDFS-5066
 URL: https://issues.apache.org/jira/browse/HDFS-5066
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Binglin Chang
Priority: Minor


It would be nice to be able to visualize snapshot information, in order to 
ease the understanding of the related data structures. We can generate a graph 
from the in-memory inode links.
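
A minimal sketch of what such a visualization could look like, walking an 
inode tree and emitting Graphviz DOT (the Inode class here is a hypothetical 
stand-in, not the real HDFS INode):

    import java.util.ArrayList;
    import java.util.List;

    /** Hypothetical, simplified inode; stand-in for the real HDFS INode. */
    class Inode {
      final String name;
      final List<Inode> children = new ArrayList<Inode>();
      Inode(String name) { this.name = name; }
    }

    public class InodeDotDumper {
      /** Emits a Graphviz DOT digraph for the subtree rooted at root. */
      public static String toDot(Inode root) {
        StringBuilder sb = new StringBuilder("digraph inodes {\n");
        appendEdges(root, sb);
        return sb.append("}\n").toString();
      }

      private static void appendEdges(Inode node, StringBuilder sb) {
        for (Inode child : node.children) {
          sb.append("  \"").append(node.name).append("\" -> \"")
            .append(child.name).append("\";\n");
          appendEdges(child, sb);
        }
      }
    }

The DOT output can then be rendered with the standard dot tool; snapshot 
copies could be drawn as extra nodes linked back to their originals.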


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-5057) NameSystem.addStoredBlock: addStoredBlock request received for blk_-8546297170266610178_1223147 on 192.168.10.44:40010 size 134217728 but was rejected: Block not in block

2013-08-05 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HDFS-5057.
--

Resolution: Invalid

> NameSystem.addStoredBlock: addStoredBlock request received for 
> blk_-8546297170266610178_1223147 on 192.168.10.44:40010 size 134217728 but 
> was rejected: Block not in blockMap with any generation stamp
> ---
>
> Key: HDFS-5057
> URL: https://issues.apache.org/jira/browse/HDFS-5057
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: chojuil
>
> In some cases, the following symptoms occur:
> ---
> 2013-08-02 00:09:16,426 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.addStoredBlock: addStoredBlock request received for 
> blk_-8546297170266610178_1223147 on 192.168.10.44:40010 size 134217728 but 
> was rejected: Block not in blockMap with any generation stamp
> 2013-08-02 00:09:16,426 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.addToInvalidates: blk_-8546297170266610178 to 192.168.10.44:40010
> 2013-08-02 00:09:16,429 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.addStoredBlock: addStoredBlock request received for 
> blk_-8546297170266610178_1223147 on 192.168.10.23:40010 size 134217728 but 
> was rejected: Block not in blockMap with any generation stamp
> 2013-08-02 00:09:16,429 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.addToInvalidates: blk_-8546297170266610178 to 192.168.10.23:40010
> 2013-08-02 00:09:16,468 ERROR 
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException 
> as:hdfsuser 
> cause:org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease 
> on /hdfsroot/20130802/1214110.0 File does not exist. [Lease.  Holder: 
> DFSClient_1545724836, pendingcreates: 4]
> 2013-08-02 00:09:16,468 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 7 on 4, call addBlock(/hdfsroot/20130802/1214110.0, DFSClient_1545724836, 
> null) from 192.168.10.23:60071: error: 
> org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on 
> /hdfsroot/20130802/1214110.0 File does not exist. [Lease.  Holder: 
> DFSClient_1545724836, pendingcreates: 4]
> org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on 
> /hdfsroot/20130802/1214110.0 File does not exist. [Lease.  Holder: 
> DFSClient_1545724836, pendingcreates: 4]
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1720)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1711)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1619)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
> at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-5067) Support symlink operations

2013-08-05 Thread Brandon Li (JIRA)
Brandon Li created HDFS-5067:


 Summary: Support symlink operations
 Key: HDFS-5067
 URL: https://issues.apache.org/jira/browse/HDFS-5067
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 3.0.0
Reporter: Brandon Li


Given that the symlink issues (e.g., HDFS-4765) are getting fixed, NFS can 
support the symlink-related requests, which include the NFSv3 calls SYMLINK 
and READLINK.
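
On the HDFS side, the two calls would presumably map onto the existing client 
symlink API. A minimal sketch using Hadoop's FileContext (the paths are made 
up for illustration):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileContext;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.Path;

    public class SymlinkSketch {
      public static void main(String[] args) throws Exception {
        FileContext fc = FileContext.getFileContext(new Configuration());

        // SYMLINK: create /user/alice/link pointing at /user/alice/target
        fc.createSymlink(new Path("/user/alice/target"),
                         new Path("/user/alice/link"),
                         true /* createParent */);

        // READLINK: resolve the link back to its target path
        FileStatus stat = fc.getFileLinkStatus(new Path("/user/alice/link"));
        System.out.println("link target: " + stat.getSymlink());
      }
    }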

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-5068) Convert NNThroughputBenchmark to a Tool to allow generic options.

2013-08-05 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-5068:
-

 Summary: Convert NNThroughputBenchmark to a Tool to allow generic 
options.
 Key: HDFS-5068
 URL: https://issues.apache.org/jira/browse/HDFS-5068
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: benchmarks
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko


Currently NNThroughputBenchmark does not recognize generic options such as 
-conf. A simple way to enable this functionality is to make it implement the 
Tool interface.
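
A sketch of what that change would look like (a skeleton only, not the actual 
benchmark code):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    /** Illustrative skeleton; not the real NNThroughputBenchmark. */
    public class NNThroughputBenchmarkTool extends Configured implements Tool {
      @Override
      public int run(String[] args) throws Exception {
        // getConf() already reflects -conf/-D/-fs generic options
        Configuration conf = getConf();
        // ... run the benchmark operations against conf ...
        return 0;
      }

      public static void main(String[] args) throws Exception {
        // ToolRunner parses and strips the generic options before calling run()
        int rc = ToolRunner.run(new Configuration(),
                                new NNThroughputBenchmarkTool(), args);
        System.exit(rc);
      }
    }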

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-5069) Include hadoop-nfs jar file into hadoop-common tar ball, and hdfs-nfs into hadoop-hdfs tar file for easier NFS deployment

2013-08-05 Thread Brandon Li (JIRA)
Brandon Li created HDFS-5069:


 Summary: Include hadoop-nfs jar file into hadoop-common tar ball, 
and hdfs-nfs into hadoop-hdfs tar file for easier NFS deployment
 Key: HDFS-5069
 URL: https://issues.apache.org/jira/browse/HDFS-5069
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Affects Versions: 3.0.0
Reporter: Brandon Li




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-5070) Do not initialize the replication queues in the middle of block report processing

2013-08-05 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-5070:


 Summary: Do not initialize the replication queues in the middle 
of block report processing
 Key: HDFS-5070
 URL: https://issues.apache.org/jira/browse/HDFS-5070
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.9, 2.1.0-beta
Reporter: Kihwal Lee


While processing an initial block report in start-up safe mode, the namenode 
can reach the safe block threshold in the middle of processing the report. 
This is noticed when checkMode() is called, and it causes the replication 
queues to be initialized.

The safe mode monitor will try to check and leave safe mode, but it can be far 
behind the write lock if the initialization takes long (e.g. with a large 
number of blocks) and more block reports come in and get queued ahead of it. 
In this state (replication queues initialized but still in start-up safe 
mode), block report processing can take a long time. In one instance, 
processing four block reports took 13 minutes.
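
One generic way to express the direction the summary suggests is to record 
that the threshold was reached while holding the lock, and leave the expensive 
queue initialization to the monitor thread. This is only a schematic sketch, 
not the actual NameNode code:

    import java.util.concurrent.atomic.AtomicBoolean;

    class SafeModeSketch {
      private final AtomicBoolean thresholdReached = new AtomicBoolean(false);

      /** Called under the namesystem lock while a block report is processed:
       *  keep it cheap, just note that the threshold has been reached. */
      void checkMode(long safeBlocks, long threshold) {
        if (safeBlocks >= threshold) {
          thresholdReached.set(true);
        }
      }

      /** Safe mode monitor tick, running outside report processing:
       *  do the expensive initialization here instead. */
      void monitorTick() {
        if (thresholdReached.compareAndSet(true, false)) {
          initializeReplicationQueues();
        }
      }

      private void initializeReplicationQueues() {
        // expensive with a large number of blocks
      }
    }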



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-5071) HDFS-NFS build fails.

2013-08-05 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-5071:


 Summary: HDFS-NFS build fails.
 Key: HDFS-5071
 URL: https://issues.apache.org/jira/browse/HDFS-5071
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee


https://builds.apache.org/job/Hadoop-Hdfs-trunk/

HDFS trunk build has been failing for a while due to failures in HDFS-NFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4963) Improve multihoming support in namenode

2013-08-05 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-4963.
-

      Resolution: Fixed
   Fix Version/s: 1-win, 1.3.0
Target Version/s: 1-win, 1.3.0
    Hadoop Flags: Reviewed

+1 for the patch.  Nice work, Arpit!  I've committed this to branch-1 and 
branch-1-win.

> Improve multihoming support in namenode
> ---
>
> Key: HDFS-4963
> URL: https://issues.apache.org/jira/browse/HDFS-4963
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 1.3.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 1-win, 1.3.0
>
> Attachments: HDFS-4963.branch-1.001.patch, 
> HDFS-4963.branch-1.002.patch, HDFS-4963.branch-1.patch
>
>
> HDFS does not work very well on multi-homed machines. A few open Jiras refer 
> to this:
> # HDFS-1379
> # HADOOP-8198
> There are multiple issues involved here and some of them can be worked around 
> by using alternate DNS names and configuring {{slave.host.name}} on Datanodes 
> and task trackers. 
> However, the namenode issues cannot be worked around because the namenode 
> does not respect the {{fs.default.name}} configuration, e.g. 
> {{Namenode#initialize}} performs a gratuitous reverse DNS lookup to 
> regenerate the hostname. Similar issues exist elsewhere.
> This Jira is being filed to fix some of the more serious problems. To avoid 
> affecting existing users a new config setting may be introduced.
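
As a hedged illustration of the configuration-respecting alternative the 
description asks for (not the actual patch): derive the namenode address from 
fs.default.name via the public API instead of a reverse DNS lookup:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class NnAddressSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Respects fs.default.name as configured; no reverse DNS involved.
        URI nnUri = FileSystem.getDefaultUri(conf);
        System.out.println("namenode host: " + nnUri.getHost()
            + ", port: " + nnUri.getPort());
      }
    }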

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4747) Convert snapshot user guide to APT from XDOC

2013-08-05 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HDFS-4747.
---

Resolution: Won't Fix

Marking this as won't fix. Reopen if necessary. 

> Convert snapshot user guide to APT from XDOC
> 
>
> Key: HDFS-4747
> URL: https://issues.apache.org/jira/browse/HDFS-4747
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: Snapshot (HDFS-2802)
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-4747.patch, HdfsSnapshots.html
>
>
> To be consistent with the rest of the HDFS docs, the snapshots user guide 
> should use APT instead of XDOC.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira