[jira] [Created] (HDFS-8418) Fix the isNeededReplication calculation for Striped block in NN

2015-05-17 Thread Yi Liu (JIRA)
Yi Liu created HDFS-8418:


 Summary: Fix the isNeededReplication calculation for Striped block 
in NN
 Key: HDFS-8418
 URL: https://issues.apache.org/jira/browse/HDFS-8418
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Critical


Currently, when calculating {{isNeededReplication}} for a striped block, we use 
{{BlockCollection#getPreferredBlockReplication}} to get the expected replica 
number. For example:
{code}
public void checkReplication(BlockCollection bc) {
  final short expected = bc.getPreferredBlockReplication();
  for (BlockInfo block : bc.getBlocks()) {
    final NumberReplicas n = countNodes(block);
    if (isNeededReplication(block, expected, n.liveReplicas())) {
      neededReplications.add(block, n.liveReplicas(),
          n.decommissionedAndDecommissioning(), expected);
    } else if (n.liveReplicas() > expected) {
      processOverReplicatedBlock(block, expected, null, null);
    }
  }
}
{code}
But this is actually not correct for a striped block. For example, if the length 
of the striped file is less than one cell, then the expected replica number of 
the block should be {{1 + parityBlkNum}}.
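
A minimal sketch of the calculation this issue is asking for (the helper name 
and signature here are hypothetical, not the actual patch; {{cellSize}}, 
{{dataBlkNum}} and {{parityBlkNum}} would come from the file's EC schema):
{code}
// Hypothetical sketch: expected replica number for a striped block group.
// Only the data blocks that actually hold bytes count, plus all parity blocks.
static short getExpectedStripedReplicas(long blockGroupSize, int cellSize,
    short dataBlkNum, short parityBlkNum) {
  // Data is striped cell by cell, so ceil(size / cellSize) internal data
  // blocks are occupied, capped at the schema's full data-block count.
  int dataBlocksUsed = (int) Math.min(
      (blockGroupSize + cellSize - 1) / cellSize, dataBlkNum);
  // e.g. blockGroupSize < cellSize  =>  1 + parityBlkNum
  return (short) (dataBlocksUsed + parityBlkNum);
}
{code}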



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8417) Erasure Coding: Pread failed to read data starting from last incomplete stripe

2015-05-17 Thread Walter Su (JIRA)
Walter Su created HDFS-8417:
---

 Summary: Erasure Coding: Pread failed to read data starting from 
last incomplete stripe
 Key: HDFS-8417
 URL: https://issues.apache.org/jira/browse/HDFS-8417
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su


When the file length is {{cellSize * dataBlocks + 123}} and a pread starts from 
offset {{cellSize * dataBlocks + 1}}, the read fails.
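
To illustrate the geometry (a worked sketch with assumed example values; the 
helper below is only for illustration and is not HDFS client code):
{code}
// Sketch: map a logical file offset to its position in the striped layout.
// A full stripe holds cellSize * dataBlocks bytes, so the failing offset
// cellSize * dataBlocks + 1 is the 2nd byte of the last, incomplete stripe.
static void locate(long off, int cellSize, int dataBlocks) {
  long stripeSize = (long) cellSize * dataBlocks;
  long stripeIdx = off / stripeSize;            // which stripe
  long inStripe = off % stripeSize;             // offset within that stripe
  int cellIdx = (int) (inStripe / cellSize);    // which data block's cell
  long inCell = inStripe % cellSize;            // offset within the cell
  System.out.printf("stripe=%d cell=%d cellOffset=%d%n",
      stripeIdx, cellIdx, inCell);
}
// locate(6 * 65536 + 1, 65536, 6)  ->  stripe=1 cell=0 cellOffset=1
// With file length 6 * 65536 + 123, stripe 1 holds only 123 bytes.
{code}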



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #188

2015-05-17 Thread Apache Jenkins Server
See 

Changes:

[arp] HDFS-8157. Writes to RAM DISK reserve locked memory for block files. 
(Arpit Agarwal)

[aajisaka] HADOOP-11988. Fix typo in the document for hadoop fs -find. 
Contributed by Kengo Seki.

--
[...truncated 7278 lines...]
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.078 sec - in 
org.apache.hadoop.hdfs.util.TestCyclicIteration
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.232 sec - in 
org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestDiff
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.455 sec - in 
org.apache.hadoop.hdfs.util.TestDiff
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestByteArrayManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.454 sec - in 
org.apache.hadoop.hdfs.util.TestByteArrayManager
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.08 sec - in 
org.apache.hadoop.hdfs.util.TestXMLUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.186 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.291 sec - in 
org.apache.hadoop.hdfs.util.TestMD5FileUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.189 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.22 sec - in 
org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.074 sec - in 
org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.745 sec - in 
org.apache.hadoop.hdfs.TestLease
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.023 sec - in 
org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.124 sec - 
in org.apache.hadoop.hdfs.TestHFlush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.298 sec - in 
org.apache.hadoop.hdfs.TestRemoteBlockReader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.026 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.912 sec - 
in org.apache.hadoop.hdfs.TestDistributedFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0,

Hadoop-Hdfs-trunk-Java8 - Build # 188 - Still Failing

2015-05-17 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/188/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7471 lines...]
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [ 57.624 s]
[INFO] Apache Hadoop HDFS  FAILURE [  02:49 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.063 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:50 h
[INFO] Finished at: 2015-05-17T14:24:31+00:00
[INFO] Final Memory: 52M/175M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 797613 bytes
Compression is 0.0%
Took 24 sec
Recording test results
Updating HDFS-8157
Updating HADOOP-11988
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
5 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2

Error Message:
org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals 
to persistent storage due to No journals available to flush. Unsynced 
transactions: 1
 at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:631)
 at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1298)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:362)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1240)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1716)
 at 
org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:863)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1796)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1847)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1827)
 at 
org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
 at 
org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2(TestLeaseRecovery2.java:426)
 at sun.reflect.

Hadoop-Hdfs-trunk - Build # 2128 - Still Failing

2015-05-17 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2128/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 8062 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [ 46.224 s]
[INFO] Apache Hadoop HDFS  FAILURE [  02:44 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.053 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:45 h
[INFO] Finished at: 2015-05-17T14:19:46+00:00
[INFO] Final Memory: 67M/695M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project 
hadoop-hdfs: An Ant BuildException has occured: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs
 does not exist.
[ERROR] around Ant part ..
 @ 5:121 in 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362788 bytes
Compression is 0.0%
Took 11 sec
Recording test results
Updating HDFS-8157
Updating HADOOP-11988
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
All tests passed

Build failed in Jenkins: Hadoop-Hdfs-trunk #2128

2015-05-17 Thread Apache Jenkins Server
See 

Changes:

[arp] HDFS-8157. Writes to RAM DISK reserve locked memory for block files. 
(Arpit Agarwal)

[aajisaka] HADOOP-11988. Fix typo in the document for hadoop fs -find. 
Contributed by Kengo Seki.

--
[...truncated 7869 lines...]
 [exec] 2015-05-17 14:17:24,109 INFO  http.HttpServer2 
(HttpServer2.java:addGlobalFilter(678)) - Added global filter 'safety' 
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
 [exec] 2015-05-17 14:17:24,110 INFO  http.HttpServer2 
(HttpServer2.java:addFilter(653)) - Added filter static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context datanode
 [exec] 2015-05-17 14:17:24,111 INFO  http.HttpServer2 
(HttpServer2.java:addFilter(661)) - Added filter static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context static
 [exec] 2015-05-17 14:17:24,113 INFO  http.HttpServer2 
(HttpServer2.java:openListeners(883)) - Jetty bound to port 43065
 [exec] 2015-05-17 14:17:24,113 INFO  mortbay.log (Slf4jLog.java:info(67)) 
- jetty-6.1.26
 [exec] 2015-05-17 14:17:24,168 INFO  mortbay.log (Slf4jLog.java:info(67)) 
- Started SelectChannelConnector@localhost:43065
 [exec] 2015-05-17 14:17:24,294 INFO  web.DatanodeHttpServer 
(DatanodeHttpServer.java:start(150)) - Listening HTTP traffic on 
/127.0.0.1:36594
 [exec] 2015-05-17 14:17:24,296 INFO  datanode.DataNode 
(DataNode.java:startDataNode(1144)) - dnUserName = jenkins
 [exec] 2015-05-17 14:17:24,296 INFO  datanode.DataNode 
(DataNode.java:startDataNode(1145)) - supergroup = supergroup
 [exec] 2015-05-17 14:17:24,309 INFO  ipc.CallQueueManager 
(CallQueueManager.java:(56)) - Using callQueue class 
java.util.concurrent.LinkedBlockingQueue
 [exec] 2015-05-17 14:17:24,310 INFO  ipc.Server (Server.java:run(622)) - 
Starting Socket Reader #1 for port 41644
 [exec] 2015-05-17 14:17:24,317 INFO  datanode.DataNode 
(DataNode.java:initIpcServer(844)) - Opened IPC server at /127.0.0.1:41644
 [exec] 2015-05-17 14:17:24,329 INFO  datanode.DataNode 
(BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for 
nameservices: null
 [exec] 2015-05-17 14:17:24,331 INFO  datanode.DataNode 
(BlockPoolManager.java:doRefreshNamenodes(194)) - Starting BPOfferServices for 
nameservices: 
 [exec] 2015-05-17 14:17:24,341 INFO  datanode.DataNode 
(BPServiceActor.java:run(791)) - Block pool  (Datanode Uuid 
unassigned) service to localhost/127.0.0.1:52051 starting to offer service
 [exec] 2015-05-17 14:17:24,347 INFO  ipc.Server (Server.java:run(852)) - 
IPC Server Responder: starting
 [exec] 2015-05-17 14:17:24,348 INFO  ipc.Server (Server.java:run(692)) - 
IPC Server listener on 41644: starting
 [exec] 2015-05-17 14:17:24,574 INFO  common.Storage 
(Storage.java:tryLock(715)) - Lock on 

 acquired by nodename 32...@asf909.gq1.ygridcore.net
 [exec] 2015-05-17 14:17:24,574 INFO  common.Storage 
(DataStorage.java:loadStorageDirectory(272)) - Storage directory 

 is not formatted for BP-1054975985-67.195.81.153-1431872242456
 [exec] 2015-05-17 14:17:24,574 INFO  common.Storage 
(DataStorage.java:loadStorageDirectory(274)) - Formatting ...
 [exec] 2015-05-17 14:17:24,614 INFO  common.Storage 
(BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage 
directories for bpid BP-1054975985-67.195.81.153-1431872242456
 [exec] 2015-05-17 14:17:24,614 INFO  common.Storage 
(Storage.java:lock(675)) - Locking is disabled for 

 [exec] 2015-05-17 14:17:24,615 INFO  common.Storage 
(BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage 
directory 

 is not formatted for BP-1054975985-67.195.81.153-1431872242456
 [exec] 2015-05-17 14:17:24,615 INFO  common.Storage 
(BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
 [exec] 2015-05-17 14:17:24,615 INFO  common.Storage 
(BlockPoolSliceStorage.java:format(267)) - Formatting block pool 
BP-1054975985-67.195.81.153-1431872242456 directory 

 [exec] 2015-05-1