[jira] [Updated] (HDFS-11394) Add method for getting erasure coding policy through WebHDFS

2017-05-12 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-11394:
--
Attachment: HDFS-11394.03.patch

If the EC policy is one of the system erasure coding policies, only the policy 
name is shown through WebHDFS. For other policies, {{cellSize}}, 
{{numDataUnits}}, and {{numParityUnits}} are also shown. 
We can add the test as a follow-up once {{ErasureCodingPolicyManager}} can 
load custom policies in addition to system policies.
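As a rough illustration of the branching described above, here is a minimal, stand-alone sketch. The field names come from the comment itself; the class, method, and exact JSON shape are assumptions for illustration only, not the actual WebHDFS serialization code:

```java
public class EcPolicyJsonDemo {
    // Hedged sketch: a known system policy is identified by name alone,
    // while any other policy also carries its schema parameters.
    public static String toJson(String name, boolean isSystemPolicy,
                                int cellSize, int numDataUnits, int numParityUnits) {
        if (isSystemPolicy) {
            return "{\"name\":\"" + name + "\"}";
        }
        return "{\"name\":\"" + name + "\",\"cellSize\":" + cellSize
            + ",\"numDataUnits\":" + numDataUnits
            + ",\"numParityUnits\":" + numParityUnits + "}";
    }
}
```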

> Add method for getting erasure coding policy through WebHDFS 
> -
>
> Key: HDFS-11394
> URL: https://issues.apache.org/jira/browse/HDFS-11394
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding, namenode
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-11394.01.patch, HDFS-11394.02.patch, 
> HDFS-11394.03.patch
>
>
> We can expose the erasure coding policy of an erasure-coded directory through a 
> WebHDFS method, as we do for storage policy. This information can be used by the 
> NameNode Web UI to show the details of erasure-coded directories.
> see: HDFS-8196



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11762) [SPS] : Empty files should be ignored in StoragePolicySatisfier.

2017-05-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16009152#comment-16009152
 ] 

Hadoop QA commented on HDFS-11762:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
32s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
39s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 36s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}151m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.namenode.TestStoragePolicySatisfier |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11762 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867922/HDFS-11762-HDFS-10285.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f791aa6559f5 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 / 376a2be |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19422/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19422/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19422/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19422/console |
| Powered by | Apache Yetus 0.5.0-S

[jira] [Updated] (HDFS-11762) [SPS] : Empty files should be ignored in StoragePolicySatisfier.

2017-05-12 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-11762:
--
Attachment: HDFS-11762-HDFS-10285.003.patch

Fixed checkstyle warnings.
Attached v3 patch.

> [SPS] : Empty files should be ignored in StoragePolicySatisfier. 
> -
>
> Key: HDFS-11762
> URL: https://issues.apache.org/jira/browse/HDFS-11762
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-11762-HDFS-10285.001.patch, 
> HDFS-11762-HDFS-10285.002.patch, HDFS-11762-HDFS-10285.003.patch
>
>
> Files which have zero blocks should be ignored in SPS. Currently they throw an 
> NPE in the StoragePolicySatisfier thread.
> {noformat}
> 2017-05-06 23:29:04,735 [StoragePolicySatisfier] ERROR 
> namenode.StoragePolicySatisfier (StoragePolicySatisfier.java:run(278)) - 
> StoragePolicySatisfier thread received runtime exception. Stopping Storage 
> policy satisfier work
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.StoragePolicySatisfier.analyseBlocksStorageMovementsAndAssignToDN(StoragePolicySatisfier.java:292)
>   at 
> org.apache.hadoop.hdfs.server.namenode.StoragePolicySatisfier.run(StoragePolicySatisfier.java:233)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
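The fix described above amounts to a guard before the satisfier dereferences a file's block list. A minimal stand-alone sketch of that idea (the class, method name, and use of a plain list as a stand-in for the file's block array are assumptions, not the actual SPS code):

```java
import java.util.List;

public class SpsGuardDemo {
    // Hedged sketch: skip empty (zero-block) files up front, so the
    // satisfier never dereferences a null or empty block list later.
    public static boolean shouldSatisfy(List<String> blocks) {
        return blocks != null && !blocks.isEmpty();
    }
}
```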



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11819) HDFS client with hedged read, handle exceptions from callable when the hedged read thread pool is exhausted

2017-05-12 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16008937#comment-16008937
 ] 

huaxiang sun commented on HDFS-11819:
-

Thanks Andrew!

> HDFS client with hedged read, handle exceptions from callable  when the 
> hedged read thread pool is exhausted
> 
>
> Key: HDFS-11819
> URL: https://issues.apache.org/jira/browse/HDFS-11819
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>
> When the hedged read thread pool is exhausted, the current behavior is that 
> the callable will be executed in the current thread context. The callable can 
> throw an IOException, which is not handled, and it will not start a 'hedged' 
> read. 
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L1131
> Please see the following exception:
> {code}
> 2017-05-11 22:42:35,883 WARN org.apache.hadoop.hdfs.BlockReaderFactory: I/O 
> error constructing remote block reader.
> org.apache.hadoop.net.ConnectTimeoutException: 3000 millis timeout while 
> waiting for channel to be ready for connect. ch : 
> java.nio.channels.SocketChannel[connection-pending remote=/*.*.*.*:50010]
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533)
> at 
> org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3527)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:840)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:755)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:376)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1179)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.access$300(DFSInputStream.java:91)
> at 
> org.apache.hadoop.hdfs.DFSInputStream$2.call(DFSInputStream.java:1141)
> at 
> org.apache.hadoop.hdfs.DFSInputStream$2.call(DFSInputStream.java:1133)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor$CallerRunsPolicy.rejectedExecution(ThreadPoolExecutor.java:2022)
> at 
> org.apache.hadoop.hdfs.DFSClient$2.rejectedExecution(DFSClient.java:3571)
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
> at 
> java.util.concurrent.ExecutorCompletionService.submit(ExecutorCompletionService.java:181)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.hedgedFetchBlockByteRange(DFSInputStream.java:1280)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1477)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1439)
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
> at 
> org.apache.hadoop.hbase.io.FileLink$FileLinkInputStream.read(FileLink.java:167)
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.positionalReadWithExtra(HFileBlock.java:757)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1457)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1682)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1542)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:445)
> at 
> org.apache.hadoop.hbase.util.CompoundBloomFilter.contains(CompoundBloomFilter.java:100)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.passesGeneralBloomFilter(StoreFile.java:1383)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.passesBloomFilter(StoreFile.java:1247)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.shouldUseScanner(StoreFileScanner.java:469)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.selectScannersFrom(StoreScanner.java:393)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.getScannersNoCompaction(StoreScanner.java:312)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:192)
> at 
> org.apache.hadoop.hbase.regionserver.HS
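The pool-exhaustion behavior underlying this report can be reproduced with a small stand-alone sketch: with {{CallerRunsPolicy}}, a task submitted to a saturated pool runs synchronously in the submitting thread, so any failure or delay hits the caller instead of producing a concurrent hedged attempt. This is a simplified stand-in using only the JDK, not the DFSInputStream code itself:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CallerRunsDemo {
    // Saturate a one-thread pool, then submit again: the second task is
    // rejected and, under CallerRunsPolicy, executes in the caller thread.
    public static String runWhenSaturated() throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            1, 1, 0L, TimeUnit.MILLISECONDS,
            new SynchronousQueue<>(),                  // no queueing at all
            new ThreadPoolExecutor.CallerRunsPolicy());
        CountDownLatch block = new CountDownLatch(1);
        pool.execute(() -> {
            try { block.await(); } catch (InterruptedException ignored) { }
        });
        final String[] ran = new String[1];
        // Pool is exhausted; this runs synchronously in the calling thread.
        pool.execute(() -> ran[0] = Thread.currentThread().getName());
        block.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return ran[0];
    }
}
```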

[jira] [Work stopped] (HDFS-11719) Arrays.fill() wrong index in BlockSender.readChecksum() exception handling

2017-05-12 Thread Tao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-11719 stopped by Tao Zhang.

> Arrays.fill() wrong index in BlockSender.readChecksum() exception handling
> --
>
> Key: HDFS-11719
> URL: https://issues.apache.org/jira/browse/HDFS-11719
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Tao Zhang
>Assignee: Tao Zhang
>
> In BlockSender.readChecksum() exception handling part:
> Arrays.fill(buf, checksumOffset, checksumLen, (byte) 0);
> Actually the parameters should be: Arrays.fill(buf, fromIndex, toIndex, 
> value);
> So it should be changed to:
> Arrays.fill(buf, checksumOffset, checksumOffset + checksumLen, (byte) 0);
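The bounds mix-up above is easy to see in a tiny stand-alone sketch: {{Arrays.fill(byte[], int, int, byte)}} takes an exclusive end index, so the third argument must be offset + length, not the length alone. The class and method names below are illustrative, not BlockSender's actual code:

```java
import java.util.Arrays;

public class ChecksumZeroDemo {
    // Zeroes checksumLen bytes of buf starting at checksumOffset.
    // Arrays.fill's signature is (array, fromIndex, toIndex, value),
    // so the end bound is checksumOffset + checksumLen.
    public static byte[] zeroChecksum(byte[] buf, int checksumOffset, int checksumLen) {
        Arrays.fill(buf, checksumOffset, checksumOffset + checksumLen, (byte) 0);
        return buf;
    }
}
```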



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-11719) Arrays.fill() wrong index in BlockSender.readChecksum() exception handling

2017-05-12 Thread Tao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-11719 started by Tao Zhang.

> Arrays.fill() wrong index in BlockSender.readChecksum() exception handling
> --
>
> Key: HDFS-11719
> URL: https://issues.apache.org/jira/browse/HDFS-11719
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Tao Zhang
>Assignee: Tao Zhang
>
> In BlockSender.readChecksum() exception handling part:
> Arrays.fill(buf, checksumOffset, checksumLen, (byte) 0);
> Actually the parameters should be: Arrays.fill(buf, fromIndex, toIndex, 
> value);
> So it should be changed to:
> Arrays.fill(buf, checksumOffset, checksumOffset + checksumLen, (byte) 0);



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org





[jira] [Commented] (HDFS-11818) TestBlockManager.testSufficientlyReplBlocksUsesNewRack fails intermittently

2017-05-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16008917#comment-16008917
 ] 

Hudson commented on HDFS-11818:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11732 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11732/])
HDFS-11818. TestBlockManager.testSufficientlyReplBlocksUsesNewRack fails 
(jlowe: rev 2397a2626e22d002174f4a36891d713a7e1f1b20)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java


> TestBlockManager.testSufficientlyReplBlocksUsesNewRack fails intermittently
> ---
>
> Key: HDFS-11818
> URL: https://issues.apache.org/jira/browse/HDFS-11818
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2, 2.8.2
>Reporter: Eric Badger
>Assignee: Nathan Roberts
> Fix For: 2.9.0, 3.0.0-alpha3, 2.8.2
>
> Attachments: HDFS-11818-branch-2.patch, HDFS-11818.patch
>
>
> Saw a weird Mockito failure in last night's build with the following stack 
> trace:
> {noformat}
> org.mockito.exceptions.misusing.WrongTypeOfReturnValue: 
> INodeFile cannot be returned by isRunning()
> isRunning() should return boolean
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.addBlockOnNodes(TestBlockManager.java:555)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.doTestSufficientlyReplBlocksUsesNewRack(TestBlockManager.java:404)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testSufficientlyReplBlocksUsesNewRack(TestBlockManager.java:397)
> {noformat}
> This is pretty confusing since we explicitly set isRunning() to return true 
> in TestBlockManager's \@Before method
> {noformat}
> 154Mockito.doReturn(true).when(fsn).isRunning();
> {noformat}
> Also saw the following exception in the logs:
> {noformat}
> 2017-05-12 05:42:27,903 ERROR blockmanagement.BlockManager 
> (BlockManager.java:run(2796)) - Error while processing replication queues 
> async
> org.mockito.exceptions.base.MockitoException: 
> 'writeLockInterruptibly' is a *void method* and it *cannot* be stubbed with a 
> *return value*!
> Voids are usually stubbed with Throwables:
> doThrow(exception).when(mock).someVoidMethod();
> If the method you are trying to stub is *overloaded* then make sure you are 
> calling the right overloaded version.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processMisReplicatesAsync(BlockManager.java:2841)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.access$100(BlockManager.java:120)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$1.run(BlockManager.java:2792)
> {noformat}
> This is also weird since we don't do any explicit mocking with 
> {{writeLockInterruptibly}} via fsn in the test. It has to be something 
> changing the mocks or non-thread safe access or something like that. I can't 
> explain the failures otherwise. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11819) HDFS client with hedged read, handle exceptions from callable when the hedged read thread pool is exhausted

2017-05-12 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16008902#comment-16008902
 ] 

Andrew Wang commented on HDFS-11819:


Added and assigned, thanks [~huaxiang]!

> HDFS client with hedged read, handle exceptions from callable  when the 
> hedged read thread pool is exhausted
> 
>
> Key: HDFS-11819
> URL: https://issues.apache.org/jira/browse/HDFS-11819
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>
> When the hedged read thread pool is exhausted, the current behavior is that 
> the callable will be executed in the current thread context. The callable can 
> throw an IOException, which is not handled, and it will not start a 'hedged' 
> read. 
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L1131
> Please see the following exception:
> {code}
> 2017-05-11 22:42:35,883 WARN org.apache.hadoop.hdfs.BlockReaderFactory: I/O 
> error constructing remote block reader.
> org.apache.hadoop.net.ConnectTimeoutException: 3000 millis timeout while 
> waiting for channel to be ready for connect. ch : 
> java.nio.channels.SocketChannel[connection-pending remote=/*.*.*.*:50010]
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533)
> at 
> org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3527)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:840)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:755)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:376)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1179)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.access$300(DFSInputStream.java:91)
> at 
> org.apache.hadoop.hdfs.DFSInputStream$2.call(DFSInputStream.java:1141)
> at 
> org.apache.hadoop.hdfs.DFSInputStream$2.call(DFSInputStream.java:1133)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor$CallerRunsPolicy.rejectedExecution(ThreadPoolExecutor.java:2022)
> at 
> org.apache.hadoop.hdfs.DFSClient$2.rejectedExecution(DFSClient.java:3571)
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
> at 
> java.util.concurrent.ExecutorCompletionService.submit(ExecutorCompletionService.java:181)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.hedgedFetchBlockByteRange(DFSInputStream.java:1280)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1477)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1439)
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
> at 
> org.apache.hadoop.hbase.io.FileLink$FileLinkInputStream.read(FileLink.java:167)
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.positionalReadWithExtra(HFileBlock.java:757)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1457)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1682)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1542)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:445)
> at 
> org.apache.hadoop.hbase.util.CompoundBloomFilter.contains(CompoundBloomFilter.java:100)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.passesGeneralBloomFilter(StoreFile.java:1383)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.passesBloomFilter(StoreFile.java:1247)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.shouldUseScanner(StoreFileScanner.java:469)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.selectScannersFrom(StoreScanner.java:393)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.getScannersNoCompaction(StoreScanner.java:312)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:192)
> at 
> org.apache.hadoo

[jira] [Assigned] (HDFS-11819) HDFS client with hedged read, handle exceptions from callable when the hedged read thread pool is exhausted

2017-05-12 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reassigned HDFS-11819:
--

Assignee: huaxiang sun

> HDFS client with hedged read, handle exceptions from callable  when the 
> hedged read thread pool is exhausted
> 
>
> Key: HDFS-11819
> URL: https://issues.apache.org/jira/browse/HDFS-11819
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>
> When the hedged read thread pool is exhausted, the current behavior is that 
> the callable will be executed in the current thread context. The callable can 
> throw an IOException, which is not handled, and it will not start a 'hedged' 
> read. 
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L1131
> Please see the following exception:
> {code}
> 2017-05-11 22:42:35,883 WARN org.apache.hadoop.hdfs.BlockReaderFactory: I/O 
> error constructing remote block reader.
> org.apache.hadoop.net.ConnectTimeoutException: 3000 millis timeout while 
> waiting for channel to be ready for connect. ch : 
> java.nio.channels.SocketChannel[connection-pending remote=/*.*.*.*:50010]
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533)
> at 
> org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3527)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:840)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:755)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:376)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1179)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.access$300(DFSInputStream.java:91)
> at 
> org.apache.hadoop.hdfs.DFSInputStream$2.call(DFSInputStream.java:1141)
> at 
> org.apache.hadoop.hdfs.DFSInputStream$2.call(DFSInputStream.java:1133)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor$CallerRunsPolicy.rejectedExecution(ThreadPoolExecutor.java:2022)
> at 
> org.apache.hadoop.hdfs.DFSClient$2.rejectedExecution(DFSClient.java:3571)
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
> at 
> java.util.concurrent.ExecutorCompletionService.submit(ExecutorCompletionService.java:181)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.hedgedFetchBlockByteRange(DFSInputStream.java:1280)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1477)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1439)
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
> at 
> org.apache.hadoop.hbase.io.FileLink$FileLinkInputStream.read(FileLink.java:167)
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.positionalReadWithExtra(HFileBlock.java:757)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1457)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1682)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1542)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:445)
> at 
> org.apache.hadoop.hbase.util.CompoundBloomFilter.contains(CompoundBloomFilter.java:100)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.passesGeneralBloomFilter(StoreFile.java:1383)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.passesBloomFilter(StoreFile.java:1247)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.shouldUseScanner(StoreFileScanner.java:469)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.selectScannersFrom(StoreScanner.java:393)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.getScannersNoCompaction(StoreScanner.java:312)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:192)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.createScanner(HStore.java:2106)
>   

[jira] [Updated] (HDFS-11811) Ozone: SCM: Support Delete Block

2017-05-12 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11811:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

Thanks [~anu] for the review. I've committed the patch to the feature branch. 

> Ozone: SCM: Support Delete Block
> 
>
> Key: HDFS-11811
> URL: https://issues.apache.org/jira/browse/HDFS-11811
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: HDFS-7240
>
> Attachments: HDFS-11811-HDFS-7240.001.patch, 
> HDFS-11811-HDFS-7240.002.patch
>
>
> This is a follow up of HDFS-11504 to add delete block API. For each deleted 
> block, we will remove the original entry for the block and add a tombstone in 
> the blockstore db. 
> We will add an async thread to process the tombstones and wait for the 
> container report confirmation to eventually clean up the tombstones. 
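The tombstone scheme sketched in the description can be illustrated with a minimal in-memory stand-in for the blockstore DB. Everything here is an assumption for illustration (the key scheme, marker prefix, and class names are invented), not the Ozone SCM implementation:

```java
import java.util.HashMap;
import java.util.Map;

public class TombstoneDemo {
    static final String TOMBSTONE_PREFIX = "#del#";   // illustrative marker
    private final Map<String, byte[]> db = new HashMap<>();

    public void put(String blockId, byte[] data) {
        db.put(blockId, data);
    }

    // Delete = remove the live entry and record a tombstone, which a
    // background thread would later reconcile against container reports.
    public void delete(String blockId) {
        byte[] old = db.remove(blockId);
        if (old != null) {
            db.put(TOMBSTONE_PREFIX + blockId, new byte[0]);
        }
    }

    public boolean isLive(String blockId) {
        return db.containsKey(blockId);
    }

    public boolean hasTombstone(String blockId) {
        return db.containsKey(TOMBSTONE_PREFIX + blockId);
    }
}
```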



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11818) TestBlockManager.testSufficientlyReplBlocksUsesNewRack fails intermittently

2017-05-12 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HDFS-11818:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.2
   3.0.0-alpha3
   2.9.0
   Status: Resolved  (was: Patch Available)

Thanks to Nathan for the contribution and to Eric for additional review!  I 
committed this to trunk, branch-2, and branch-2.8.

> TestBlockManager.testSufficientlyReplBlocksUsesNewRack fails intermittently
> ---
>
> Key: HDFS-11818
> URL: https://issues.apache.org/jira/browse/HDFS-11818
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2, 2.8.2
>Reporter: Eric Badger
>Assignee: Nathan Roberts
> Fix For: 2.9.0, 3.0.0-alpha3, 2.8.2
>
> Attachments: HDFS-11818-branch-2.patch, HDFS-11818.patch
>
>
> Saw a weird Mockito failure in last night's build with the following stack 
> trace:
> {noformat}
> org.mockito.exceptions.misusing.WrongTypeOfReturnValue: 
> INodeFile cannot be returned by isRunning()
> isRunning() should return boolean
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.addBlockOnNodes(TestBlockManager.java:555)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.doTestSufficientlyReplBlocksUsesNewRack(TestBlockManager.java:404)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testSufficientlyReplBlocksUsesNewRack(TestBlockManager.java:397)
> {noformat}
> This is pretty confusing since we explicitly set isRunning() to return true 
> in TestBlockManager's \@Before method
> {noformat}
> 154Mockito.doReturn(true).when(fsn).isRunning();
> {noformat}
> Also saw the following exception in the logs:
> {noformat}
> 2017-05-12 05:42:27,903 ERROR blockmanagement.BlockManager 
> (BlockManager.java:run(2796)) - Error while processing replication queues 
> async
> org.mockito.exceptions.base.MockitoException: 
> 'writeLockInterruptibly' is a *void method* and it *cannot* be stubbed with a 
> *return value*!
> Voids are usually stubbed with Throwables:
> doThrow(exception).when(mock).someVoidMethod();
> If the method you are trying to stub is *overloaded* then make sure you are 
> calling the right overloaded version.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processMisReplicatesAsync(BlockManager.java:2841)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.access$100(BlockManager.java:120)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$1.run(BlockManager.java:2792)
> {noformat}
> This is also weird since we don't do any explicit mocking of 
> {{writeLockInterruptibly}} via fsn in the test. Something must be modifying 
> the mocks concurrently, or accessing them in a non-thread-safe way; I can't 
> explain the failures otherwise. 
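
The cross-talk hypothesized above (a stubbed value landing on the wrong method) is easy to reproduce with a hand-rolled analogue of a mocking framework's "ongoing stubbing" state. This is a sketch only, not Mockito's actual internals; {{TinyMock}}, {{beginStub}}, and {{thenReturn}} are invented names for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class StubRaceDemo {
    // Simplified analogue of "ongoing stubbing": the method currently being
    // stubbed is kept in shared, unsynchronized state, so interleaved
    // stubbing from two threads cross-talks.
    static class TinyMock {
        private String pendingMethod;  // shared mutable state, no locking
        private final Map<String, Object> answers = new ConcurrentHashMap<>();
        void beginStub(String method) { pendingMethod = method; }
        void thenReturn(Object value) { answers.put(pendingMethod, value); }
        Object invoke(String method) { return answers.get(method); }
    }

    // Replay one possible interleaving of two threads stubbing the same mock.
    static boolean crossTalk() {
        TinyMock fsn = new TinyMock();
        fsn.beginStub("isRunning");       // thread 1 starts stubbing isRunning()
        fsn.beginStub("addBlock");        // thread 2 interleaves its own stubbing
        fsn.thenReturn(Boolean.TRUE);     // thread 1's value lands on addBlock
        fsn.thenReturn("an INodeFile");   // thread 2's value overwrites it
        // isRunning() never received its stub, and addBlock ended up with a
        // value of the wrong type -- the same shape as "INodeFile cannot be
        // returned by isRunning()" in the reported failure.
        return fsn.invoke("isRunning") == null
                && "an INodeFile".equals(fsn.invoke("addBlock"));
    }

    public static void main(String[] args) {
        System.out.println(crossTalk());  // prints "true"
    }
}
```

If the real failure has this shape, the fix is to confine all stubbing to one thread (e.g. the {{\@Before}} method) before any background thread touches the mock.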






[jira] [Assigned] (HDFS-11808) Backport HDFS-8549 to branch-2.7: Abort the balancer if an upgrade is in progress

2017-05-12 Thread Vinitha Reddy Gankidi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinitha Reddy Gankidi reassigned HDFS-11808:


Assignee: (was: Vinitha Reddy Gankidi)

> Backport HDFS-8549 to branch-2.7: Abort the balancer if an upgrade is in 
> progress
> -
>
> Key: HDFS-11808
> URL: https://issues.apache.org/jira/browse/HDFS-11808
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinitha Reddy Gankidi
>
> As per the discussion on the [mailing 
> list|http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-dev/201705.mbox/browser],
> backport HDFS-8549 to branch-2.7.






[jira] [Commented] (HDFS-11805) Ensure LevelDB DBIterator is closed

2017-05-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16008852#comment-16008852
 ] 

Hadoop QA commented on HDFS-11805:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
50s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
54s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 10 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 1 unchanged - 1 fixed = 1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.cblock.TestLocalBlockCache |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11805 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867853/HDFS-11805-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b4e699761520 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 7bf301e |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19421/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19421/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19421/te

[jira] [Commented] (HDFS-11818) TestBlockManager.testSufficientlyReplBlocksUsesNewRack fails intermittently

2017-05-12 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16008847#comment-16008847
 ] 

Jason Lowe commented on HDFS-11818:
---

Test failures are unrelated.

+1 lgtm.  Committing this.


> TestBlockManager.testSufficientlyReplBlocksUsesNewRack fails intermittently
> ---
>
> Key: HDFS-11818
> URL: https://issues.apache.org/jira/browse/HDFS-11818
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2, 2.8.2
>Reporter: Eric Badger
>Assignee: Nathan Roberts
> Attachments: HDFS-11818-branch-2.patch, HDFS-11818.patch
>
>
> Saw a weird Mockito failure in last night's build with the following stack 
> trace:
> {noformat}
> org.mockito.exceptions.misusing.WrongTypeOfReturnValue: 
> INodeFile cannot be returned by isRunning()
> isRunning() should return boolean
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.addBlockOnNodes(TestBlockManager.java:555)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.doTestSufficientlyReplBlocksUsesNewRack(TestBlockManager.java:404)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testSufficientlyReplBlocksUsesNewRack(TestBlockManager.java:397)
> {noformat}
> This is pretty confusing since we explicitly set isRunning() to return true 
> in TestBlockManager's \@Before method
> {noformat}
> 154Mockito.doReturn(true).when(fsn).isRunning();
> {noformat}
> Also saw the following exception in the logs:
> {noformat}
> 2017-05-12 05:42:27,903 ERROR blockmanagement.BlockManager 
> (BlockManager.java:run(2796)) - Error while processing replication queues 
> async
> org.mockito.exceptions.base.MockitoException: 
> 'writeLockInterruptibly' is a *void method* and it *cannot* be stubbed with a 
> *return value*!
> Voids are usually stubbed with Throwables:
> doThrow(exception).when(mock).someVoidMethod();
> If the method you are trying to stub is *overloaded* then make sure you are 
> calling the right overloaded version.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processMisReplicatesAsync(BlockManager.java:2841)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.access$100(BlockManager.java:120)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$1.run(BlockManager.java:2792)
> {noformat}
> This is also weird since we don't do any explicit mocking of 
> {{writeLockInterruptibly}} via fsn in the test. Something must be modifying 
> the mocks concurrently, or accessing them in a non-thread-safe way; I can't 
> explain the failures otherwise. 






[jira] [Commented] (HDFS-11819) HDFS client with hedged read, handle exceptions from callable when the hedged read thread pool is exhausted

2017-05-12 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16008794#comment-16008794
 ] 

huaxiang sun commented on HDFS-11819:
-

Can someone help assign the JIRA to me? I will try to work on a patch, 
thanks!

> HDFS client with hedged read, handle exceptions from callable  when the 
> hedged read thread pool is exhausted
> 
>
> Key: HDFS-11819
> URL: https://issues.apache.org/jira/browse/HDFS-11819
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: huaxiang sun
>
> When the hedged read thread pool is exhausted, the current behavior is that 
> the callable is executed in the caller's thread context. The callable can 
> throw IOExceptions, which are not handled, so the 'hedged' read is never 
> started. 
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L1131
> Please see the following exception:
> {code}
> 2017-05-11 22:42:35,883 WARN org.apache.hadoop.hdfs.BlockReaderFactory: I/O 
> error constructing remote block reader.
> org.apache.hadoop.net.ConnectTimeoutException: 3000 millis timeout while 
> waiting for channel to be ready for connect. ch : 
> java.nio.channels.SocketChannel[connection-pending remote=/*.*.*.*:50010]
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533)
> at 
> org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3527)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:840)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:755)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:376)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1179)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.access$300(DFSInputStream.java:91)
> at 
> org.apache.hadoop.hdfs.DFSInputStream$2.call(DFSInputStream.java:1141)
> at 
> org.apache.hadoop.hdfs.DFSInputStream$2.call(DFSInputStream.java:1133)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor$CallerRunsPolicy.rejectedExecution(ThreadPoolExecutor.java:2022)
> at 
> org.apache.hadoop.hdfs.DFSClient$2.rejectedExecution(DFSClient.java:3571)
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
> at 
> java.util.concurrent.ExecutorCompletionService.submit(ExecutorCompletionService.java:181)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.hedgedFetchBlockByteRange(DFSInputStream.java:1280)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1477)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1439)
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
> at 
> org.apache.hadoop.hbase.io.FileLink$FileLinkInputStream.read(FileLink.java:167)
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.positionalReadWithExtra(HFileBlock.java:757)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1457)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1682)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1542)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:445)
> at 
> org.apache.hadoop.hbase.util.CompoundBloomFilter.contains(CompoundBloomFilter.java:100)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.passesGeneralBloomFilter(StoreFile.java:1383)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.passesBloomFilter(StoreFile.java:1247)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.shouldUseScanner(StoreFileScanner.java:469)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.selectScannersFrom(StoreScanner.java:393)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.getScannersNoCompaction(StoreScanner.java:312)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:192)
> at 
> org.ap

[jira] [Created] (HDFS-11819) HDFS client with hedged read, handle exceptions from callable when the hedged read thread pool is exhausted

2017-05-12 Thread huaxiang sun (JIRA)
huaxiang sun created HDFS-11819:
---

 Summary: HDFS client with hedged read, handle exceptions from 
callable  when the hedged read thread pool is exhausted
 Key: HDFS-11819
 URL: https://issues.apache.org/jira/browse/HDFS-11819
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0-alpha2
Reporter: huaxiang sun


When the hedged read thread pool is exhausted, the current behavior is that the 
callable is executed in the caller's thread context. The callable can throw 
IOExceptions, which are not handled, so the 'hedged' read is never started. 

https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L1131
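
The "executed in the current thread context" behavior comes from {{ThreadPoolExecutor.CallerRunsPolicy}}: once the pool is saturated, a rejected task runs synchronously in the submitting thread. A minimal stand-alone sketch of that mechanism (the pool sizes and sleep duration here are invented for illustration, not the HDFS client's actual configuration):

```java
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CallerRunsDemo {
    // With a single worker, no queue capacity, and CallerRunsPolicy, the
    // second submit() is rejected by the pool and the task runs inline in
    // the caller's thread -- the fallback the hedged-read executor hits
    // when it is exhausted.
    static boolean secondTaskRunsInCaller() throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new SynchronousQueue<>(),
                new ThreadPoolExecutor.CallerRunsPolicy());
        ExecutorCompletionService<String> ecs =
                new ExecutorCompletionService<>(pool);
        try {
            // Occupy the only worker thread for a while.
            ecs.submit(() -> { Thread.sleep(200); return "pool"; });
            // This submission is rejected and executes synchronously here,
            // so submit() blocks for the task's full duration.
            ecs.submit(() -> Thread.currentThread().getName());
            String ranIn = ecs.take().get();  // first task to complete
            return ranIn.equals(Thread.currentThread().getName());
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(secondTaskRunsInCaller());
    }
}
```

Because the rejected callable runs inline, the submitting thread is tied up for the whole read attempt, and any failure surfaces in the caller's context rather than in a hedge thread, which is why the submission path needs its own exception handling.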

Please see the following exception:
{code}
2017-05-11 22:42:35,883 WARN org.apache.hadoop.hdfs.BlockReaderFactory: I/O 
error constructing remote block reader.
org.apache.hadoop.net.ConnectTimeoutException: 3000 millis timeout while 
waiting for channel to be ready for connect. ch : 
java.nio.channels.SocketChannel[connection-pending remote=/*.*.*.*:50010]
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533)
at 
org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3527)
at 
org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:840)
at 
org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:755)
at 
org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:376)
at 
org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1179)
at 
org.apache.hadoop.hdfs.DFSInputStream.access$300(DFSInputStream.java:91)
at 
org.apache.hadoop.hdfs.DFSInputStream$2.call(DFSInputStream.java:1141)
at 
org.apache.hadoop.hdfs.DFSInputStream$2.call(DFSInputStream.java:1133)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor$CallerRunsPolicy.rejectedExecution(ThreadPoolExecutor.java:2022)
at 
org.apache.hadoop.hdfs.DFSClient$2.rejectedExecution(DFSClient.java:3571)
at 
java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
at 
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
at 
java.util.concurrent.ExecutorCompletionService.submit(ExecutorCompletionService.java:181)
at 
org.apache.hadoop.hdfs.DFSInputStream.hedgedFetchBlockByteRange(DFSInputStream.java:1280)
at org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1477)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1439)
at 
org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
at 
org.apache.hadoop.hbase.io.FileLink$FileLinkInputStream.read(FileLink.java:167)
at 
org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock.positionalReadWithExtra(HFileBlock.java:757)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1457)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1682)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1542)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:445)
at 
org.apache.hadoop.hbase.util.CompoundBloomFilter.contains(CompoundBloomFilter.java:100)
at 
org.apache.hadoop.hbase.regionserver.StoreFile$Reader.passesGeneralBloomFilter(StoreFile.java:1383)
at 
org.apache.hadoop.hbase.regionserver.StoreFile$Reader.passesBloomFilter(StoreFile.java:1247)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.shouldUseScanner(StoreFileScanner.java:469)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.selectScannersFrom(StoreScanner.java:393)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.getScannersNoCompaction(StoreScanner.java:312)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:192)
at 
org.apache.hadoop.hbase.regionserver.HStore.createScanner(HStore.java:2106)
at 
org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2096)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5544)
at 
org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2569)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2555)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2536)
  

[jira] [Commented] (HDFS-11818) TestBlockManager.testSufficientlyReplBlocksUsesNewRack fails intermittently

2017-05-12 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16008707#comment-16008707
 ] 

Eric Badger commented on HDFS-11818:


lgtm +1 (non-binding)

> TestBlockManager.testSufficientlyReplBlocksUsesNewRack fails intermittently
> ---
>
> Key: HDFS-11818
> URL: https://issues.apache.org/jira/browse/HDFS-11818
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2, 2.8.2
>Reporter: Eric Badger
>Assignee: Nathan Roberts
> Attachments: HDFS-11818-branch-2.patch, HDFS-11818.patch
>
>
> Saw a weird Mockito failure in last night's build with the following stack 
> trace:
> {noformat}
> org.mockito.exceptions.misusing.WrongTypeOfReturnValue: 
> INodeFile cannot be returned by isRunning()
> isRunning() should return boolean
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.addBlockOnNodes(TestBlockManager.java:555)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.doTestSufficientlyReplBlocksUsesNewRack(TestBlockManager.java:404)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testSufficientlyReplBlocksUsesNewRack(TestBlockManager.java:397)
> {noformat}
> This is pretty confusing since we explicitly set isRunning() to return true 
> in TestBlockManager's \@Before method
> {noformat}
> 154Mockito.doReturn(true).when(fsn).isRunning();
> {noformat}
> Also saw the following exception in the logs:
> {noformat}
> 2017-05-12 05:42:27,903 ERROR blockmanagement.BlockManager 
> (BlockManager.java:run(2796)) - Error while processing replication queues 
> async
> org.mockito.exceptions.base.MockitoException: 
> 'writeLockInterruptibly' is a *void method* and it *cannot* be stubbed with a 
> *return value*!
> Voids are usually stubbed with Throwables:
> doThrow(exception).when(mock).someVoidMethod();
> If the method you are trying to stub is *overloaded* then make sure you are 
> calling the right overloaded version.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processMisReplicatesAsync(BlockManager.java:2841)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.access$100(BlockManager.java:120)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$1.run(BlockManager.java:2792)
> {noformat}
> This is also weird since we don't do any explicit mocking of 
> {{writeLockInterruptibly}} via fsn in the test. Something must be modifying 
> the mocks concurrently, or accessing them in a non-thread-safe way; I can't 
> explain the failures otherwise. 






[jira] [Commented] (HDFS-11818) TestBlockManager.testSufficientlyReplBlocksUsesNewRack fails intermittently

2017-05-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16008705#comment-16008705
 ] 

Hadoop QA commented on HDFS-11818:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
45s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11818 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867834/HDFS-11818.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fde7f9a06906 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a9e24a1 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19419/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19419/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19419/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19419/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestBlockManager.testSufficientlyReplBlocksUsesNewRack fails intermittently
> -

[jira] [Updated] (HDFS-11805) Ensure LevelDB DBIterator is closed

2017-05-12 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11805:
--
Attachment: HDFS-11805-HDFS-7240.002.patch

Thanks [~xyao] for the review! Posting the v002 patch to fix the checkstyle warning.

> Ensure LevelDB DBIterator is closed
> ---
>
> Key: HDFS-11805
> URL: https://issues.apache.org/jira/browse/HDFS-11805
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HDFS-11805-HDFS-7240.001.patch, 
> HDFS-11805-HDFS-7240.002.patch
>
>
> DBIterator instances used in CblockManager/FilteredKeys/SQLCLI/OzoneMetadataManager 
> should be closed once iteration is done, to avoid resource leaks. 
> try-with-resources should fix that easily for most of the cases. FilteredKeys 
> may be a special case that I have not fully checked yet. 
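
The try-with-resources pattern proposed above works for any {{AutoCloseable}}. A minimal stand-alone sketch; {{FakeDbIterator}} is an invented stand-in for LevelDB's DBIterator, not the real class:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class CloseDemo {
    // Invented stand-in for a LevelDB DBIterator: it implements
    // AutoCloseable, so try-with-resources guarantees close() runs even if
    // iteration throws.
    static class FakeDbIterator implements Iterator<String>, AutoCloseable {
        private final Iterator<String> delegate;
        boolean closed = false;
        FakeDbIterator(List<String> rows) { this.delegate = rows.iterator(); }
        @Override public boolean hasNext() { return delegate.hasNext(); }
        @Override public String next() { return delegate.next(); }
        @Override public void close() { closed = true; }
    }

    // Iterate inside try-with-resources; close() runs on every exit path.
    static boolean iterateAndClose(List<String> rows) {
        FakeDbIterator iter = new FakeDbIterator(rows);
        try (FakeDbIterator it = iter) {
            while (it.hasNext()) {
                it.next();  // process the row
            }
        }
        return iter.closed;  // true: the resource was released
    }

    public static void main(String[] args) {
        System.out.println(iterateAndClose(Arrays.asList("key1", "key2")));
    }
}
```

The same shape applies to the real iterator: declare it in the try header and the leak goes away without any explicit finally block.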






[jira] [Commented] (HDFS-11792) [READ] Test cases for ProvidedVolumeDF and ProviderBlockIteratorImpl

2017-05-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16008671#comment-16008671
 ] 

Hadoop QA commented on HDFS-11792:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
45s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-9806 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
42s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-9806 has 10 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 3 unchanged - 0 fixed = 4 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11792 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867828/HDFS-11792-HDFS-9806.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5a232151f8c9 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-9806 / d0fc899 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19418/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19418/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19418/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19418/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19418/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Commented] (HDFS-11802) Ozone : add DEBUG CLI support for open container db file

2017-05-12 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16008647#comment-16008647
 ] 

Chen Liang commented on HDFS-11802:
---

Thanks [~anu] for the comment! Committed to the feature branch.

> Ozone : add DEBUG CLI support for open container db file
> 
>
> Key: HDFS-11802
> URL: https://issues.apache.org/jira/browse/HDFS-11802
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11802-HDFS-7240.001.patch, 
> HDFS-11802-HDFS-7240.002.patch, HDFS-11802-HDFS-7240.003.patch
>
>
> This is a follow-up to HDFS-11698. This JIRA adds support for converting the 
> openContainer.db LevelDB file.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11802) Ozone : add DEBUG CLI support for open container db file

2017-05-12 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11802:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Ozone : add DEBUG CLI support for open container db file
> 
>
> Key: HDFS-11802
> URL: https://issues.apache.org/jira/browse/HDFS-11802
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11802-HDFS-7240.001.patch, 
> HDFS-11802-HDFS-7240.002.patch, HDFS-11802-HDFS-7240.003.patch
>
>
> This is a follow-up to HDFS-11698. This JIRA adds support for converting the 
> openContainer.db LevelDB file.






[jira] [Commented] (HDFS-11791) [READ] Test for increasing replication of provided files.

2017-05-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16008624#comment-16008624
 ] 

Hadoop QA commented on HDFS-11791:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 7s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} HDFS-9806 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
30s{color} | {color:green} hadoop-fs2img in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11791 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867832/HDFS-11791-HDFS-9806.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 09120d181e56 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-9806 / d0fc899 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19420/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-fs2img-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19420/testReport/ |
| modules | C: hadoop-tools/hadoop-fs2img U: hadoop-tools/hadoop-fs2img |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19420/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [READ] Test for increasing replication of provided files.
> -
>
> Key: HDFS-11791
> URL: https://issues.apache.org/jira/browse/HDFS-11791
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attac

[jira] [Commented] (HDFS-11815) CBlockManager#main should join() after start() service

2017-05-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16008591#comment-16008591
 ] 

Hadoop QA commented on HDFS-11815:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 7s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
59s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 10 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 24s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11815 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867810/HDFS-11815-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ed0e65ebb285 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 2796b34 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19416/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19416/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19416/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
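The issue title above (HDFS-11815: CBlockManager#main should join() after start()) refers to a common service lifecycle pattern: if main() only calls start() and returns, the JVM can exit before the service threads have done anything. A minimal, stdlib-only sketch of the pattern follows; the class and method names are illustrative, not the real CBlockManager or Hadoop RPC API.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical service: start() returns immediately, join() blocks the
// caller until the background worker terminates.
class SketchService {
    private final AtomicBoolean finished = new AtomicBoolean(false);
    private Thread worker;

    void start() {                       // non-blocking; work runs asynchronously
        worker = new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) { }
            finished.set(true);          // simulated service work
        });
        worker.start();
    }

    void join() throws InterruptedException {
        worker.join();                   // block until the worker thread exits
    }

    boolean isFinished() { return finished.get(); }
}

public class ServiceMain {
    public static void main(String[] args) throws InterruptedException {
        SketchService svc = new SketchService();
        svc.start();
        svc.join();   // without this, main() returns and the JVM may exit early
        System.out.println("service finished: " + svc.isFinished());
    }
}
```

Calling join() keeps the main thread alive for the lifetime of the service, which is the behavior the patch is adding.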

[jira] [Updated] (HDFS-11818) TestBlockManager.testSufficientlyReplBlocksUsesNewRack fails intermittently

2017-05-12 Thread Nathan Roberts (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Roberts updated HDFS-11818:
--
Affects Version/s: 3.0.0-alpha2
   Status: Patch Available  (was: Open)

> TestBlockManager.testSufficientlyReplBlocksUsesNewRack fails intermittently
> ---
>
> Key: HDFS-11818
> URL: https://issues.apache.org/jira/browse/HDFS-11818
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2, 2.8.2
>Reporter: Eric Badger
>Assignee: Nathan Roberts
> Attachments: HDFS-11818-branch-2.patch, HDFS-11818.patch
>
>
> Saw a weird Mockito failure in last night's build with the following stack 
> trace:
> {noformat}
> org.mockito.exceptions.misusing.WrongTypeOfReturnValue: 
> INodeFile cannot be returned by isRunning()
> isRunning() should return boolean
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.addBlockOnNodes(TestBlockManager.java:555)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.doTestSufficientlyReplBlocksUsesNewRack(TestBlockManager.java:404)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testSufficientlyReplBlocksUsesNewRack(TestBlockManager.java:397)
> {noformat}
> This is pretty confusing since we explicitly set isRunning() to return true 
> in TestBlockManager's \@Before method
> {noformat}
> Mockito.doReturn(true).when(fsn).isRunning();
> {noformat}
> Also saw the following exception in the logs:
> {noformat}
> 2017-05-12 05:42:27,903 ERROR blockmanagement.BlockManager 
> (BlockManager.java:run(2796)) - Error while processing replication queues 
> async
> org.mockito.exceptions.base.MockitoException: 
> 'writeLockInterruptibly' is a *void method* and it *cannot* be stubbed with a 
> *return value*!
> Voids are usually stubbed with Throwables:
> doThrow(exception).when(mock).someVoidMethod();
> If the method you are trying to stub is *overloaded* then make sure you are 
> calling the right overloaded version.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processMisReplicatesAsync(BlockManager.java:2841)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.access$100(BlockManager.java:120)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$1.run(BlockManager.java:2792)
> {noformat}
> This is also weird since we don't do any explicit mocking with 
> {{writeLockInterruptibly}} via fsn in the test. Something must be modifying 
> the mocks concurrently, or accessing them in a non-thread-safe way; I can't 
> explain the failures otherwise. 
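The WrongTypeOfReturnValue error described above can arise when two threads stub the same mock concurrently: the framework binds a doReturn() value to the most recently recorded invocation, so an interleaved thread can attach a value to the wrong method. The sketch below is a hypothetical, stdlib-only model of that failure mode, not Mockito's actual implementation; all names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

class INodeFile { }  // stand-in for the HDFS class named in the error message

// Toy stubbing registry: doReturn() binds to whichever method was recorded
// last, mimicking the shared "ongoing stubbing" state that makes concurrent
// stubbing unsafe.
class StubRegistry {
    private String pendingMethod;
    private final Map<String, Object> stubs = new HashMap<>();

    void record(String method) { pendingMethod = method; }

    void doReturn(Object value) { stubs.put(pendingMethod, value); }

    Object invoke(String method, Class<?> expectedType) {
        Object v = stubs.get(method);
        if (v != null && !expectedType.isInstance(v)) {
            throw new IllegalStateException(
                v.getClass().getSimpleName() + " cannot be returned by "
                + method + "()\n" + method + "() should return "
                + expectedType.getSimpleName().toLowerCase());
        }
        return v;
    }
}

public class StubbingRace {
    public static void main(String[] args) {
        StubRegistry mock = new StubRegistry();
        mock.record("addBlockOnNodes");  // thread B begins stubbing addBlockOnNodes
        mock.record("isRunning");        // thread A begins stubbing isRunning
        mock.doReturn(new INodeFile());  // B's value lands on A's pending method
        try {
            mock.invoke("isRunning", Boolean.class);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
            // INodeFile cannot be returned by isRunning()
            // isRunning() should return boolean
        }
    }
}
```

The interleaving shown deterministically reproduces the shape of the reported error, which is consistent with the non-thread-safe-mock-access theory in the description.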






[jira] [Updated] (HDFS-11818) TestBlockManager.testSufficientlyReplBlocksUsesNewRack fails intermittently

2017-05-12 Thread Nathan Roberts (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Roberts updated HDFS-11818:
--
Attachment: HDFS-11818.patch
HDFS-11818-branch-2.patch

Patches for trunk and branch-2. The branch-2 patch cherry-picks cleanly to 2.8.

> TestBlockManager.testSufficientlyReplBlocksUsesNewRack fails intermittently
> ---
>
> Key: HDFS-11818
> URL: https://issues.apache.org/jira/browse/HDFS-11818
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2, 2.8.2
>Reporter: Eric Badger
>Assignee: Nathan Roberts
> Attachments: HDFS-11818-branch-2.patch, HDFS-11818.patch
>
>
> Saw a weird Mockito failure in last night's build with the following stack 
> trace:
> {noformat}
> org.mockito.exceptions.misusing.WrongTypeOfReturnValue: 
> INodeFile cannot be returned by isRunning()
> isRunning() should return boolean
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.addBlockOnNodes(TestBlockManager.java:555)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.doTestSufficientlyReplBlocksUsesNewRack(TestBlockManager.java:404)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testSufficientlyReplBlocksUsesNewRack(TestBlockManager.java:397)
> {noformat}
> This is pretty confusing since we explicitly set isRunning() to return true 
> in TestBlockManager's \@Before method
> {noformat}
> Mockito.doReturn(true).when(fsn).isRunning();
> {noformat}
> Also saw the following exception in the logs:
> {noformat}
> 2017-05-12 05:42:27,903 ERROR blockmanagement.BlockManager 
> (BlockManager.java:run(2796)) - Error while processing replication queues 
> async
> org.mockito.exceptions.base.MockitoException: 
> 'writeLockInterruptibly' is a *void method* and it *cannot* be stubbed with a 
> *return value*!
> Voids are usually stubbed with Throwables:
> doThrow(exception).when(mock).someVoidMethod();
> If the method you are trying to stub is *overloaded* then make sure you are 
> calling the right overloaded version.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processMisReplicatesAsync(BlockManager.java:2841)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.access$100(BlockManager.java:120)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$1.run(BlockManager.java:2792)
> {noformat}
> This is also weird since we don't do any explicit mocking with 
> {{writeLockInterruptibly}} via fsn in the test. Something must be modifying 
> the mocks concurrently, or accessing them in a non-thread-safe way; I can't 
> explain the failures otherwise. 






[jira] [Updated] (HDFS-11791) [READ] Test for increasing replication of provided files.

2017-05-12 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11791:
--
Status: Open  (was: Patch Available)

> [READ] Test for increasing replication of provided files.
> -
>
> Key: HDFS-11791
> URL: https://issues.apache.org/jira/browse/HDFS-11791
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11791-HDFS-9806.001.patch, 
> HDFS-11791-HDFS-9806.002.patch
>
>
> Test whether increasing the replication of a file with storage policy 
> {{PROVIDED}} replicates blocks locally (i.e., to {{DISK}}).






[jira] [Updated] (HDFS-11791) [READ] Test for increasing replication of provided files.

2017-05-12 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11791:
--
Status: Patch Available  (was: Open)

> [READ] Test for increasing replication of provided files.
> -
>
> Key: HDFS-11791
> URL: https://issues.apache.org/jira/browse/HDFS-11791
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11791-HDFS-9806.001.patch, 
> HDFS-11791-HDFS-9806.002.patch
>
>
> Test whether increasing the replication of a file with storage policy 
> {{PROVIDED}} replicates blocks locally (i.e., to {{DISK}}).






[jira] [Updated] (HDFS-11791) [READ] Test for increasing replication of provided files.

2017-05-12 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11791:
--
Attachment: HDFS-11791-HDFS-9806.002.patch

New patch fixing the checkstyle errors in the earlier patch

> [READ] Test for increasing replication of provided files.
> -
>
> Key: HDFS-11791
> URL: https://issues.apache.org/jira/browse/HDFS-11791
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11791-HDFS-9806.001.patch, 
> HDFS-11791-HDFS-9806.002.patch
>
>
> Test whether increasing the replication of a file with storage policy 
> {{PROVIDED}} replicates blocks locally (i.e., to {{DISK}}).






[jira] [Commented] (HDFS-11791) [READ] Test for increasing replication of provided files.

2017-05-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16008558#comment-16008558
 ] 

Hadoop QA commented on HDFS-11791:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
12s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} HDFS-9806 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-tools/hadoop-fs2img: The patch generated 
2 new + 18 unchanged - 0 fixed = 20 total (was 18) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
34s{color} | {color:green} hadoop-fs2img in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11791 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867825/HDFS-11791-HDFS-9806.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f6edd18becda 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-9806 / d0fc899 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19417/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-fs2img-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19417/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-fs2img.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19417/testReport/ |
| modules | C: hadoop-tools/hadoop-fs2img U: hadoop-tools/hadoop-fs2img |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19417/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [READ] Test for increasing replication of provided files.
> -
>
> Key: HDFS-11791
> 

[jira] [Updated] (HDFS-11792) [READ] Test cases for ProvidedVolumeDF and ProviderBlockIteratorImpl

2017-05-12 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11792:
--
Summary: [READ] Test cases for ProvidedVolumeDF and 
ProviderBlockIteratorImpl  (was: [READ] Additional test cases for 
ProvidedVolumeImpl)

> [READ] Test cases for ProvidedVolumeDF and ProviderBlockIteratorImpl
> 
>
> Key: HDFS-11792
> URL: https://issues.apache.org/jira/browse/HDFS-11792
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11792-HDFS-9806.001.patch
>
>







[jira] [Updated] (HDFS-11792) [READ] Test cases for ProvidedVolumeDF and ProviderBlockIteratorImpl

2017-05-12 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11792:
--
Status: Patch Available  (was: Open)

> [READ] Test cases for ProvidedVolumeDF and ProviderBlockIteratorImpl
> 
>
> Key: HDFS-11792
> URL: https://issues.apache.org/jira/browse/HDFS-11792
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11792-HDFS-9806.001.patch
>
>
> Test cases for {{ProvidedVolumeDF}} and {{ProviderBlockIteratorImpl}}






[jira] [Updated] (HDFS-11792) [READ] Test cases for ProvidedVolumeDF and ProviderBlockIteratorImpl

2017-05-12 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11792:
--
Description: Test cases for {{ProvidedVolumeDF}} and 
{{ProviderBlockIteratorImpl}}

> [READ] Test cases for ProvidedVolumeDF and ProviderBlockIteratorImpl
> 
>
> Key: HDFS-11792
> URL: https://issues.apache.org/jira/browse/HDFS-11792
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11792-HDFS-9806.001.patch
>
>
> Test cases for {{ProvidedVolumeDF}} and {{ProviderBlockIteratorImpl}}






[jira] [Updated] (HDFS-11792) [READ] Test cases for ProvidedVolumeDF and ProviderBlockIteratorImpl

2017-05-12 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11792:
--
Attachment: HDFS-11792-HDFS-9806.001.patch

> [READ] Test cases for ProvidedVolumeDF and ProviderBlockIteratorImpl
> 
>
> Key: HDFS-11792
> URL: https://issues.apache.org/jira/browse/HDFS-11792
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11792-HDFS-9806.001.patch
>
>
> Test cases for {{ProvidedVolumeDF}} and {{ProviderBlockIteratorImpl}}






[jira] [Assigned] (HDFS-11639) [READ] Encode the BlockAlias in the client protocol

2017-05-12 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti reassigned HDFS-11639:
-

Assignee: Ewan Higgs  (was: Virajith Jalaparti)

> [READ] Encode the BlockAlias in the client protocol
> ---
>
> Key: HDFS-11639
> URL: https://issues.apache.org/jira/browse/HDFS-11639
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
> Attachments: HDFS-11639-HDFS-9806.001.patch, 
> HDFS-11639-HDFS-9806.002.patch
>
>
> As part of the {{PROVIDED}} storage type, we have a {{BlockAlias}} type which 
> encodes information about where the data comes from. i.e. URI, offset, 
> length, and nonce value. This data should be encoded in the protocol 
> ({{LocatedBlockProto}} and the {{BlockTokenIdentifier}}) when a block is 
> available using the PROVIDED storage type.
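A minimal sketch of the fields the description says a {{BlockAlias}} carries (URI, offset, length, nonce). Field and class names here are illustrative assumptions; the real type lives in the HDFS-9806 branch.

```java
import java.net.URI;

// Hypothetical sketch only: shows the data that must travel in
// LocatedBlockProto / BlockTokenIdentifier for a PROVIDED block.
public class BlockAliasSketch {
    final URI uri;     // where the provided data actually lives
    final long offset; // byte offset of the block within that remote file
    final long length; // block length in bytes
    final long nonce;  // freshness/validity check value

    BlockAliasSketch(URI uri, long offset, long length, long nonce) {
        this.uri = uri;
        this.offset = offset;
        this.length = length;
        this.nonce = nonce;
    }

    public static void main(String[] args) {
        BlockAliasSketch alias = new BlockAliasSketch(
                URI.create("s3a://bucket/data.bin"), 0L, 134217728L, 42L);
        System.out.println(alias.length);
    }
}
```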






[jira] [Assigned] (HDFS-11673) [READ] Handle failures of Datanodes with PROVIDED storage

2017-05-12 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti reassigned HDFS-11673:
-

Assignee: Virajith Jalaparti

> [READ] Handle failures of Datanodes with PROVIDED storage
> -
>
> Key: HDFS-11673
> URL: https://issues.apache.org/jira/browse/HDFS-11673
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11673-HDFS-9806.001.patch, 
> HDFS-11673-HDFS-9806.002.patch, HDFS-11673-HDFS-9806.003.patch
>
>
> Blocks on {{PROVIDED}} storage should become unavailable if and only if all 
> Datanodes that are configured with {{PROVIDED}} storage become unavailable. 
> Even if one Datanode with {{PROVIDED}} storage is available, all blocks on 
> the {{PROVIDED}} storage should be accessible.
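The availability rule above ("unavailable if and only if all PROVIDED datanodes are unavailable") can be sketched as a single predicate. The {{DatanodeInfo}} shape below is an illustrative stand-in, not the actual HDFS class.

```java
import java.util.Arrays;
import java.util.List;

// Hedged sketch of the rule: PROVIDED blocks stay reachable as long as
// at least one datanode configured with PROVIDED storage is alive.
public class ProvidedAvailability {
    static class DatanodeInfo {
        final boolean hasProvidedStorage;
        final boolean alive;
        DatanodeInfo(boolean hasProvidedStorage, boolean alive) {
            this.hasProvidedStorage = hasProvidedStorage;
            this.alive = alive;
        }
    }

    static boolean providedBlocksAvailable(List<DatanodeInfo> datanodes) {
        // unavailable if and only if *every* PROVIDED datanode is down
        return datanodes.stream()
                .anyMatch(dn -> dn.hasProvidedStorage && dn.alive);
    }

    public static void main(String[] args) {
        List<DatanodeInfo> cluster = Arrays.asList(
                new DatanodeInfo(true, false),   // PROVIDED datanode, down
                new DatanodeInfo(true, true),    // PROVIDED datanode, alive
                new DatanodeInfo(false, true));  // DISK-only datanode
        System.out.println(providedBlocksAvailable(cluster));
    }
}
```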






[jira] [Updated] (HDFS-11791) [READ] Test for increasing replication of provided files.

2017-05-12 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11791:
--
Status: Patch Available  (was: Open)

> [READ] Test for increasing replication of provided files.
> -
>
> Key: HDFS-11791
> URL: https://issues.apache.org/jira/browse/HDFS-11791
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-11791-HDFS-9806.001.patch
>
>
> Test whether increasing the replication of a file with storage policy 
> {{PROVIDED}} replicates blocks locally (i.e., to {{DISK}}).






[jira] [Assigned] (HDFS-11791) [READ] Test for increasing replication of provided files.

2017-05-12 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti reassigned HDFS-11791:
-

Assignee: Virajith Jalaparti

> [READ] Test for increasing replication of provided files.
> -
>
> Key: HDFS-11791
> URL: https://issues.apache.org/jira/browse/HDFS-11791
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11791-HDFS-9806.001.patch
>
>
> Test whether increasing the replication of a file with storage policy 
> {{PROVIDED}} replicates blocks locally (i.e., to {{DISK}}).






[jira] [Assigned] (HDFS-11792) [READ] Additional test cases for ProvidedVolumeImpl

2017-05-12 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti reassigned HDFS-11792:
-

Assignee: Virajith Jalaparti

> [READ] Additional test cases for ProvidedVolumeImpl
> ---
>
> Key: HDFS-11792
> URL: https://issues.apache.org/jira/browse/HDFS-11792
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>







[jira] [Updated] (HDFS-11791) [READ] Test for increasing replication of provided files.

2017-05-12 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11791:
--
Attachment: HDFS-11791-HDFS-9806.001.patch

Initial patch attached

> [READ] Test for increasing replication of provided files.
> -
>
> Key: HDFS-11791
> URL: https://issues.apache.org/jira/browse/HDFS-11791
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-11791-HDFS-9806.001.patch
>
>
> Test whether increasing the replication of a file with storage policy 
> {{PROVIDED}} replicates blocks locally (i.e., to {{DISK}}).






[jira] [Updated] (HDFS-11791) [READ] Test for increasing replication of provided files.

2017-05-12 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11791:
--
Description: Test whether increasing the replication of a file with storage 
policy {{PROVIDED}} replicates blocks locally (i.e., to {{DISK}}).

> [READ] Test for increasing replication of provided files.
> -
>
> Key: HDFS-11791
> URL: https://issues.apache.org/jira/browse/HDFS-11791
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-11791-HDFS-9806.001.patch
>
>
> Test whether increasing the replication of a file with storage policy 
> {{PROVIDED}} replicates blocks locally (i.e., to {{DISK}}).






[jira] [Commented] (HDFS-11816) Update ciphers list for HttpFS

2017-05-12 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16008522#comment-16008522
 ] 

Robert Kanter commented on HDFS-11816:
--

+1 pending comments on HADOOP-14417.

> Update ciphers list for HttpFS
> --
>
> Key: HDFS-11816
> URL: https://issues.apache.org/jira/browse/HDFS-11816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, security
>Affects Versions: 2.9.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-11816.branch-2.001.patch
>
>
> In Oracle Linux 6.8 configurations, the curl command cannot connect to 
> certain CDH services that run on Apache Tomcat when the cluster has been 
> configured for TLS/SSL. Specifically, HttpFS, KMS, Oozie, and Solr services 
> reject connection attempts because the default cipher configuration uses weak 
> temporary server keys (based on Diffie-Hellman key exchange protocol).
> https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_os_ki.html#tls_weak_ciphers_rejected_by_oracle_linux_6






[jira] [Commented] (HDFS-11811) Ozone: SCM: Support Delete Block

2017-05-12 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16008495#comment-16008495
 ] 

Anu Engineer commented on HDFS-11811:
-

+1, LGTM. Please feel free to commit after taking care of the Jenkins warnings.

> Ozone: SCM: Support Delete Block
> 
>
> Key: HDFS-11811
> URL: https://issues.apache.org/jira/browse/HDFS-11811
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11811-HDFS-7240.001.patch, 
> HDFS-11811-HDFS-7240.002.patch
>
>
> This is a follow up of HDFS-11504 to add delete block API. For each deleted 
> block, we will remove the original entry for the block and add a tombstone in 
> the blockstore db. 
> We will add an async thread to process the tombstones and wait for the 
> container report confirmation to eventually clean up the tombstones. 
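The delete-via-tombstone scheme described above can be sketched with an in-memory map standing in for the blockstore db. Key prefixes and method names below are assumptions for illustration, not the actual SCM schema.

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: deleteBlock() removes the live entry and records a
// tombstone; an async path later drops the tombstone once a container
// report confirms the physical delete.
public class TombstoneStore {
    private final Map<String, byte[]> db = new HashMap<>();
    private static final byte[] TOMBSTONE = new byte[0];

    void putBlock(String blockId, byte[] data) {
        db.put(blockId, data);
    }

    void deleteBlock(String blockId) {
        // remove the original entry, leave a tombstone for async cleanup
        db.remove(blockId);
        db.put("#deleted#" + blockId, TOMBSTONE);
    }

    void onContainerReportConfirmed(String blockId) {
        // container report confirmed the delete: clean up the tombstone
        db.remove("#deleted#" + blockId);
    }

    boolean hasTombstone(String blockId) {
        return db.containsKey("#deleted#" + blockId);
    }

    public static void main(String[] args) {
        TombstoneStore store = new TombstoneStore();
        store.putBlock("b1", new byte[]{1});
        store.deleteBlock("b1");
        System.out.println(store.hasTombstone("b1"));
        store.onContainerReportConfirmed("b1");
        System.out.println(store.hasTombstone("b1"));
    }
}
```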






[jira] [Commented] (HDFS-11816) Update ciphers list for HttpFS

2017-05-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16008479#comment-16008479
 ] 

Hadoop QA commented on HDFS-11816:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
33s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m 
16s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 7s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:8515d35 |
| JIRA Issue | HDFS-11816 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867809/HDFS-11816.branch-2.001.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux a7ae2ddad21e 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 53d9f56 |
| shellcheck | v0.4.6 |
| JDK v1.7.0_121  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19415/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19415/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update ciphers list for HttpFS
> --
>
> Key: HDFS-11816
> URL: https://issues.apache.org/jira/browse/HDFS-11816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, security
>Affects Versions: 2.9.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-11816.branch-2.001.patch
>
>
> In Oracle Linux 6.8 configurations, the curl command cannot connect to 
> certain CDH services that run on Apache Tomcat when the cluster has been 
> configured for TLS/SSL. Specifically, HttpFS, KMS, Oozie, and Solr services 
> reject connection attempts because the default cipher configuration uses weak 
> temporary server keys (based on Diffie-Hellman key exchange protocol).
> https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_os_ki.html#tls_weak_ciphers_rejected_by_oracle_linux_6






[jira] [Updated] (HDFS-11815) CBlockManager#main should join() after start() service

2017-05-12 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11815:
--
Attachment: HDFS-11815-HDFS-7240.001.patch

Thanks [~xyao] for the catch! Posted v001 patch to add join().

> CBlockManager#main should join() after start() service 
> ---
>
> Key: HDFS-11815
> URL: https://issues.apache.org/jira/browse/HDFS-11815
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
> Attachments: HDFS-11815-HDFS-7240.001.patch
>
>
> This seems to be missing from HDFS-11631, which might cause problems when 
> starting/stopping the CBlock server with the new hdfs CLI. 
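The join-after-start pattern the issue asks for can be sketched as below. Without join(), main() returns immediately and the JVM may exit while the service thread is still running. Names are illustrative, not the actual CBlockManager API.

```java
// Hedged sketch of a daemon main(): start() launches the service thread,
// join() blocks main() so the process stays alive until the service exits.
public class DaemonMain {
    private Thread serviceThread;

    public void start() {
        serviceThread = new Thread(() -> {
            try {
                Thread.sleep(100); // stand-in for serving RPCs
            } catch (InterruptedException ignored) {
            }
        });
        serviceThread.start();
    }

    public void join() throws InterruptedException {
        serviceThread.join(); // block until the service thread terminates
    }

    public static void main(String[] args) throws InterruptedException {
        DaemonMain m = new DaemonMain();
        m.start();
        m.join(); // the missing call: without it, main() falls through
        System.out.println("service thread finished");
    }
}
```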






[jira] [Updated] (HDFS-11815) CBlockManager#main should join() after start() service

2017-05-12 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11815:
--
Status: Patch Available  (was: Open)

> CBlockManager#main should join() after start() service 
> ---
>
> Key: HDFS-11815
> URL: https://issues.apache.org/jira/browse/HDFS-11815
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
> Attachments: HDFS-11815-HDFS-7240.001.patch
>
>
> This seems to be missing from HDFS-11631, which might cause problems when 
> starting/stopping the CBlock server with the new hdfs CLI. 






[jira] [Updated] (HDFS-11816) Update ciphers list for HttpFS

2017-05-12 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11816:
--
Attachment: HDFS-11816.branch-2.001.patch

Renamed the patch file because it targets branch-2.

> Update ciphers list for HttpFS
> --
>
> Key: HDFS-11816
> URL: https://issues.apache.org/jira/browse/HDFS-11816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, security
>Affects Versions: 2.9.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-11816.branch-2.001.patch
>
>
> In Oracle Linux 6.8 configurations, the curl command cannot connect to 
> certain CDH services that run on Apache Tomcat when the cluster has been 
> configured for TLS/SSL. Specifically, HttpFS, KMS, Oozie, and Solr services 
> reject connection attempts because the default cipher configuration uses weak 
> temporary server keys (based on Diffie-Hellman key exchange protocol).
> https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_os_ki.html#tls_weak_ciphers_rejected_by_oracle_linux_6






[jira] [Updated] (HDFS-11816) Update ciphers list for HttpFS

2017-05-12 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11816:
--
Status: Patch Available  (was: Open)

> Update ciphers list for HttpFS
> --
>
> Key: HDFS-11816
> URL: https://issues.apache.org/jira/browse/HDFS-11816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, security
>Affects Versions: 2.9.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-11816.branch-2.001.patch
>
>
> In Oracle Linux 6.8 configurations, the curl command cannot connect to 
> certain CDH services that run on Apache Tomcat when the cluster has been 
> configured for TLS/SSL. Specifically, HttpFS, KMS, Oozie, and Solr services 
> reject connection attempts because the default cipher configuration uses weak 
> temporary server keys (based on Diffie-Hellman key exchange protocol).
> https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_os_ki.html#tls_weak_ciphers_rejected_by_oracle_linux_6






[jira] [Updated] (HDFS-11816) Update ciphers list for HttpFS

2017-05-12 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11816:
--
Status: Open  (was: Patch Available)

> Update ciphers list for HttpFS
> --
>
> Key: HDFS-11816
> URL: https://issues.apache.org/jira/browse/HDFS-11816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, security
>Affects Versions: 2.9.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>
> In Oracle Linux 6.8 configurations, the curl command cannot connect to 
> certain CDH services that run on Apache Tomcat when the cluster has been 
> configured for TLS/SSL. Specifically, HttpFS, KMS, Oozie, and Solr services 
> reject connection attempts because the default cipher configuration uses weak 
> temporary server keys (based on Diffie-Hellman key exchange protocol).
> https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_os_ki.html#tls_weak_ciphers_rejected_by_oracle_linux_6






[jira] [Updated] (HDFS-11816) Update ciphers list for HttpFS

2017-05-12 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11816:
--
Attachment: (was: HDFS-11816.001.patch)

> Update ciphers list for HttpFS
> --
>
> Key: HDFS-11816
> URL: https://issues.apache.org/jira/browse/HDFS-11816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, security
>Affects Versions: 2.9.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>
> In Oracle Linux 6.8 configurations, the curl command cannot connect to 
> certain CDH services that run on Apache Tomcat when the cluster has been 
> configured for TLS/SSL. Specifically, HttpFS, KMS, Oozie, and Solr services 
> reject connection attempts because the default cipher configuration uses weak 
> temporary server keys (based on Diffie-Hellman key exchange protocol).
> https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_os_ki.html#tls_weak_ciphers_rejected_by_oracle_linux_6






[jira] [Commented] (HDFS-11816) Update ciphers list for HttpFS

2017-05-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16008364#comment-16008364
 ] 

Hadoop QA commented on HDFS-11816:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-11816 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11816 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867801/HDFS-11816.001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19414/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update ciphers list for HttpFS
> --
>
> Key: HDFS-11816
> URL: https://issues.apache.org/jira/browse/HDFS-11816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, security
>Affects Versions: 2.9.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-11816.001.patch
>
>
> In Oracle Linux 6.8 configurations, the curl command cannot connect to 
> certain CDH services that run on Apache Tomcat when the cluster has been 
> configured for TLS/SSL. Specifically, HttpFS, KMS, Oozie, and Solr services 
> reject connection attempts because the default cipher configuration uses weak 
> temporary server keys (based on Diffie-Hellman key exchange protocol).
> https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_os_ki.html#tls_weak_ciphers_rejected_by_oracle_linux_6






[jira] [Updated] (HDFS-11816) Update ciphers list for HttpFS

2017-05-12 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11816:
--
Status: Patch Available  (was: Open)

> Update ciphers list for HttpFS
> --
>
> Key: HDFS-11816
> URL: https://issues.apache.org/jira/browse/HDFS-11816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, security
>Affects Versions: 2.9.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-11816.001.patch
>
>
> In Oracle Linux 6.8 configurations, the curl command cannot connect to 
> certain CDH services that run on Apache Tomcat when the cluster has been 
> configured for TLS/SSL. Specifically, HttpFS, KMS, Oozie, and Solr services 
> reject connection attempts because the default cipher configuration uses weak 
> temporary server keys (based on Diffie-Hellman key exchange protocol).
> https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_os_ki.html#tls_weak_ciphers_rejected_by_oracle_linux_6






[jira] [Updated] (HDFS-11816) Update ciphers list for HttpFS

2017-05-12 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11816:
--
Attachment: HDFS-11816.001.patch

Patch 001
* Remove DHE ciphers from the default list

Testing done
* sslscan no longer lists DHE ciphers
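The intent of the change (dropping the weak ephemeral Diffie-Hellman suites from an explicit cipher list) can be sketched as a simple filter. The cipher names below are common JSSE names used for illustration; the actual list shipped in the patch may differ.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hedged sketch: keep an explicit cipher list but remove every suite
// that uses DHE key exchange (weak temporary server keys).
public class CipherFilter {
    public static void main(String[] args) {
        List<String> ciphers = Arrays.asList(
                "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
                "TLS_DHE_RSA_WITH_AES_128_CBC_SHA",  // DHE: weak temp key
                "TLS_DHE_DSS_WITH_AES_128_CBC_SHA",  // DHE: weak temp key
                "TLS_RSA_WITH_AES_128_CBC_SHA");
        List<String> hardened = ciphers.stream()
                .filter(c -> !c.contains("_DHE_"))   // drop DHE suites only
                .collect(Collectors.toList());
        System.out.println(hardened.size());
    }
}
```

Note that ECDHE suites are kept: the `_DHE_` check matches only plain Diffie-Hellman ephemeral key exchange.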

> Update ciphers list for HttpFS
> --
>
> Key: HDFS-11816
> URL: https://issues.apache.org/jira/browse/HDFS-11816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, security
>Affects Versions: 2.9.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-11816.001.patch
>
>
> In Oracle Linux 6.8 configurations, the curl command cannot connect to 
> certain CDH services that run on Apache Tomcat when the cluster has been 
> configured for TLS/SSL. Specifically, HttpFS, KMS, Oozie, and Solr services 
> reject connection attempts because the default cipher configuration uses weak 
> temporary server keys (based on Diffie-Hellman key exchange protocol).
> https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_os_ki.html#tls_weak_ciphers_rejected_by_oracle_linux_6






[jira] [Updated] (HDFS-11818) TestBlockManager.testSufficientlyReplBlocksUsesNewRack fails intermittently

2017-05-12 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HDFS-11818:
--
Affects Version/s: 2.8.2

> TestBlockManager.testSufficientlyReplBlocksUsesNewRack fails intermittently
> ---
>
> Key: HDFS-11818
> URL: https://issues.apache.org/jira/browse/HDFS-11818
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Eric Badger
>Assignee: Nathan Roberts
>
> Saw a weird Mockito failure in last night's build with the following stack 
> trace:
> {noformat}
> org.mockito.exceptions.misusing.WrongTypeOfReturnValue: 
> INodeFile cannot be returned by isRunning()
> isRunning() should return boolean
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.addBlockOnNodes(TestBlockManager.java:555)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.doTestSufficientlyReplBlocksUsesNewRack(TestBlockManager.java:404)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testSufficientlyReplBlocksUsesNewRack(TestBlockManager.java:397)
> {noformat}
> This is pretty confusing since we explicitly set isRunning() to return true 
> in TestBlockManager's \@Before method
> {noformat}
> 154Mockito.doReturn(true).when(fsn).isRunning();
> {noformat}
> Also saw the following exception in the logs:
> {noformat}
> 2017-05-12 05:42:27,903 ERROR blockmanagement.BlockManager 
> (BlockManager.java:run(2796)) - Error while processing replication queues 
> async
> org.mockito.exceptions.base.MockitoException: 
> 'writeLockInterruptibly' is a *void method* and it *cannot* be stubbed with a 
> *return value*!
> Voids are usually stubbed with Throwables:
> doThrow(exception).when(mock).someVoidMethod();
> If the method you are trying to stub is *overloaded* then make sure you are 
> calling the right overloaded version.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processMisReplicatesAsync(BlockManager.java:2841)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.access$100(BlockManager.java:120)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$1.run(BlockManager.java:2792)
> {noformat}
> This is also weird since we don't do any explicit mocking with 
> {{writeLockInterruptibly}} via fsn in the test. It has to be something 
> changing the mocks or non-thread safe access or something like that. I can't 
> explain the failures otherwise. 






[jira] [Commented] (HDFS-11818) TestBlockManager.testSufficientlyReplBlocksUsesNewRack fails intermittently

2017-05-12 Thread Nathan Roberts (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16008325#comment-16008325
 ] 

Nathan Roberts commented on HDFS-11818:
---

I know what the issue is; will post a patch shortly.

> TestBlockManager.testSufficientlyReplBlocksUsesNewRack fails intermittently
> ---
>
> Key: HDFS-11818
> URL: https://issues.apache.org/jira/browse/HDFS-11818
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Nathan Roberts
>
> Saw a weird Mockito failure in last night's build with the following stack 
> trace:
> {noformat}
> org.mockito.exceptions.misusing.WrongTypeOfReturnValue: 
> INodeFile cannot be returned by isRunning()
> isRunning() should return boolean
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.addBlockOnNodes(TestBlockManager.java:555)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.doTestSufficientlyReplBlocksUsesNewRack(TestBlockManager.java:404)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testSufficientlyReplBlocksUsesNewRack(TestBlockManager.java:397)
> {noformat}
> This is pretty confusing since we explicitly set isRunning() to return true 
> in TestBlockManager's \@Before method
> {noformat}
> 154Mockito.doReturn(true).when(fsn).isRunning();
> {noformat}
> Also saw the following exception in the logs:
> {noformat}
> 2017-05-12 05:42:27,903 ERROR blockmanagement.BlockManager 
> (BlockManager.java:run(2796)) - Error while processing replication queues 
> async
> org.mockito.exceptions.base.MockitoException: 
> 'writeLockInterruptibly' is a *void method* and it *cannot* be stubbed with a 
> *return value*!
> Voids are usually stubbed with Throwables:
> doThrow(exception).when(mock).someVoidMethod();
> If the method you are trying to stub is *overloaded* then make sure you are 
> calling the right overloaded version.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processMisReplicatesAsync(BlockManager.java:2841)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.access$100(BlockManager.java:120)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$1.run(BlockManager.java:2792)
> {noformat}
> This is also weird since we don't do any explicit mocking with 
> {{writeLockInterruptibly}} via fsn in the test. It has to be something 
> changing the mocks or non-thread safe access or something like that. I can't 
> explain the failures otherwise. 






[jira] [Assigned] (HDFS-11818) TestBlockManager.testSufficientlyReplBlocksUsesNewRack fails intermittently

2017-05-12 Thread Nathan Roberts (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Roberts reassigned HDFS-11818:
-

Assignee: Nathan Roberts

> TestBlockManager.testSufficientlyReplBlocksUsesNewRack fails intermittently
> ---
>
> Key: HDFS-11818
> URL: https://issues.apache.org/jira/browse/HDFS-11818
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Nathan Roberts
>
> Saw a weird Mockito failure in last night's build with the following stack 
> trace:
> {noformat}
> org.mockito.exceptions.misusing.WrongTypeOfReturnValue: 
> INodeFile cannot be returned by isRunning()
> isRunning() should return boolean
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.addBlockOnNodes(TestBlockManager.java:555)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.doTestSufficientlyReplBlocksUsesNewRack(TestBlockManager.java:404)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testSufficientlyReplBlocksUsesNewRack(TestBlockManager.java:397)
> {noformat}
> This is pretty confusing since we explicitly set isRunning() to return true 
> in TestBlockManager's \@Before method
> {noformat}
> Mockito.doReturn(true).when(fsn).isRunning();
> {noformat}
> Also saw the following exception in the logs:
> {noformat}
> 2017-05-12 05:42:27,903 ERROR blockmanagement.BlockManager 
> (BlockManager.java:run(2796)) - Error while processing replication queues 
> async
> org.mockito.exceptions.base.MockitoException: 
> 'writeLockInterruptibly' is a *void method* and it *cannot* be stubbed with a 
> *return value*!
> Voids are usually stubbed with Throwables:
> doThrow(exception).when(mock).someVoidMethod();
> If the method you are trying to stub is *overloaded* then make sure you are 
> calling the right overloaded version.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processMisReplicatesAsync(BlockManager.java:2841)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.access$100(BlockManager.java:120)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$1.run(BlockManager.java:2792)
> {noformat}
> This is also weird since we don't do any explicit mocking with 
> {{writeLockInterruptibly}} via fsn in the test. It has to be something 
> changing the mocks or non-thread safe access or something like that. I can't 
> explain the failures otherwise. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11818) TestBlockManager.testSufficientlyReplBlocksUsesNewRack fails intermittently

2017-05-12 Thread Eric Badger (JIRA)
Eric Badger created HDFS-11818:
--

 Summary: TestBlockManager.testSufficientlyReplBlocksUsesNewRack 
fails intermittently
 Key: HDFS-11818
 URL: https://issues.apache.org/jira/browse/HDFS-11818
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Eric Badger


Saw a weird Mockito failure in last night's build with the following stack 
trace:
{noformat}
org.mockito.exceptions.misusing.WrongTypeOfReturnValue: 
INodeFile cannot be returned by isRunning()
isRunning() should return boolean
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.addBlockOnNodes(TestBlockManager.java:555)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.doTestSufficientlyReplBlocksUsesNewRack(TestBlockManager.java:404)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testSufficientlyReplBlocksUsesNewRack(TestBlockManager.java:397)
{noformat}
This is pretty confusing since we explicitly set isRunning() to return true in 
TestBlockManager's \@Before method
{noformat}
Mockito.doReturn(true).when(fsn).isRunning();
{noformat}

Also saw the following exception in the logs:
{noformat}
2017-05-12 05:42:27,903 ERROR blockmanagement.BlockManager 
(BlockManager.java:run(2796)) - Error while processing replication queues async
org.mockito.exceptions.base.MockitoException: 
'writeLockInterruptibly' is a *void method* and it *cannot* be stubbed with a 
*return value*!
Voids are usually stubbed with Throwables:
doThrow(exception).when(mock).someVoidMethod();
If the method you are trying to stub is *overloaded* then make sure you are 
calling the right overloaded version.
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processMisReplicatesAsync(BlockManager.java:2841)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.access$100(BlockManager.java:120)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$1.run(BlockManager.java:2792)
{noformat}
This is also weird since we don't do any explicit mocking with 
{{writeLockInterruptibly}} via fsn in the test. It has to be something changing 
the mocks, or non-thread-safe access, or something like that. I can't explain 
the failures otherwise. 
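The failure mode above, a value of the wrong type ending up attached to a stubbed method, is what you would expect if Mockito's "ongoing stubbing" record is mutated from two threads at once. The sketch below is a toy model only, not Mockito's actual implementation: `StubRecorder`, `recordInvocation`, and the method names are all illustrative, and it only shows how a shared last-invocation record lets one thread's `thenReturn` bind to another thread's method.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Toy model (NOT Mockito's real internals) of why concurrent stubbing can
 * cross-wire return values. A shared "last invoked method" record plus an
 * interleaved invocation from another thread makes thenReturn() bind the
 * value to the wrong method: the same shape as
 * "INodeFile cannot be returned by isRunning()".
 */
public class StubRaceDemo {
    static class StubRecorder {
        private final Map<String, Object> stubs = new HashMap<>();
        private String lastInvokedMethod; // shared, unsynchronized state

        void recordInvocation(String method) {
            lastInvokedMethod = method;
        }

        void thenReturn(Object value) {
            // Binds to whatever invocation was recorded last, regardless of
            // which thread recorded it.
            stubs.put(lastInvokedMethod, value);
        }

        Object returnValueFor(String method) {
            return stubs.get(method);
        }
    }

    public static void main(String[] args) {
        StubRecorder recorder = new StubRecorder();

        // Thread B begins stubbing getINodeFile()...
        recorder.recordInvocation("getINodeFile");
        // ...thread A's isRunning() invocation interleaves before B calls
        // thenReturn(), so B's value lands on A's method instead:
        recorder.recordInvocation("isRunning");
        recorder.thenReturn("INodeFile@1234");

        // isRunning() now "returns" an INodeFile-shaped value.
        System.out.println("isRunning -> " + recorder.returnValueFor("isRunning"));
    }
}
```

If the real cause is this kind of interleaving, the fix is to keep mock setup confined to a single thread (or synchronize it), rather than stubbing a shared mock from the test thread while a background thread is still exercising it.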



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-11817) A faulty node can cause a lease leak and NPE on accessing data

2017-05-12 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee reassigned HDFS-11817:
-

Assignee: Kihwal Lee

> A faulty node can cause a lease leak and NPE on accessing data
> --
>
> Key: HDFS-11817
> URL: https://issues.apache.org/jira/browse/HDFS-11817
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
>
> When the namenode performs a lease recovery for a failed write, 
> {{commitBlockSynchronization()}} will fail if none of the new targets has 
> sent a received-IBR. At this point, the data is inaccessible, as the 
> namenode will throw a {{NullPointerException}} upon {{getBlockLocations()}}.
> The lease recovery will be retried in about an hour by the namenode. If the 
> nodes are faulty (usually when there is only one new target), they may not 
> send a block report until this point. If this happens, lease recovery throws 
> an {{AlreadyBeingCreatedException}}, which causes LeaseManager to simply 
> remove the lease without finalizing the inode.
> This results in an inconsistent lease state. The inode stays under 
> construction, but no further lease recovery is attempted. A manual lease 
> recovery is also not allowed. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11817) A faulty node can cause a lease leak and NPE on accessing data

2017-05-12 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-11817:
-

 Summary: A faulty node can cause a lease leak and NPE on accessing 
data
 Key: HDFS-11817
 URL: https://issues.apache.org/jira/browse/HDFS-11817
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Kihwal Lee
Priority: Critical


When the namenode performs a lease recovery for a failed write, 
{{commitBlockSynchronization()}} will fail if none of the new targets has sent 
a received-IBR. At this point, the data is inaccessible, as the namenode will 
throw a {{NullPointerException}} upon {{getBlockLocations()}}.

The lease recovery will be retried in about an hour by the namenode. If the 
nodes are faulty (usually when there is only one new target), they may not 
send a block report until this point. If this happens, lease recovery throws 
an {{AlreadyBeingCreatedException}}, which causes LeaseManager to simply 
remove the lease without finalizing the inode.

This results in an inconsistent lease state. The inode stays under 
construction, but no further lease recovery is attempted. A manual lease 
recovery is also not allowed. 
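The end state described, where the lease is gone but the inode is still under construction, can be pictured with a toy bookkeeping model. The classes and method names below are illustrative only, not the real LeaseManager/INodeFile API; the point is the broken invariant: an under-construction inode should always have a lease driving recovery.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/**
 * Toy model of lease vs. inode bookkeeping (illustrative names, not the
 * actual NameNode classes). Removing a lease without finalizing its inode
 * leaves the file stuck under construction with nothing left to drive
 * further recovery.
 */
public class LeaseLeakDemo {
    static class Namespace {
        final Set<String> leases = new HashSet<>();
        final Map<String, Boolean> underConstruction = new HashMap<>();

        void openForWrite(String path) {
            leases.add(path);
            underConstruction.put(path, true);
        }

        // Correct recovery path: finalize the inode, then drop the lease.
        void recoverLease(String path) {
            underConstruction.put(path, false);
            leases.remove(path);
        }

        // The buggy path described above: AlreadyBeingCreatedException makes
        // the lease manager drop the lease WITHOUT finalizing the inode.
        void removeLeaseOnError(String path) {
            leases.remove(path);
        }

        boolean isRecoverable(String path) {
            // With no lease, nothing will retry recovery for this inode.
            return leases.contains(path);
        }
    }

    public static void main(String[] args) {
        Namespace ns = new Namespace();
        ns.openForWrite("/data/f1");
        ns.removeLeaseOnError("/data/f1");

        // Inconsistent state: still under construction, yet unrecoverable.
        System.out.println("underConstruction="
                + ns.underConstruction.get("/data/f1")
                + " recoverable=" + ns.isRecoverable("/data/f1"));
    }
}
```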




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11816) Update ciphers list for HttpFS

2017-05-12 Thread John Zhuge (JIRA)
John Zhuge created HDFS-11816:
-

 Summary: Update ciphers list for HttpFS
 Key: HDFS-11816
 URL: https://issues.apache.org/jira/browse/HDFS-11816
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: httpfs, security
Affects Versions: 2.9.0
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


In Oracle Linux 6.8 configurations, the curl command cannot connect to certain 
CDH services that run on Apache Tomcat when the cluster has been configured for 
TLS/SSL. Specifically, HttpFS, KMS, Oozie, and Solr services reject connection 
attempts because the default cipher configuration uses weak temporary server 
keys (based on Diffie-Hellman key exchange protocol).

https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_os_ki.html#tls_weak_ciphers_rejected_by_oracle_linux_6
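One common remediation is to pin the Tomcat HTTPS connector to an explicit cipher list that never offers the weak Diffie-Hellman temporary-key suites. The fragment below is a sketch only: attribute support varies by Tomcat version, the port and keystore paths are placeholders, and the exact suites should come from your own security policy.

```xml
<!-- Illustrative Tomcat HTTPS connector for HttpFS (server.xml sketch).
     The ciphers attribute pins an explicit suite list so weak DH key
     exchanges are never negotiated. Paths, port, and suites are examples. -->
<Connector port="14000" protocol="HTTP/1.1" SSLEnabled="true"
           scheme="https" secure="true"
           sslEnabledProtocols="TLSv1.2"
           ciphers="TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
                    TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
           keystoreFile="/path/to/keystore.jks"
           keystorePass="changeit"/>
```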



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11811) Ozone: SCM: Support Delete Block

2017-05-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16007881#comment-16007881
 ] 

Hadoop QA commented on HDFS-11811:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 5s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
32s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 
has 2 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
49s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 10 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-hdfs-project: The patch generated 8 new + 
0 unchanged - 0 fixed = 8 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
13s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 39s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
22s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.ozone.scm.node.TestContainerPlacement |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11811 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867722/HDFS-11811-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 68fdbeec2d2a 3.13.

[jira] [Updated] (HDFS-11674) reserveSpaceForReplicas is not released if append request failed due to mirror down and replica recovered

2017-05-12 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-11674:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.1
   3.0.0-alpha3
   2.7.4
   2.9.0
   Status: Resolved  (was: Patch Available)

Thanks [~arpitagarwal] and [~brahmareddy] for reviews.
Committed to branch-2.7 as well.

> reserveSpaceForReplicas is not released if append request failed due to 
> mirror down and replica recovered
> -
>
> Key: HDFS-11674
> URL: https://issues.apache.org/jira/browse/HDFS-11674
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Critical
>  Labels: release-blocker
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha3, 2.8.1
>
> Attachments: HDFS-11674-01.patch, HDFS-11674-02.patch, 
> HDFS-11674-03.patch, HDFS-11674-branch-2.7-03.patch
>
>
> Scenario:
> 1. 3-node cluster with 
> "dfs.client.block.write.replace-datanode-on-failure.policy" set to DEFAULT. 
> A block is written with some data.
> 2. One of the datanodes, NOT the first DN, is down.
> 3. Client tries to append data to the block and fails since one DN is down.
> 4. Client calls recoverLease() on the file.
> 5. Successful recovery happens.
> Issue:
> 1. DNs the client was connected to before it encountered the mirror being 
> down will have reservedSpaceForReplicas incremented, but never decremented. 
> 2. So in the long run, all of a DN's space ends up in 
> reservedSpaceForReplicas, resulting in OutOfSpace errors.
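The underlying discipline is the usual acquire/release pairing: space reserved on the write path must be released on every exit path, including failure and recovery. The sketch below uses an illustrative API (`Volume`, `tryAppend`), not the actual FsVolumeImpl code, to show the release-in-finally shape the fix needs.

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Illustrative reserve/release accounting for replica space (not the real
 * DataNode API). The key point: release in finally, so a failed append does
 * not leave space permanently reserved.
 */
public class ReplicaSpaceDemo {
    static class Volume {
        final AtomicLong reservedForReplicas = new AtomicLong();

        void reserve(long bytes) { reservedForReplicas.addAndGet(bytes); }
        void release(long bytes) { reservedForReplicas.addAndGet(-bytes); }
    }

    /** Append that may fail mid-pipeline; the reservation is always released. */
    static boolean tryAppend(Volume vol, long bytes, boolean mirrorDown) {
        vol.reserve(bytes);
        try {
            if (mirrorDown) {
                throw new IllegalStateException("downstream DN is down");
            }
            return true; // data written and acknowledged
        } catch (IllegalStateException e) {
            return false; // append failed; caller may run lease recovery
        } finally {
            vol.release(bytes); // release on success AND failure paths
        }
    }

    public static void main(String[] args) {
        Volume vol = new Volume();
        tryAppend(vol, 1024, true); // failed append: nothing stays reserved
        System.out.println("reserved=" + vol.reservedForReplicas.get());
    }
}
```

The bug described above corresponds to the failure path skipping the `finally`-style release entirely, so each failed append permanently grows the reserved counter.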



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-10654) Move building of httpfs dependency analysis under "docs" profile

2017-05-12 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-10654.
-
Resolution: Duplicate

> Move building of httpfs dependency analysis under "docs" profile
> 
>
> Key: HDFS-10654
> URL: https://issues.apache.org/jira/browse/HDFS-10654
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, httpfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HDFS-10654.001.patch
>
>
> When built with "-Pdist" but not "-Pdocs", httpfs still generates a 
> share/docs directory since the dependency report is run unconditionally. 
> Let's move it under the "docs" profile like the rest of the site.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10654) Move building of httpfs dependency analysis under "docs" profile

2017-05-12 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16007800#comment-16007800
 ] 

Andras Bokor commented on HDFS-10654:
-

Fixed by HADOOP-14401. Closing.

> Move building of httpfs dependency analysis under "docs" profile
> 
>
> Key: HDFS-10654
> URL: https://issues.apache.org/jira/browse/HDFS-10654
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, httpfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HDFS-10654.001.patch
>
>
> When built with "-Pdist" but not "-Pdocs", httpfs still generates a 
> share/docs directory since the dependency report is run unconditionally. 
> Let's move it under the "docs" profile like the rest of the site.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11811) Ozone: SCM: Support Delete Block

2017-05-12 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16007762#comment-16007762
 ] 

Xiaoyu Yao commented on HDFS-11811:
---

Thanks [~anu] for the review. Upload v2 patch that addressed the comments. 

bq. 1. what happens if I try to create a volume called "Deleted"? Do you want 
to use a character other than a capital to indicate this is a system-owned 
bucket?

Added a ".Deleted" prefix.

bq. Would it make more sense to return a set of  like 
repeated deletedstatus deletedkeys = 1; So if you send a block that does not 
exist, you can return a proper error.

Added.

bq.  There is a checkStyle warning.

Fixed.


> Ozone: SCM: Support Delete Block
> 
>
> Key: HDFS-11811
> URL: https://issues.apache.org/jira/browse/HDFS-11811
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11811-HDFS-7240.001.patch, 
> HDFS-11811-HDFS-7240.002.patch
>
>
> This is a follow-up to HDFS-11504 to add a delete block API. For each 
> deleted block, we will remove the original entry for the block and add a 
> tombstone in the blockstore db. 
> We will add an async thread to process the tombstones and wait for 
> container report confirmation to eventually clean up the tombstones. 
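The tombstone scheme described above can be sketched with plain in-memory maps. This is illustrative only: the names (`BlockStore`, `deleteBlock`, `confirmDeleted`) are invented here, and the real patch works against a persistent blockstore db, not a HashMap. The sketch also returns a per-block status so an unknown block yields a proper error, as suggested in the review.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/**
 * Illustrative tombstone-based block deletion (not the actual SCM code).
 * deleteBlock() removes the live entry and records a tombstone; a separate
 * pass purges tombstones once container reports confirm the physical delete.
 */
public class TombstoneStoreDemo {
    static class BlockStore {
        final Map<Long, String> blocks = new HashMap<>();     // blockId -> container
        final Set<Long> tombstones = new HashSet<>();

        void putBlock(long id, String container) { blocks.put(id, container); }

        /** Remove the live entry and record a tombstone; false if unknown. */
        boolean deleteBlock(long id) {
            if (blocks.remove(id) == null) {
                return false; // unknown block: caller can surface a proper error
            }
            tombstones.add(id);
            return true;
        }

        /** Called when a container report confirms the block is gone on disk. */
        void confirmDeleted(long id) { tombstones.remove(id); }

        int pendingTombstones() { return tombstones.size(); }
    }

    public static void main(String[] args) {
        BlockStore store = new BlockStore();
        store.putBlock(1L, "container-7");
        store.deleteBlock(1L);
        System.out.println("pending=" + store.pendingTombstones());
        store.confirmDeleted(1L); // container report arrives
        System.out.println("pending=" + store.pendingTombstones());
    }
}
```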



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11811) Ozone: SCM: Support Delete Block

2017-05-12 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11811:
--
Attachment: HDFS-11811-HDFS-7240.002.patch

> Ozone: SCM: Support Delete Block
> 
>
> Key: HDFS-11811
> URL: https://issues.apache.org/jira/browse/HDFS-11811
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11811-HDFS-7240.001.patch, 
> HDFS-11811-HDFS-7240.002.patch
>
>
> This is a follow-up to HDFS-11504 to add a delete block API. For each 
> deleted block, we will remove the original entry for the block and add a 
> tombstone in the blockstore db. 
> We will add an async thread to process the tombstones and wait for 
> container report confirmation to eventually clean up the tombstones. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11794) Add ec sub command -listCodec to show currently supported ec codecs

2017-05-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16007734#comment-16007734
 ] 

Hadoop QA commented on HDFS-11794:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 2s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
26s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
28s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
43s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 13m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 56s{color} | {color:orange} root: The patch generated 4 new + 476 unchanged 
- 0 fixed = 480 total (was 476) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
56s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 10 
unchanged - 0 fixed = 12 total (was 10) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
24s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
18s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 40s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}150m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getErasureCodingCodecs(RpcController,
 ErasureCodingProt