[jira] [Commented] (HDFS-12100) Ozone: KSM: Allocate key should honour volume quota if quota is set on the volume
[ https://issues.apache.org/jira/browse/HDFS-12100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150210#comment-16150210 ] Lokesh Jain commented on HDFS-12100: [~anu] [~msingh] Please review the patch. I will fix the whitespace issues in the revised patch. > Ozone: KSM: Allocate key should honour volume quota if quota is set on the > volume > - > > Key: HDFS-12100 > URL: https://issues.apache.org/jira/browse/HDFS-12100 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Lokesh Jain > Fix For: HDFS-7240 > > Attachments: HDFS-12100-HDFS-7240.001.patch, > HDFS-12100-HDFS-7240.002.patch > > > KeyManagerImpl#allocateKey currently does not check the volume quota before > allocating a key, which can cause a volume quota overrun. > The volume quota needs to be checked before allocating the key in the SCM. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
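For context, the fix described in HDFS-12100 amounts to comparing current usage plus the requested key size against the configured quota before allocating blocks. A minimal sketch of that check (the actual patch is not shown here; the class, method, and field names below are hypothetical, not KSM internals):

```java
/** Hypothetical sketch of a volume-quota check before key allocation. */
public class QuotaCheck {
  // Sentinel meaning "no quota set on the volume" (assumed convention).
  static final long QUOTA_UNSET = Long.MAX_VALUE;

  /**
   * Returns true if allocating keySize more bytes stays within the quota.
   * A volume with no quota configured always passes.
   */
  static boolean withinQuota(long usedBytes, long quotaBytes, long keySize) {
    if (quotaBytes == QUOTA_UNSET) {
      return true; // quota not set: nothing to enforce
    }
    return usedBytes + keySize <= quotaBytes;
  }

  public static void main(String[] args) {
    assert withinQuota(900, 1000, 100);             // exactly fills the quota
    assert !withinQuota(900, 1000, 101);            // would overrun: rejected
    assert withinQuota(0, QUOTA_UNSET, 1L << 40);   // no quota: always allowed
  }
}
```

The key point is that the check must run before blocks are requested from the SCM, otherwise the allocation has already consumed space when the quota violation is detected.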
[jira] [Commented] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval
[ https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150198#comment-16150198 ] lindongdong commented on HDFS-11576: I met this problem too, and am waiting for the patch. Thanks, Lukas :) > Block recovery will fail indefinitely if recovery time > heartbeat interval > --- > > Key: HDFS-11576 > URL: https://issues.apache.org/jira/browse/HDFS-11576 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs, namenode >Affects Versions: 2.7.1, 2.7.2, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Attachments: HDFS-11576.001.patch, HDFS-11576.002.patch, > HDFS-11576.003.patch, HDFS-11576.004.patch, HDFS-11576.005.patch, > HDFS-11576.006.patch, HDFS-11576.007.patch, HDFS-11576.008.patch, > HDFS-11576.009.patch, HDFS-11576.010.patch, HDFS-11576.repro.patch > > > Block recovery will fail indefinitely if the time to recover a block is > always longer than the heartbeat interval. Scenario: > 1. DN sends heartbeat > 2. NN sends a recovery command to DN, recoveryID=X > 3. DN starts recovery > 4. DN sends another heartbeat > 5. NN sends a recovery command to DN, recoveryID=X+1 > 6. DN calls commitBlockSynchronization after succeeding with the first > recovery, which fails on the NN because X < X+1 > ... -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
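The failure loop in the scenario above comes down to the NameNode bumping the recovery ID on every heartbeat while a recovery with the older ID is still in flight, so the eventual commit always carries a stale ID. A simplified illustration of that stale-ID rejection (the real check lives in the NameNode's commitBlockSynchronization path; the class and method names here are illustrative only):

```java
/** Illustrative sketch of why a stale block-recovery ID is rejected. */
public class RecoveryIdCheck {
  private long latestRecoveryId = 0;

  /** NN side: each new recovery command carries a freshly bumped ID. */
  long issueRecoveryCommand() {
    return ++latestRecoveryId;
  }

  /** Commit-side check: only the newest recovery ID is accepted. */
  boolean commit(long recoveryId) {
    return recoveryId >= latestRecoveryId; // a stale ID (X < X+1) fails here
  }

  public static void main(String[] args) {
    RecoveryIdCheck nn = new RecoveryIdCheck();
    long x = nn.issueRecoveryCommand();   // step 2: recoveryID = X
    long x1 = nn.issueRecoveryCommand();  // step 5: recoveryID = X+1, DN still busy with X
    assert !nn.commit(x);                 // step 6: first recovery's commit rejected
    assert nn.commit(x1);                 // only the latest ID would succeed
  }
}
```

If recovery always takes longer than one heartbeat interval, every commit arrives with an ID at least one behind, so recovery never succeeds.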
[jira] [Updated] (HDFS-12370) Ozone: Implement TopN container choosing policy for BlockDeletionService
[ https://issues.apache.org/jira/browse/HDFS-12370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-12370: - Attachment: HDFS-12370-HDFS-7240.002.patch > Ozone: Implement TopN container choosing policy for BlockDeletionService > > > Key: HDFS-12370 > URL: https://issues.apache.org/jira/browse/HDFS-12370 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Attachments: HDFS-12370-HDFS-7240.001.patch, > HDFS-12370-HDFS-7240.002.patch > > > Implement TopN container choosing policy for BlockDeletionService. This was > discussed in HDFS-12354. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12370) Ozone: Implement TopN container choosing policy for BlockDeletionService
[ https://issues.apache.org/jira/browse/HDFS-12370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150164#comment-16150164 ] Yiqun Lin commented on HDFS-12370: -- Thanks [~cheersyang] for the review; all the comments make sense to me. Attaching the updated patch. > Ozone: Implement TopN container choosing policy for BlockDeletionService > > > Key: HDFS-12370 > URL: https://issues.apache.org/jira/browse/HDFS-12370 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Attachments: HDFS-12370-HDFS-7240.001.patch, > HDFS-12370-HDFS-7240.002.patch > > > Implement TopN container choosing policy for BlockDeletionService. This was > discussed in HDFS-12354. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
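The HDFS-12354 discussion referenced above is not quoted here, but a "TopN" container-choosing policy can be sketched as ordering candidate containers by their pending-deletion-block count and taking the first N, so the deletion service works on the containers with the largest backlog first. This is an illustrative sketch under that assumption, not the actual patch:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/** Illustrative sketch of a TopN container choosing policy. */
public class TopNPolicy {
  /** Pick the N container names with the most pending deletion blocks. */
  static List<String> chooseTopN(Map<String, Integer> pendingDeletes, int n) {
    return pendingDeletes.entrySet().stream()
        .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
        .limit(n)
        .map(Map.Entry::getKey)
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    // Containers mapped to their pending-deletion-block counts.
    Map<String, Integer> pending = Map.of("c1", 5, "c2", 42, "c3", 17);
    assert chooseTopN(pending, 2).equals(List.of("c2", "c3"));
  }
}
```

Sorting once per service interval keeps the policy simple; a heap-based selection would avoid the full sort if the container count is large.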
[jira] [Commented] (HDFS-12300) Audit-log delegation token related operations
[ https://issues.apache.org/jira/browse/HDFS-12300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150134#comment-16150134 ] Hudson commented on HDFS-12300: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12295 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12295/]) HDFS-12300. Audit-log delegation token related operations. (xiao: rev 1b3b9938cf663c71d2e5d9032fdfb1460bae0d3f) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLoggerWithCommands.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java > Audit-log delegation token related operations > - > > Key: HDFS-12300 > URL: https://issues.apache.org/jira/browse/HDFS-12300 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 0.22.0 >Reporter: Xiao Chen >Assignee: Xiao Chen > Fix For: 3.0.0-beta1 > > Attachments: HDFS-12300.01.patch, HDFS-12300.02.patch > > > When inspecting the code, I found that the following methods in FSNamesystem > are not audit logged: > - getDelegationToken > - renewDelegationToken > - cancelDelegationToken > The audit log itself does have a logTokenTrackingId field to additionally log > some details when a token is used for authentication. > After emailing the community, we should add that. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-1068) Reduce NameNode GC by reusing HdfsFileStatus objects in RPC handlers
[ https://issues.apache.org/jira/browse/HDFS-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HDFS-1068: Attachment: HDFS-1068.00.patch Attaching a very rough PoC patch to demonstrate the idea. > Reduce NameNode GC by reusing HdfsFileStatus objects in RPC handlers > > > Key: HDFS-1068 > URL: https://issues.apache.org/jira/browse/HDFS-1068 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Hairong Kuang >Assignee: Zhe Zhang > Attachments: HDFS-1068.00.patch, Screen Shot 2017-08-31 at 3.58.15 > PM.png > > > In our production clusters, getFileInfo is the most frequent operation that > hits the NameNode, and its frequency is highly correlated with the GC > behavior. HDFS-946 has already reduced the amount of heap/cpu and the number > of temporary objects for each getFileInfo call. Yet another improvement is to > avoid creating an HdfsFileStatus object for each getFileInfo call. Instead, > each RPC handler can have a thread-local HdfsFileStatus object. Each > getFileInfo call simply sets values for all fields of the thread-local > HdfsFileStatus object. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-1068) Reduce NameNode GC by reusing HdfsFileStatus objects in RPC handlers
[ https://issues.apache.org/jira/browse/HDFS-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HDFS-1068: Status: Patch Available (was: Open) > Reduce NameNode GC by reusing HdfsFileStatus objects in RPC handlers > > > Key: HDFS-1068 > URL: https://issues.apache.org/jira/browse/HDFS-1068 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Hairong Kuang >Assignee: Zhe Zhang > Attachments: HDFS-1068.00.patch, Screen Shot 2017-08-31 at 3.58.15 > PM.png > > > In our production clusters, getFileInfo is the most frequent operation that > hits the NameNode, and its frequency is highly correlated with the GC > behavior. HDFS-946 has already reduced the amount of heap/cpu and the number > of temporary objects for each getFileInfo call. Yet another improvement is to > avoid creating an HdfsFileStatus object for each getFileInfo call. Instead, > each RPC handler can have a thread-local HdfsFileStatus object. Each > getFileInfo call simply sets values for all fields of the thread-local > HdfsFileStatus object. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
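The per-handler reuse idea described in HDFS-1068 — each RPC handler thread keeps one mutable status object and overwrites its fields on every call — can be sketched with ThreadLocal. Note the actual HdfsFileStatus is not shown here and the proposal would require a mutable variant of it; the MutableFileStatus type below is a hypothetical stand-in:

```java
/** Sketch of per-thread reuse of a mutable status object (hypothetical type). */
public class StatusReuse {
  /** Stand-in for a mutable variant of HdfsFileStatus. */
  static class MutableFileStatus {
    long length;
    boolean isDir;
    void set(long length, boolean isDir) { this.length = length; this.isDir = isDir; }
  }

  // One instance per RPC handler thread; no per-call allocation.
  private static final ThreadLocal<MutableFileStatus> STATUS =
      ThreadLocal.withInitial(MutableFileStatus::new);

  static MutableFileStatus getFileInfo(long length, boolean isDir) {
    MutableFileStatus s = STATUS.get(); // reused across calls on this thread
    s.set(length, isDir);
    return s;
  }

  public static void main(String[] args) {
    MutableFileStatus a = getFileInfo(10, false);
    MutableFileStatus b = getFileInfo(20, true);
    assert a == b;          // same object reused, not reallocated
    assert b.length == 20;  // fields overwritten on each call
  }
}
```

The trade-off is that the returned object is only valid until the same handler thread serves its next call, so callers must copy or serialize it before the handler returns.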
[jira] [Commented] (HDFS-12380) Simplify dataQueue.wait condition logical operation in DataStreamer::run()
[ https://issues.apache.org/jira/browse/HDFS-12380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150116#comment-16150116 ] Hudson commented on HDFS-12380: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12294 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12294/]) HDFS-12380. Simplify dataQueue.wait condition logical operation in (liuml07: rev 36f33a1efb35e9f6986516499b54fdfa38fac2a1) * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java > Simplify dataQueue.wait condition logical operation in DataStreamer::run() > -- > > Key: HDFS-12380 > URL: https://issues.apache.org/jira/browse/HDFS-12380 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Affects Versions: 3.0.0-beta1 > Environment: cluster: 3 nodes > os:(Red Hat 2.6.33.20, Red Hat 3.10.0-514.6.1.el7.x86_64, > Ubuntu4.4.0-31-generic) > hadoop version: hadoop-3.0.0-beta1 > operation: Code review >Reporter: liaoyuxiangqin >Assignee: liaoyuxiangqin > Fix For: 3.0.0-beta1 > > Attachments: HDFS-12380.001.patch, HDFS-12380.002.patch > > Original Estimate: 12h > Remaining Estimate: 12h > > When I read the run() of the DataStreamer class in hdfs-client, I found that > the following condition could be simplified to be easier to understand. > {code:title=DataStreamer.java|borderStyle=solid} > // wait for a packet to be sent. > long now = Time.monotonicNow(); > while ((!shouldStop() && dataQueue.size() == 0 && > (stage != BlockConstructionStage.DATA_STREAMING || > stage == BlockConstructionStage.DATA_STREAMING && > now - lastPacket < halfSocketTimeout)) || doSleep ) { > {code} > As the code segment above shows, both stage != DATA_STREAMING and > stage == DATA_STREAMING appear in the same condition, which makes the logic > hard to follow; it should be simplified.
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12383) Re-encryption updater should handle canceled tasks better
[ https://issues.apache.org/jira/browse/HDFS-12383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150108#comment-16150108 ] Xiao Chen commented on HDFS-12383: -- Test failures not related to this patch. Not sure what's going on with pre-commit these days, we seem to get a whole lot of failures. Will commit this in 24 hours. Thanks [~jojochuang] for reviewing! > Re-encryption updater should handle canceled tasks better > - > > Key: HDFS-12383 > URL: https://issues.apache.org/jira/browse/HDFS-12383 > Project: Hadoop HDFS > Issue Type: Bug > Components: encryption >Affects Versions: 3.0.0-beta1 >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HDFS-12383.01.patch, HDFS-12383.02.patch > > > Seen an instance where the re-encryption updater exited due to an exception, > and later tasks no longer executes. Logs below: > {noformat} > 2017-08-31 09:54:08,104 INFO > org.apache.hadoop.hdfs.server.namenode.EncryptionZoneManager: Zone > /tmp/encryption-zone-3(16819) is submitted for re-encryption. > 2017-08-31 09:54:08,104 INFO > org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler: Executing > re-encrypt commands on zone 16819. Current zones:[zone:16787 state:Completed > lastProcessed:null filesReencrypted:1 fileReencryptionFailures:0][zone:16813 > state:Completed lastProcessed:null filesReencrypted:1 > fileReencryptionFailures:0][zone:16819 state:Submitted lastProcessed:null > filesReencrypted:0 fileReencryptionFailures:0] > 2017-08-31 09:54:08,105 INFO > org.apache.hadoop.hdfs.protocol.ReencryptionStatus: Zone 16819 starts > re-encryption processing > 2017-08-31 09:54:08,105 INFO > org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler: Re-encrypting > zone /tmp/encryption-zone-3(id=16819) > 2017-08-31 09:54:08,105 INFO > org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler: Submitted batch > (start:/tmp/encryption-zone-3/data1, size:1) of zone 16819 to re-encrypt. 
> 2017-08-31 09:54:08,105 INFO > org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler: Submission > completed of zone 16819 for re-encryption. > 2017-08-31 09:54:08,105 INFO > org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler: Processing > batched re-encryption for zone 16819, batch size 1, > start:/tmp/encryption-zone-3/data1 > 2017-08-31 09:54:08,979 INFO BlockStateChange: BLOCK* BlockManager: ask > 172.26.1.71:20002 to delete [blk_1073742291_1467] > 2017-08-31 09:54:18,295 INFO > org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater: Cancelling 1 > re-encryption tasks > 2017-08-31 09:54:18,295 INFO > org.apache.hadoop.hdfs.server.namenode.EncryptionZoneManager: Cancelled zone > /tmp/encryption-zone-3(16819) for re-encryption. > 2017-08-31 09:54:18,295 INFO > org.apache.hadoop.hdfs.protocol.ReencryptionStatus: Zone 16819 completed > re-encryption. > 2017-08-31 09:54:18,296 INFO > org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler: Completed > re-encrypting one batch of 1 edeks from KMS, time consumed: 10.19 s, start: > /tmp/encryption-zone-3/data1. > 2017-08-31 09:54:18,296 ERROR > org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater: Re-encryption > updater thread exiting. > java.util.concurrent.CancellationException > at java.util.concurrent.FutureTask.report(FutureTask.java:121) > at java.util.concurrent.FutureTask.get(FutureTask.java:192) > at > org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater.takeAndProcessTasks(ReencryptionUpdater.java:404) > at > org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater.run(ReencryptionUpdater.java:250) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {noformat} > Updater should be fixed to handle canceled tasks better. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
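The stack trace above shows the updater thread dying because Future.get() on a cancelled task throws CancellationException, which propagates out of the processing loop. A common fix shape for a long-running updater (illustrative only, not the committed HDFS-12383 patch) is to catch the cancellation, skip that task, and keep the loop alive:

```java
import java.util.concurrent.CancellationException;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/** Sketch: a task-draining loop that survives cancelled tasks. */
public class CancelSafeLoop {
  /** Drain taskCount completed tasks; cancelled/failed tasks are skipped, not fatal. */
  static int processAll(CompletionService<Integer> cs, int taskCount)
      throws InterruptedException {
    int processed = 0;
    for (int i = 0; i < taskCount; i++) {
      Future<Integer> f = cs.take();
      try {
        f.get();           // previously this could propagate and kill the thread
        processed++;
      } catch (CancellationException e) {
        // Task was cancelled (e.g. the zone's re-encryption was cancelled): skip it.
      } catch (ExecutionException e) {
        // Task failed: log and continue in a long-running updater.
      }
    }
    return processed;
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(1);
    CompletionService<Integer> cs = new ExecutorCompletionService<>(pool);
    Future<Integer> doomed = cs.submit(() -> { Thread.sleep(5000); return 1; });
    doomed.cancel(true);                 // simulates the cancellation above
    cs.submit(() -> 2);                  // a normal task
    assert processAll(cs, 2) == 1;       // loop survives the cancelled task
    pool.shutdownNow();
  }
}
```

A cancelled FutureTask still counts as completed, so it is delivered by the CompletionService; the loop must therefore expect CancellationException from get() rather than treating it as fatal.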
[jira] [Updated] (HDFS-12300) Audit-log delegation token related operations
[ https://issues.apache.org/jira/browse/HDFS-12300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-12300: - Fix Version/s: 3.0.0-beta1 > Audit-log delegation token related operations > - > > Key: HDFS-12300 > URL: https://issues.apache.org/jira/browse/HDFS-12300 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 0.22.0 >Reporter: Xiao Chen >Assignee: Xiao Chen > Fix For: 3.0.0-beta1 > > Attachments: HDFS-12300.01.patch, HDFS-12300.02.patch > > > When inspecting the code, I found that the following methods in FSNamesystem > are not audit logged: > - getDelegationToken > - renewDelegationToken > - cancelDelegationToken > The audit log itself does have a logTokenTrackingId field to additionally log > some details when a token is used for authentication. > After emailing the community, we should add that. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12300) Audit-log delegation token related operations
[ https://issues.apache.org/jira/browse/HDFS-12300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150104#comment-16150104 ] Xiao Chen commented on HDFS-12300: -- Committed this to trunk. Thanks Ravi for the review. What do people think about branch-2? The cherry pick isn't clean, but I can put up a branch-2 patch if that's considered valuable. > Audit-log delegation token related operations > - > > Key: HDFS-12300 > URL: https://issues.apache.org/jira/browse/HDFS-12300 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 0.22.0 >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HDFS-12300.01.patch, HDFS-12300.02.patch > > > When inspecting the code, I found that the following methods in FSNamesystem > are not audit logged: > - getDelegationToken > - renewDelegationToken > - cancelDelegationToken > The audit log itself does have a logTokenTrackingId field to additionally log > some details when a token is used for authentication. > After emailing the community, we should add that. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12380) Simplify dataQueue.wait condition logical operation in DataStreamer::run()
[ https://issues.apache.org/jira/browse/HDFS-12380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HDFS-12380: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-beta1 Status: Resolved (was: Patch Available) +1 Committed to {{trunk}} branch. Thanks for your contribution [~liaoyuxiangqin]. Thanks for your review [~shahrs87]. > Simplify dataQueue.wait condition logical operation in DataStreamer::run() > -- > > Key: HDFS-12380 > URL: https://issues.apache.org/jira/browse/HDFS-12380 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Affects Versions: 3.0.0-beta1 > Environment: cluster: 3 nodes > os:(Red Hat 2.6.33.20, Red Hat 3.10.0-514.6.1.el7.x86_64, > Ubuntu4.4.0-31-generic) > hadoop version: hadoop-3.0.0-beta1 > operation: Code review >Reporter: liaoyuxiangqin >Assignee: liaoyuxiangqin > Fix For: 3.0.0-beta1 > > Attachments: HDFS-12380.001.patch, HDFS-12380.002.patch > > Original Estimate: 12h > Remaining Estimate: 12h > > When I read the run() of the DataStreamer class in hdfs-client, I found that > the following condition could be simplified to be easier to understand. > {code:title=DataStreamer.java|borderStyle=solid} > // wait for a packet to be sent. > long now = Time.monotonicNow(); > while ((!shouldStop() && dataQueue.size() == 0 && > (stage != BlockConstructionStage.DATA_STREAMING || > stage == BlockConstructionStage.DATA_STREAMING && > now - lastPacket < halfSocketTimeout)) || doSleep ) { > {code} > As the code segment above shows, both stage != DATA_STREAMING and > stage == DATA_STREAMING appear in the same condition, which makes the logic > hard to follow; it should be simplified. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12380) Simplify dataQueue.wait condition logical operation in DataStreamer::run()
[ https://issues.apache.org/jira/browse/HDFS-12380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HDFS-12380: - Summary: Simplify dataQueue.wait condition logical operation in DataStreamer::run() (was: Simplify dataQueue.wait condition logical operation) > Simplify dataQueue.wait condition logical operation in DataStreamer::run() > -- > > Key: HDFS-12380 > URL: https://issues.apache.org/jira/browse/HDFS-12380 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Affects Versions: 3.0.0-beta1 > Environment: cluster: 3 nodes > os:(Red Hat 2.6.33.20, Red Hat 3.10.0-514.6.1.el7.x86_64, > Ubuntu4.4.0-31-generic) > hadoop version: hadoop-3.0.0-beta1 > operation: Code review >Reporter: liaoyuxiangqin >Assignee: liaoyuxiangqin > Attachments: HDFS-12380.001.patch, HDFS-12380.002.patch > > Original Estimate: 12h > Remaining Estimate: 12h > > When I read the run() of the DataStreamer class in hdfs-client, I found that > the following condition could be simplified to be easier to understand. > {code:title=DataStreamer.java|borderStyle=solid} > // wait for a packet to be sent. > long now = Time.monotonicNow(); > while ((!shouldStop() && dataQueue.size() == 0 && > (stage != BlockConstructionStage.DATA_STREAMING || > stage == BlockConstructionStage.DATA_STREAMING && > now - lastPacket < halfSocketTimeout)) || doSleep ) { > {code} > As the code segment above shows, both stage != DATA_STREAMING and > stage == DATA_STREAMING appear in the same condition, which makes the logic > hard to follow; it should be simplified. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
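The simplification behind HDFS-12380 is pure boolean algebra: with A = (stage != DATA_STREAMING) and B = (now - lastPacket < halfSocketTimeout), the quoted condition contains A || (!A && B), which is equivalent to A || B. The committed patch itself is not shown here; this sketch just verifies the equivalence over the full truth table:

```java
/** Truth-table check that A || (!A && B) equals A || B, the
 *  simplification applied to the dataQueue.wait condition. */
public class CondSimplify {
  static boolean original(boolean notStreaming, boolean withinTimeout) {
    // (stage != DATA_STREAMING ||
    //  stage == DATA_STREAMING && now - lastPacket < halfSocketTimeout)
    return notStreaming || (!notStreaming && withinTimeout);
  }

  static boolean simplified(boolean notStreaming, boolean withinTimeout) {
    // (stage != DATA_STREAMING || now - lastPacket < halfSocketTimeout)
    return notStreaming || withinTimeout;
  }

  public static void main(String[] args) {
    for (boolean a : new boolean[] {false, true}) {
      for (boolean b : new boolean[] {false, true}) {
        assert original(a, b) == simplified(a, b); // identical on all 4 inputs
      }
    }
  }
}
```

Dropping the redundant `stage == DATA_STREAMING` conjunct removes the confusing appearance of both the test and its negation in one expression without changing the wait behavior.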
[jira] [Commented] (HDFS-12383) Re-encryption updater should handle canceled tasks better
[ https://issues.apache.org/jira/browse/HDFS-12383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150098#comment-16150098 ] Hadoop QA commented on HDFS-12383: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 
57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}125m 1s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}153m 35s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 | | | hadoop.hdfs.TestClientProtocolForPipelineRecovery | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 | | | hadoop.hdfs.TestEncryptedTransfer | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 | | | hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 | | | hadoop.hdfs.TestDFSInputStream | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 | | | hadoop.hdfs.TestQuota | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 | | |
[jira] [Commented] (HDFS-12363) Possible NPE in BlockManager$StorageInfoDefragmenter#scanAndCompactStorages
[ https://issues.apache.org/jira/browse/HDFS-12363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150097#comment-16150097 ] Xiao Chen commented on HDFS-12363: -- Thanks a lot [~liuml07] for the review and commit! Also thanks [~jojochuang] for reviewing. > Possible NPE in BlockManager$StorageInfoDefragmenter#scanAndCompactStorages > --- > > Key: HDFS-12363 > URL: https://issues.apache.org/jira/browse/HDFS-12363 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.6.0 >Reporter: Xiao Chen >Assignee: Xiao Chen > Fix For: 3.0.0-beta1 > > Attachments: HDFS-12363.01.patch, HDFS-12363.02.patch > > > Saw NN going down with NPE below: > {noformat} > ERROR org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Thread > received Runtime exception. > java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$StorageInfoDefragmenter.scanAndCompactStorages(BlockManager.java:3897) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$StorageInfoDefragmenter.run(BlockManager.java:3852) > at java.lang.Thread.run(Thread.java:745) > 2017-08-21 22:14:05,303 INFO org.apache.hadoop.util.ExitUtil: Exiting with > status 1 > 2017-08-21 22:14:05,313 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: > {noformat} > In that version, {{BlockManager}} code is: > {code} > 3896 try { > 3897 DatanodeStorageInfo storage = datanodeManager. > 3898 getDatanode(datanodesAndStorages.get(i)). > 3899getStorageInfo(datanodesAndStorages.get(i + 1)); > 3900if (storage != null) { > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
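The NPE in the quoted BlockManager snippet comes from chaining getDatanode(...) straight into getStorageInfo(...): if the datanode was removed between snapshotting the datanode/storage list and scanning it, getDatanode returns null and the chained call throws. The fix shape is to null-check the intermediate result. A self-contained sketch of that pattern, using plain maps in place of the real DatanodeManager (hypothetical names):

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of the null-safe lookup pattern that avoids the chained-call NPE. */
public class NullSafeLookup {
  static String getStorageInfo(Map<String, Map<String, String>> nodes,
                               String nodeId, String storageId) {
    Map<String, String> node = nodes.get(nodeId); // null if the node was removed
    if (node == null) {
      return null; // previously: node.get(storageId) would throw NPE here
    }
    return node.get(storageId);
  }

  public static void main(String[] args) {
    Map<String, Map<String, String>> nodes = new HashMap<>();
    nodes.put("dn1", Map.of("s1", "DISK"));
    assert "DISK".equals(getStorageInfo(nodes, "dn1", "s1"));
    assert getStorageInfo(nodes, "gone", "s1") == null; // removed node: no NPE
  }
}
```

Because the defragmenter runs on its own thread against a snapshot that can go stale, every lookup taken from the snapshot has to tolerate the underlying entry disappearing.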
[jira] [Commented] (HDFS-12300) Audit-log delegation token related operations
[ https://issues.apache.org/jira/browse/HDFS-12300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150095#comment-16150095 ] Hadoop QA commented on HDFS-12300: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 
57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 250 unchanged - 1 fixed = 250 total (was 251) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 54s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}121m 12s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestEncryptedTransfer | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.TestReadStripedFileWithDecoding | | | hadoop.hdfs.server.datanode.TestDataNodeUUID | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 | | Timed out junit tests | org.apache.hadoop.hdfs.TestReplication | | | org.apache.hadoop.hdfs.TestWriteReadStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12300 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12884824/HDFS-12300.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 09f0dbcae0c4 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 27359b7 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/20962/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/20962/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output |
[jira] [Commented] (HDFS-12376) Enable JournalNode Sync by default
[ https://issues.apache.org/jira/browse/HDFS-12376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150089#comment-16150089 ] Mingliang Liu commented on HDFS-12376: -- I've been watching the progress of the related work, and I'm +1 on enabling JN sync by default. +1 on the patch. > Enable JournalNode Sync by default > -- > > Key: HDFS-12376 > URL: https://issues.apache.org/jira/browse/HDFS-12376 > Project: Hadoop HDFS > Issue Type: Task > Components: hdfs >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: HDFS-12376.001.patch > > > All the tasks related to Journal Node sync (HDFS-4025) - HDFS-11448, > HDFS-11877, HDFS-11878, HDFS-11879, HDFS-12224, HDFS-12356 and HDFS-12358 are > resolved. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
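For reference, a hedged sketch of the hdfs-site.xml settings whose default this task flips (property names taken from the HDFS-4025 line of work; verify against your release's hdfs-default.xml before relying on them):

```xml
<!-- Sketch only: enable JournalNode syncing of missing or out-of-date
     edit log segments from other JournalNodes. HDFS-12376 proposes making
     "true" the default, so earlier releases must set it explicitly. -->
<property>
  <name>dfs.journalnode.enable.sync</name>
  <value>true</value>
</property>
<property>
  <!-- Interval in milliseconds between sync attempts (assumed default shown). -->
  <name>dfs.journalnode.sync.interval</name>
  <value>120000</value>
</property>
```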
[jira] [Commented] (HDFS-12374) Document the missing -ns option of haadmin.
[ https://issues.apache.org/jira/browse/HDFS-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150086#comment-16150086 ] Mingliang Liu commented on HDFS-12374: -- Lovely as the new format is, I think this JIRA's major scope is invalid. The reason is that {{haadmin}} was not meant to be HDFS-specific, while the subclass {{DFSHAAdmin}} is. Option {{-ns}} is part of {{DFSHAAdmin}}, not of the generic {{haadmin}} command. The current documentation for {{DFSHAAdmin}} is correct if you check the version from 2.7. A few related JIRAs can be checked at [HDFS-8067], [HDFS-7324] and [HDFS-7808]. [~brahmareddy] and [~szetszwo], would you like to chime in? Thanks, > Document the missing -ns option of haadmin. > --- > > Key: HDFS-12374 > URL: https://issues.apache.org/jira/browse/HDFS-12374 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation, federation >Affects Versions: 3.0.0-alpha4 >Reporter: Wenxin He >Assignee: Wenxin He >Priority: Minor > > Document the missing -ns option of haadmin in HDFSCommands.md, > HDFSHighAvailabilityWithQJM.md and HDFSHighAvailabilityWithNFS.md. > Before patch: > {noformat} > Usage: > hdfs haadmin -transitionToActive <serviceId> [--forceactive] > hdfs haadmin -transitionToStandby <serviceId> > hdfs haadmin -failover [--forcefence] [--forceactive] <serviceId> <serviceId> > hdfs haadmin -getServiceState <serviceId> > hdfs haadmin -getAllServiceState > hdfs haadmin -checkHealth <serviceId> > hdfs haadmin -help <command> > {noformat} > After patch: > {noformat} > Usage: haadmin [-ns <nameserviceId>] > [-transitionToActive [--forceactive] <serviceId>] > [-transitionToStandby <serviceId>] > [-failover [--forcefence] [--forceactive] <serviceId> <serviceId>] > [-getServiceState <serviceId>] > [-getAllServiceState] > [-checkHealth <serviceId>] > [-help <command>] > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12235) Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions
[ https://issues.apache.org/jira/browse/HDFS-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150084#comment-16150084 ] Hadoop QA commented on HDFS-12235: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 34s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 18s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 0s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 47s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 43s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 43s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | 
{color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 13 unchanged - 1 fixed = 13 total (was 14) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 47s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 34s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}153m 26s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestBlockStoragePolicy | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 | | | hadoop.hdfs.server.namenode.ha.TestHAAppend | | | hadoop.ozone.web.client.TestKeys | | | hadoop.ozone.ksm.TestKSMSQLCli | | Timed out junit tests | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-12235 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12884859/HDFS-12235-HDFS-7240.005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc xml | | uname | Linux 4a1d7a5211ff 3.13.0-123-generic #172-Ubuntu SMP
[jira] [Commented] (HDFS-12383) Re-encryption updater should handle canceled tasks better
[ https://issues.apache.org/jira/browse/HDFS-12383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150082#comment-16150082 ] Wei-Chiu Chuang commented on HDFS-12383: +1 pending Jenkins. thanks! > Re-encryption updater should handle canceled tasks better > - > > Key: HDFS-12383 > URL: https://issues.apache.org/jira/browse/HDFS-12383 > Project: Hadoop HDFS > Issue Type: Bug > Components: encryption >Affects Versions: 3.0.0-beta1 >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HDFS-12383.01.patch, HDFS-12383.02.patch > > > We saw an instance where the re-encryption updater exited due to an exception, > and later tasks no longer execute. Logs below: > {noformat} > 2017-08-31 09:54:08,104 INFO > org.apache.hadoop.hdfs.server.namenode.EncryptionZoneManager: Zone > /tmp/encryption-zone-3(16819) is submitted for re-encryption. > 2017-08-31 09:54:08,104 INFO > org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler: Executing > re-encrypt commands on zone 16819. Current zones:[zone:16787 state:Completed > lastProcessed:null filesReencrypted:1 fileReencryptionFailures:0][zone:16813 > state:Completed lastProcessed:null filesReencrypted:1 > fileReencryptionFailures:0][zone:16819 state:Submitted lastProcessed:null > filesReencrypted:0 fileReencryptionFailures:0] > 2017-08-31 09:54:08,105 INFO > org.apache.hadoop.hdfs.protocol.ReencryptionStatus: Zone 16819 starts > re-encryption processing > 2017-08-31 09:54:08,105 INFO > org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler: Re-encrypting > zone /tmp/encryption-zone-3(id=16819) > 2017-08-31 09:54:08,105 INFO > org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler: Submitted batch > (start:/tmp/encryption-zone-3/data1, size:1) of zone 16819 to re-encrypt. > 2017-08-31 09:54:08,105 INFO > org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler: Submission > completed of zone 16819 for re-encryption. 
> 2017-08-31 09:54:08,105 INFO > org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler: Processing > batched re-encryption for zone 16819, batch size 1, > start:/tmp/encryption-zone-3/data1 > 2017-08-31 09:54:08,979 INFO BlockStateChange: BLOCK* BlockManager: ask > 172.26.1.71:20002 to delete [blk_1073742291_1467] > 2017-08-31 09:54:18,295 INFO > org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater: Cancelling 1 > re-encryption tasks > 2017-08-31 09:54:18,295 INFO > org.apache.hadoop.hdfs.server.namenode.EncryptionZoneManager: Cancelled zone > /tmp/encryption-zone-3(16819) for re-encryption. > 2017-08-31 09:54:18,295 INFO > org.apache.hadoop.hdfs.protocol.ReencryptionStatus: Zone 16819 completed > re-encryption. > 2017-08-31 09:54:18,296 INFO > org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler: Completed > re-encrypting one batch of 1 edeks from KMS, time consumed: 10.19 s, start: > /tmp/encryption-zone-3/data1. > 2017-08-31 09:54:18,296 ERROR > org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater: Re-encryption > updater thread exiting. > java.util.concurrent.CancellationException > at java.util.concurrent.FutureTask.report(FutureTask.java:121) > at java.util.concurrent.FutureTask.get(FutureTask.java:192) > at > org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater.takeAndProcessTasks(ReencryptionUpdater.java:404) > at > org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater.run(ReencryptionUpdater.java:250) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {noformat} > Updater should be fixed to handle canceled tasks better. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
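The {{CancellationException}} in the stack trace comes from calling {{FutureTask.get()}} on a task that was cancelled. A minimal, self-contained Java sketch of that failure mode (class and variable names are illustrative, not the actual ReencryptionUpdater code); catching the exception is what lets a consumer thread skip the cancelled task instead of exiting:

```java
import java.util.concurrent.CancellationException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CancelledTaskDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // A long-running task, standing in for one batched re-encryption task.
        Future<?> task = pool.submit(() -> {
            try {
                Thread.sleep(10_000);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
        });
        task.cancel(true); // what cancelling a zone does to its pending tasks
        try {
            task.get(); // throws CancellationException, as in the log above
        } catch (CancellationException ce) {
            // A robust consumer logs and skips the cancelled task
            // rather than letting the thread die.
            System.out.println("task cancelled; skipping");
        } finally {
            pool.shutdownNow();
        }
    }
}
```

{{Future.get()}} is specified to throw {{CancellationException}} once {{cancel()}} has succeeded, so the consumer loop must treat it as an expected outcome, not a fatal error.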
[jira] [Commented] (HDFS-12384) Fixing compilation issue with BanDuplicateClasses
[ https://issues.apache.org/jira/browse/HDFS-12384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150081#comment-16150081 ] Sean Busbey commented on HDFS-12384: Ah. The correct solution is to update the exclusions that determine which shaded artifact the new classes go into; they should only go into one. If a downstream client needs the curator classes in order to interact with HDFS, then they should only be in hadoop-client-runtime (which would mean updating the pom for hadoop-client-minicluster to exclude them). You can see an example of excluding all of curator-client from hadoop-client-minicluster: https://github.com/apache/hadoop/blob/ce797a170669524224cfeaaf70647047e7626816/hadoop-client-modules/hadoop-client-minicluster/pom.xml#L137 If you need to exclude just some specific classes, take a look at the set of filters: https://github.com/apache/hadoop/blob/ce797a170669524224cfeaaf70647047e7626816/hadoop-client-modules/hadoop-client-minicluster/pom.xml#L603 > Fixing compilation issue with BanDuplicateClasses > - > > Key: HDFS-12384 > URL: https://issues.apache.org/jira/browse/HDFS-12384 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri > Fix For: HDFS-10467 > > Attachments: HDFS-12384-HDFS-10467-000.patch > > > {{hadoop-client-modules}} is failing because of dependences added by > {{CuratorManager}}: > {code} > [INFO] Adding ignore: * > [WARNING] Rule 1: org.apache.maven.plugins.enforcer.BanDuplicateClasses > failed with message: > Duplicate classes found: > Found in: > > org.apache.hadoop:hadoop-client-minicluster:jar:3.0.0-beta1-SNAPSHOT:compile > org.apache.hadoop:hadoop-client-runtime:jar:3.0.0-beta1-SNAPSHOT:compile > Duplicate classes: > > org/apache/hadoop/shaded/org/apache/curator/framework/api/DeleteBuilder.class > > org/apache/hadoop/shaded/org/apache/curator/framework/CuratorFramework.class > {code} -- This message was sent by Atlassian JIRA 
(v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
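A hedged sketch of the dependency-level exclusion Sean describes for hadoop-client-minicluster's pom, modeled on the linked curator-client example (the enclosing dependency and coordinates are inferred from the error message, not copied from the actual pom):

```xml
<!-- Sketch only: keep curator-framework classes out of the
     hadoop-client-minicluster shaded jar, so the duplicated classes
     (e.g. CuratorFramework) ship only in hadoop-client-runtime. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-minicluster</artifactId>
  <optional>true</optional>
  <exclusions>
    <exclusion>
      <groupId>org.apache.curator</groupId>
      <artifactId>curator-framework</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Downstream clients that need Curator to talk to HDFS would then pick it up from hadoop-client-runtime, satisfying the BanDuplicateClasses enforcer rule.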