[jira] [Commented] (HDFS-10862) Typos in 4 log messages
[ https://issues.apache.org/jira/browse/HDFS-10862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15495589#comment-15495589 ] Akira Ajisaka commented on HDFS-10862: -- LGTM, +1. The test failures look unrelated. > Typos in 4 log messages > --- > > Key: HDFS-10862 > URL: https://issues.apache.org/jira/browse/HDFS-10862 > Project: Hadoop HDFS > Issue Type: Bug > Reporter: Mehran Hassani > Priority: Trivial > Labels: newbie > Attachments: HDFS-10862.001.patch > > > I am conducting research on log-related bugs. I tried to make a tool to fix > repetitive yet simple patterns of bugs that are related to logs. Typos in log > messages are one of the recurring bugs. Therefore, I made a tool to find typos > in log statements. During my experiments, I managed to find the following > typos in Hadoop HDFS: > In file > /hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java, > FsDatasetImpl.LOG.info("The volume " + v + " is closed while " +"addng > replicas ignored."), > addng should be adding > In file > /hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java, > NameNode.LOG.info("Caching file names occuring more than " + threshold+ " > times"), > occuring should be occurring > In file > /hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java, > LOG.info("NNStorage.attemptRestoreRemovedStorage: check removed(failed) > "+"storarge. removedStorages size = " + removedStorageDirs.size()), > storarge should be storage > In file > /hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java, > LOG.info("Partical read. 
Asked offset: " + offset + " count: " + count+ " > and read back: " + readCount + " file size: "+ attrs.getSize()), > Partical should be Partial -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
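For reference, the four statements after the fixes described above would read as follows. This is an illustrative sketch, not the committed diff: the class, logger, and local variables (`v`, `threshold`, `removedSize`, `offset`, `count`, `readCount`, `size`) are hypothetical stand-ins for the fields elided by the `+` concatenation in the quoted snippets.

```java
// Illustrative only: plain stdout stands in for the HDFS loggers, and the
// variables are hypothetical stand-ins for the values used in the real code.
public class FixedLogMessages {
    public static void main(String[] args) {
        Object v = "/data/1";
        int threshold = 10;
        int removedSize = 2;
        long offset = 0, count = 4096, readCount = 1024, size = 8192;
        // FsVolumeList.java: "addng" -> "adding"
        System.out.println("The volume " + v + " is closed while "
            + "adding replicas ignored.");
        // FSDirectory.java: "occuring" -> "occurring"
        System.out.println("Caching file names occurring more than "
            + threshold + " times");
        // NNStorage.java: "storarge" -> "storage"
        System.out.println("NNStorage.attemptRestoreRemovedStorage: check removed(failed) "
            + "storage. removedStorages size = " + removedSize);
        // RpcProgramNfs3.java: "Partical" -> "Partial"
        System.out.println("Partial read. Asked offset: " + offset + " count: " + count
            + " and read back: " + readCount + " file size: " + size);
    }
}
```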
[jira] [Commented] (HDFS-10862) Typos in 4 log messages
[ https://issues.apache.org/jira/browse/HDFS-10862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15495591#comment-15495591 ] Yiqun Lin commented on HDFS-10862: -- Thanks [~MehranHassani] for working on this. Your patch looks good to me, +1. One of the failures, {{TestPendingInvalidateBlock}}, is tracked by HDFS-10426, and the other one is unrelated.
[jira] [Updated] (HDFS-10862) Typos in 4 log messages
[ https://issues.apache.org/jira/browse/HDFS-10862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-10862: - Assignee: Mehran Hassani
[jira] [Updated] (HDFS-10862) Typos in 4 log messages
[ https://issues.apache.org/jira/browse/HDFS-10862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-10862: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha2 2.8.0 Status: Resolved (was: Patch Available) Committed this to trunk, branch-2, and branch-2.8. Thanks [~MehranHassani] for the contribution and thanks [~linyiqun] for the review.
[jira] [Commented] (HDFS-10862) Typos in 4 log messages
[ https://issues.apache.org/jira/browse/HDFS-10862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15495617#comment-15495617 ] Hudson commented on HDFS-10862: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10447 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10447/]) HDFS-10862. Typos in 4 log messages. Contributed by Mehran Hassani. (aajisaka: rev b09a03cd7d26cf96ec26a81ba11f00778241eb3e) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java * (edit) hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
[jira] [Updated] (HDFS-10866) Fix Eclipse Java 8 compile errors related to generic parameters.
[ https://issues.apache.org/jira/browse/HDFS-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-10866: - Attachment: IntelliJ.png I tried your patch and got the following error in IntelliJ IDEA 2016.2.4. !IntelliJ.png! How about using {{verify(fs, never()).setDelegationToken((Token)any())}} instead? > Fix Eclipse Java 8 compile errors related to generic parameters. > > > Key: HDFS-10866 > URL: https://issues.apache.org/jira/browse/HDFS-10866 > Project: Hadoop HDFS > Issue Type: Bug > Affects Versions: 3.0.0-alpha1 > Reporter: Konstantin Shvachko > Assignee: Konstantin Shvachko > Attachments: HDFS-10866.01.patch, IntelliJ.png > > > Compilation with Java 8 in Eclipse returns errors, which are related to the > use of generics. This does not affect command-line Maven builds and is > confirmed to be a [bug in > Eclipse|https://bugs.eclipse.org/bugs/show_bug.cgi?id=497905#c1]. The fix is > scheduled only for the next release, so all of us using Eclipse now will see > that error unless we fix it in the Hadoop code, which makes sense to me since > it appears as a warning in any case.
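The shape of the workaround can be illustrated without Mockito. In the sketch below, `any()` and `setDelegationToken` are hypothetical stand-ins for the calls in the patch, chosen only to reproduce the inference pattern: a generic method whose type parameter is inferred solely from the target context, which some Eclipse JDT versions reject under Java 8 even though javac accepts it.

```java
// Sketch (assumption: simplified stand-ins, not the real Mockito/FileSystem APIs).
import java.util.List;

public class InferenceDemo {
    // Similar in shape to Mockito's any(): T is inferred only from the
    // context in which the call appears.
    static <T> T any() {
        return null;
    }

    // Stand-in for the mocked method taking a generic parameter.
    static void setDelegationToken(List<String> token) {
    }

    public static void main(String[] args) {
        // The plain call `setDelegationToken(any())` is what the affected
        // Eclipse versions fail to infer.
        // Workaround 1: raw-type cast, as suggested in the comment above
        // (compiles with an unchecked warning under both compilers).
        setDelegationToken((List) any());
        // Workaround 2: an explicit type witness avoids the raw type entirely.
        setDelegationToken(InferenceDemo.<List<String>>any());
        System.out.println("compiles under both javac and Eclipse JDT");
    }
}
```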
[jira] [Commented] (HDFS-10489) Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones
[ https://issues.apache.org/jira/browse/HDFS-10489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15495813#comment-15495813 ] Hadoop QA commented on HDFS-10489: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 12 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 22s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 46s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | 
{color:green} 3m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 38s{color} | {color:green} root: The patch generated 0 new + 808 unchanged - 6 fixed = 808 total (was 814) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 1s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 7s{color} | {color:green} hadoop-kms in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 59s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 32s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 36s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 49s{color} | {color:green} hadoop-hdfs-nfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}139m 21s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestFileCreationDelete | | | hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits | | | hadoop.tools.TestHdfsConfigFields | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-10489 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12828800/HDFS-10489.05.patch | | Opti
[jira] [Created] (HDFS-10867) Block Bit Field Allocation of Provided Storage
Ewan Higgs created HDFS-10867: - Summary: Block Bit Field Allocation of Provided Storage Key: HDFS-10867 URL: https://issues.apache.org/jira/browse/HDFS-10867 Project: Hadoop HDFS Issue Type: Sub-task Components: hdfs Reporter: Ewan Higgs We wish to design and implement the following related features for provided storage: # Dynamic mounting of provided storage within a Namenode (mount, unmount) # Mount multiple provided storage systems on a single Namenode. # Support updates to the provided storage system without having to regenerate an fsimage. A mount in the namespace addresses a corresponding set of block data. When unmounted, any block data associated with the mount becomes invalid and (eventually) unaddressable in HDFS. As with erasure-coded blocks, efficient unmounting requires that all blocks with that attribute be identifiable by the block management layer. In this subtask, we focus on changes and conventions to the block management layer. Namespace operations are covered in a separate subtask.
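One way to make all blocks of a mount identifiable to the block manager is to partition the 64-bit block ID into bit fields, analogous to how striped (erasure-coded) block IDs are flagged. The sketch below is purely hypothetical: the class name, bit widths, and layout are assumptions for illustration, not the design in the attached document.

```java
// Hypothetical sketch (bit layout is an assumption, not the HDFS-10867 design):
// reserve the top bit to flag provided storage and the next 15 bits for a
// mount ID, leaving 48 bits for the per-mount block sequence number. All
// blocks of a mount can then be found by masking, which is what would make
// unmount (bulk invalidation) efficient.
public class ProvidedBlockId {
    private static final int MOUNT_BITS = 15;
    private static final int SEQ_BITS = 48;
    private static final long SEQ_MASK = (1L << SEQ_BITS) - 1;
    private static final long PROVIDED_FLAG = 1L << 63;

    static long encode(int mountId, long seq) {
        return PROVIDED_FLAG | ((long) mountId << SEQ_BITS) | (seq & SEQ_MASK);
    }

    static boolean isProvided(long blockId) {
        return (blockId & PROVIDED_FLAG) != 0;
    }

    static int mountOf(long blockId) {
        return (int) ((blockId >>> SEQ_BITS) & ((1 << MOUNT_BITS) - 1));
    }

    public static void main(String[] args) {
        long id = encode(7, 12345L);
        System.out.println(isProvided(id)); // prints true
        System.out.println(mountOf(id));    // prints 7
    }
}
```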
[jira] [Updated] (HDFS-10867) Block Bit Field Allocation of Provided Storage
[ https://issues.apache.org/jira/browse/HDFS-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ewan Higgs updated HDFS-10867: -- Attachment: Block Bit Field Allocation of Provided Storage.docx
[jira] [Updated] (HDFS-10867) Block Bit Field Allocation of Provided Storage
[ https://issues.apache.org/jira/browse/HDFS-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ewan Higgs updated HDFS-10867: -- Attachment: Block Bit Field Allocation of Provided Storage.pdf
[jira] [Updated] (HDFS-10867) Block Bit Field Allocation of Provided Storage
[ https://issues.apache.org/jira/browse/HDFS-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ewan Higgs updated HDFS-10867: -- Attachment: (was: Block Bit Field Allocation of Provided Storage.docx)
[jira] [Updated] (HDFS-10797) Disk usage summary of snapshots causes renamed blocks to get counted twice
[ https://issues.apache.org/jira/browse/HDFS-10797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Mackrory updated HDFS-10797: - Attachment: HDFS-10797.003.patch Thanks for the review, [~xiaochen]. I do think that test is worth adding. Attaching a patch with that (it passes) and a timeout. > Disk usage summary of snapshots causes renamed blocks to get counted twice > -- > > Key: HDFS-10797 > URL: https://issues.apache.org/jira/browse/HDFS-10797 > Project: Hadoop HDFS > Issue Type: Bug > Reporter: Sean Mackrory > Assignee: Sean Mackrory > Attachments: HDFS-10797.001.patch, HDFS-10797.002.patch, > HDFS-10797.003.patch > > > DirectoryWithSnapshotFeature.computeContentSummary4Snapshot calculates how > much disk space is used by a snapshot by tallying up the files in the > snapshot that have since been deleted (that way it won't overlap with regular > files whose disk usage is computed separately). However, that is determined > from a diff that shows moved (to Trash or otherwise) or renamed files as a > deletion plus a creation, whose blocks may overlap. Only the deletion > operation is taken into consideration, and this causes those blocks to get > represented twice in the disk usage tallying.
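The double-counting described above can be sketched with a toy model. This is not the actual patch: `tally`, the uniform block size, and the array-based file model are all assumptions made to show how remembering already-counted block IDs keeps a rename (which the snapshot diff reports as a deletion plus a creation) from being counted twice.

```java
// Toy model (assumed, simplified; not the HDFS-10797 implementation):
// files are arrays of block IDs, and all blocks have the same size.
import java.util.HashSet;
import java.util.Set;

public class SnapshotUsage {
    static long tally(long[][] liveFiles, long[][] deletedInDiff, long blockSize) {
        Set<Long> counted = new HashSet<>();
        long total = 0;
        // Regular (live) files are tallied first.
        for (long[] file : liveFiles) {
            for (long b : file) {
                if (counted.add(b)) total += blockSize;
            }
        }
        // A renamed file appears in the diff as a "deletion", but its blocks
        // are already in `counted`, so they are not added a second time.
        for (long[] file : deletedInDiff) {
            for (long b : file) {
                if (counted.add(b)) total += blockSize;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        long[][] live = {{1, 2}};          // renamed file, still present under the new name
        long[][] deleted = {{1, 2}, {3}};  // diff: the rename's "deletion" + a real deletion
        System.out.println(tally(live, deleted, 128)); // prints 384, not 512
    }
}
```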
[jira] [Commented] (HDFS-10865) Datanodemanager adds nodes twice to NetworkTopology
[ https://issues.apache.org/jira/browse/HDFS-10865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15496597#comment-15496597 ] Rushabh S Shah commented on HDFS-10865: --- I don't think that is the case.
{code:title=DatanodeManager.java|borderStyle=solid}
// Some comments here
public void registerDatanode(DatanodeRegistration nodeReg) {
  ...
  DatanodeDescriptor nodeS = getDatanode(nodeReg.getDatanodeUuid());
  if (nodeS != null) {
    ...
    getNetworkTopology().add(nodeS);
    ...
    return;
  }
  DatanodeDescriptor nodeDescr = new DatanodeDescriptor(nodeReg, NetworkTopology.DEFAULT_RACK);
  ...
  networktopology.add(nodeDescr);
}
{code}
So {{getNetworkTopology().add(nodeS);}} is only called if {{nodeS != null}}, and at the end of that if block it returns. [~elgoiri]: please correct me if my understanding is wrong. > Datanodemanager adds nodes twice to NetworkTopology > --- > > Key: HDFS-10865 > URL: https://issues.apache.org/jira/browse/HDFS-10865 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode > Affects Versions: 2.7.3 > Reporter: Inigo Goiri > Assignee: Inigo Goiri > Attachments: HDFS-10865.000.patch > > > {{DatanodeManager}} tries to add datanodes to the {{NetworkTopology}} twice > in {{registerDatanode()}}.
[jira] [Commented] (HDFS-10865) Datanodemanager adds nodes twice to NetworkTopology
[ https://issues.apache.org/jira/browse/HDFS-10865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15496606#comment-15496606 ] Inigo Goiri commented on HDFS-10865: Pinging [~cmccabe], [~atm], and [~djp] as they were involved in HDFS-4521.
[jira] [Assigned] (HDFS-10630) Federation State Store
[ https://issues.apache.org/jira/browse/HDFS-10630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Inigo Goiri reassigned HDFS-10630: -- Assignee: Jason Kace > Federation State Store > -- > > Key: HDFS-10630 > URL: https://issues.apache.org/jira/browse/HDFS-10630 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs > Reporter: Inigo Goiri > Assignee: Jason Kace > Attachments: HDFS-10630.001.patch > > > Interface to store the federation shared state across Routers.
[jira] [Updated] (HDFS-10630) Federation State Store
[ https://issues.apache.org/jira/browse/HDFS-10630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Inigo Goiri updated HDFS-10630: --- Status: In Progress (was: Patch Available)
[jira] [Commented] (HDFS-10865) Datanodemanager adds nodes twice to NetworkTopology
[ https://issues.apache.org/jira/browse/HDFS-10865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15496639#comment-15496639 ] Rushabh S Shah commented on HDFS-10865: --- You are absolutely right, my bad. I should have read the code properly before commenting.
[jira] [Assigned] (HDFS-10687) Federation Membership State Store internal APIs
[ https://issues.apache.org/jira/browse/HDFS-10687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Inigo Goiri reassigned HDFS-10687: -- Assignee: Jason Kace (was: Inigo Goiri) > Federation Membership State Store internal APIs > --- > > Key: HDFS-10687 > URL: https://issues.apache.org/jira/browse/HDFS-10687 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs > Reporter: Inigo Goiri > Assignee: Jason Kace > > The Federation Membership State encapsulates the information about the > Namenodes of each sub-cluster that are participating in Federation. The > information includes RPC and Web addresses. This information is stored in > the State Store and later used by the Router to find data in the federation.
[jira] [Commented] (HDFS-10865) Datanodemanager adds nodes twice to NetworkTopology
[ https://issues.apache.org/jira/browse/HDFS-10865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15496634#comment-15496634 ] Inigo Goiri commented on HDFS-10865: [~shahrs87], that's the code path to update a node that was already in the cluster ({{nodeS != null}}). However, if this is a fresh node, we first execute line 1051 (the one in your comment). This is the code:
{code}
public void registerDatanode(DatanodeRegistration nodeReg) {
  ...
  DatanodeDescriptor nodeS = getDatanode(nodeReg.getDatanodeUuid());
  if (nodeS != null) {
    ...
    getNetworkTopology().add(nodeS);
    ...
    return;
  }
  DatanodeDescriptor nodeDescr = new DatanodeDescriptor(nodeReg, NetworkTopology.DEFAULT_RACK);
  ...
  networktopology.add(nodeDescr);
  nodeDescr.setSoftwareVersion(nodeReg.getSoftwareVersion());
  resolveUpgradeDomain(nodeDescr);
  // register new datanode
  addDatanode(nodeDescr);
  ...
}

void addDatanode(final DatanodeDescriptor node) {
  ...
  networktopology.add(node); // may throw InvalidTopologyException
  host2DatanodeMap.add(node);
  checkIfClusterIsNowMultiRack(node);
  resolveUpgradeDomain(node);
  ...
}
{code}
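The flow quoted in the comment above can be condensed into a small stand-alone sketch. Everything here is a hypothetical stand-in (a list of strings instead of {{NetworkTopology}} and {{DatanodeDescriptor}}); it only reproduces the control flow showing why a fresh node ends up in the topology twice.

```java
// Condensed, hypothetical sketch of the registerDatanode()/addDatanode()
// control flow (not the real DatanodeManager classes).
import java.util.ArrayList;
import java.util.List;

public class DoubleAddDemo {
    static List<String> topology = new ArrayList<>();

    static void addDatanode(String node) {
        topology.add(node); // second add for a fresh node
    }

    static void registerDatanode(String node, boolean alreadyKnown) {
        if (alreadyKnown) {
            // Re-registration path: the early return prevents a double add here.
            topology.add(node);
            return;
        }
        topology.add(node); // first add, in registerDatanode() itself
        addDatanode(node);  // adds the same node again
    }

    public static void main(String[] args) {
        registerDatanode("dn1", false);
        System.out.println(topology.size()); // prints 2 for a single fresh node
    }
}
```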
[jira] [Updated] (HDFS-10630) Federation State Store
[ https://issues.apache.org/jira/browse/HDFS-10630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Inigo Goiri updated HDFS-10630: --- Status: Patch Available (was: Open) > Federation State Store > -- > > Key: HDFS-10630 > URL: https://issues.apache.org/jira/browse/HDFS-10630 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Inigo Goiri > Attachments: HDFS-10630.001.patch > > > Interface to store the federation shared state across Routers. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-9895) Remove unnecessary conf cache from DataNode
[ https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15496875#comment-15496875 ] Xiaobing Zhou commented on HDFS-9895: - Thank you [~arpitagarwal] for committing it. Branch-2 patch v003 is posted. > Remove unnecessary conf cache from DataNode > --- > > Key: HDFS-9895 > URL: https://issues.apache.org/jira/browse/HDFS-9895 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Fix For: 3.0.0-alpha2 > > Attachments: HDFS-9895-HDFS-9000-branch-2.003.patch, > HDFS-9895-HDFS-9000.002.patch, HDFS-9895-HDFS-9000.003.patch, > HDFS-9895.000.patch, HDFS-9895.001.patch > > > Since DataNode inherits ReconfigurableBase with Configured as base class > where configuration is maintained, DataNode#conf should be removed for the > purpose of brevity. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10794) [SPS]: Provide storage policy satisfy worker at DN for co-ordinating the block storage movement work
[ https://issues.apache.org/jira/browse/HDFS-10794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15496876#comment-15496876 ] Rakesh R commented on HDFS-10794: - Thank you [~drankye] for the help in reviews & commit. > [SPS]: Provide storage policy satisfy worker at DN for co-ordinating the > block storage movement work > > > Key: HDFS-10794 > URL: https://issues.apache.org/jira/browse/HDFS-10794 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Affects Versions: HDFS-10285 >Reporter: Rakesh R >Assignee: Rakesh R > Fix For: HDFS-10285 > > Attachments: HDFS-10794-00.patch, HDFS-10794-HDFS-10285.00.patch, > HDFS-10794-HDFS-10285.01.patch, HDFS-10794-HDFS-10285.02.patch, > HDFS-10794-HDFS-10285.03.patch > > > The idea of this jira is to implement a mechanism to move the blocks to the > given target in order to satisfy the block storage policy. Datanode receives > {{blocktomove}} details via heart beat response from NN. More specifically, > its a datanode side extension to handle the block storage movement commands. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Reopened] (HDFS-9895) Remove unnecessary conf cache from DataNode
[ https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal reopened HDFS-9895: - > Remove unnecessary conf cache from DataNode > --- > > Key: HDFS-9895 > URL: https://issues.apache.org/jira/browse/HDFS-9895 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Fix For: 3.0.0-alpha2 > > Attachments: HDFS-9895-HDFS-9000-branch-2.003.patch, > HDFS-9895-HDFS-9000.002.patch, HDFS-9895-HDFS-9000.003.patch, > HDFS-9895.000.patch, HDFS-9895.001.patch > > > Since DataNode inherits ReconfigurableBase with Configured as base class > where configuration is maintained, DataNode#conf should be removed for the > purpose of brevity. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-9895) Remove unnecessary conf cache from DataNode
[ https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-9895: Status: Patch Available (was: Reopened) > Remove unnecessary conf cache from DataNode > --- > > Key: HDFS-9895 > URL: https://issues.apache.org/jira/browse/HDFS-9895 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Fix For: 3.0.0-alpha2 > > Attachments: HDFS-9895-HDFS-9000-branch-2.003.patch, > HDFS-9895-HDFS-9000.002.patch, HDFS-9895-HDFS-9000.003.patch, > HDFS-9895.000.patch, HDFS-9895.001.patch > > > Since DataNode inherits ReconfigurableBase with Configured as base class > where configuration is maintained, DataNode#conf should be removed for the > purpose of brevity. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10797) Disk usage summary of snapshots causes renamed blocks to get counted twice
[ https://issues.apache.org/jira/browse/HDFS-10797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15496878#comment-15496878 ] Hadoop QA commented on HDFS-10797: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s{color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 23s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}109m 34s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeUUID | | | hadoop.hdfs.TestCrcCorruption | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-10797 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12828842/HDFS-10797.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 31121d442c83 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / b09a03c | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/16770/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/16770/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/16770/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Disk usage summary of snapshots causes renamed blocks to get counted twice > -- > > Key: HDFS-10797 > URL: https://issue
[jira] [Updated] (HDFS-9895) Remove unnecessary conf cache from DataNode
[ https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-9895: Attachment: HDFS-9895-HDFS-9000-branch-2.003.patch > Remove unnecessary conf cache from DataNode > --- > > Key: HDFS-9895 > URL: https://issues.apache.org/jira/browse/HDFS-9895 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Fix For: 3.0.0-alpha2 > > Attachments: HDFS-9895-HDFS-9000-branch-2.003.patch, > HDFS-9895-HDFS-9000.002.patch, HDFS-9895-HDFS-9000.003.patch, > HDFS-9895.000.patch, HDFS-9895.001.patch > > > Since DataNode inherits ReconfigurableBase with Configured as base class > where configuration is maintained, DataNode#conf should be removed for the > purpose of brevity. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-9895) Remove unnecessary conf cache from DataNode
[ https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15496897#comment-15496897 ] Hadoop QA commented on HDFS-9895: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s{color} | {color:red} HDFS-9895 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-9895 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12828860/HDFS-9895-HDFS-9000-branch-2.003.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/16771/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Remove unnecessary conf cache from DataNode > --- > > Key: HDFS-9895 > URL: https://issues.apache.org/jira/browse/HDFS-9895 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Fix For: 3.0.0-alpha2 > > Attachments: HDFS-9895-HDFS-9000-branch-2.003.patch, > HDFS-9895-HDFS-9000.002.patch, HDFS-9895-HDFS-9000.003.patch, > HDFS-9895.000.patch, HDFS-9895.001.patch > > > Since DataNode inherits ReconfigurableBase with Configured as base class > where configuration is maintained, DataNode#conf should be removed for the > purpose of brevity. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10810) setReplication removing block from under-construction temporarily when batch IBR is enabled.
[ https://issues.apache.org/jira/browse/HDFS-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-10810: Description: 1)Batch IBR is enabled with number of committed blocks allowed=1 2) Written one block and closed the file without waiting for IBR 3)Setreplication called immediately on the file. So till the finalized IBR Received, block will not be added to neededReconstruction {code} if (isNeededReconstruction(block, repl.liveReplicas())) { neededReconstruction.update(block, repl.liveReplicas(), repl.readOnlyReplicas(), repl.decommissionedAndDecommissioning(), curExpectedReplicas, curReplicasDelta, expectedReplicasDelta); }.{code} was: 1)Batch IBR is enabled with number of committed blocks allowed=1 2) Written one block and closed the file without waiting for IBR 3)Setreplication called immediately on the file. So till the finalized IBR Received, this block will be marked as corrupt. > Setreplication removing block from underconstrcution temporarily when batch > IBR is enabled. > > > Key: HDFS-10810 > URL: https://issues.apache.org/jira/browse/HDFS-10810 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula > Attachments: HDFS-10810-002.patch, HDFS-10810.patch > > > 1)Batch IBR is enabled with number of committed blocks allowed=1 > 2) Written one block and closed the file without waiting for IBR > 3)Setreplication called immediately on the file. 
> So till the finalized IBR Received, block will not be added to > neededReconstruction > {code} > if (isNeededReconstruction(block, repl.liveReplicas())) { > neededReconstruction.update(block, repl.liveReplicas(), > repl.readOnlyReplicas(), repl.decommissionedAndDecommissioning(), > curExpectedReplicas, curReplicasDelta, expectedReplicasDelta); > }.{code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-9895) Remove unnecessary conf cache from DataNode
[ https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-9895: Attachment: HDFS-9895-branch-2.003.patch > Remove unnecessary conf cache from DataNode > --- > > Key: HDFS-9895 > URL: https://issues.apache.org/jira/browse/HDFS-9895 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Fix For: 3.0.0-alpha2 > > Attachments: HDFS-9895-HDFS-9000.002.patch, > HDFS-9895-HDFS-9000.003.patch, HDFS-9895-branch-2.003.patch, > HDFS-9895.000.patch, HDFS-9895.001.patch > > > Since DataNode inherits ReconfigurableBase with Configured as base class > where configuration is maintained, DataNode#conf should be removed for the > purpose of brevity. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-9895) Remove unnecessary conf cache from DataNode
[ https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-9895: Attachment: (was: HDFS-9895-HDFS-9000-branch-2.003.patch) > Remove unnecessary conf cache from DataNode > --- > > Key: HDFS-9895 > URL: https://issues.apache.org/jira/browse/HDFS-9895 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Fix For: 3.0.0-alpha2 > > Attachments: HDFS-9895-HDFS-9000.002.patch, > HDFS-9895-HDFS-9000.003.patch, HDFS-9895-branch-2.003.patch, > HDFS-9895.000.patch, HDFS-9895.001.patch > > > Since DataNode inherits ReconfigurableBase with Configured as base class > where configuration is maintained, DataNode#conf should be removed for the > purpose of brevity. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-6532) Intermittent test failure org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt
[ https://issues.apache.org/jira/browse/HDFS-6532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-6532: Attachment: PreCommit-HDFS-Build #16770 test - testCorruptionDuringWrt [Jenkins].pdf I haven't managed to look more into this, but attaching a failed precommit log as of today. > Intermittent test failure > org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt > -- > > Key: HDFS-6532 > URL: https://issues.apache.org/jira/browse/HDFS-6532 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs-client >Affects Versions: 2.4.0 >Reporter: Yongjun Zhang >Assignee: Yiqun Lin > Attachments: HDFS-6532.001.patch, PreCommit-HDFS-Build #16770 test - > testCorruptionDuringWrt [Jenkins].pdf, > TEST-org.apache.hadoop.hdfs.TestCrcCorruption.xml > > > Per https://builds.apache.org/job/Hadoop-Hdfs-trunk/1774/testReport, we had > the following failure. Local rerun is successful > {code} > Regression > org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt > Failing for the past 1 build (Since Failed#1774 ) > Took 50 sec. 
> Error Message > test timed out after 5 milliseconds > Stacktrace > java.lang.Exception: test timed out after 5 milliseconds > at java.lang.Object.wait(Native Method) > at > org.apache.hadoop.hdfs.DFSOutputStream.waitForAckedSeqno(DFSOutputStream.java:2024) > at > org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:2008) > at > org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2107) > at > org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:70) > at > org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:98) > at > org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt(TestCrcCorruption.java:133) > {code} > See relevant exceptions in log > {code} > 2014-06-14 11:56:15,283 WARN datanode.DataNode > (BlockReceiver.java:verifyChunks(404)) - Checksum error in block > BP-1675558312-67.195.138.30-1402746971712:blk_1073741825_1001 from > /127.0.0.1:41708 > org.apache.hadoop.fs.ChecksumException: Checksum error: > DFSClient_NONMAPREDUCE_-1139495951_8 at 64512 exp: 1379611785 got: -12163112 > at > org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:353) > at > org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:284) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.verifyChunks(BlockReceiver.java:402) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:537) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:734) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:741) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:234) > at java.lang.Thread.run(Thread.java:662) > 2014-06-14 11:56:15,285 WARN 
datanode.DataNode > (BlockReceiver.java:run(1207)) - IOException in BlockReceiver.run(): > java.io.IOException: Shutting down writer and responder due to a checksum > error in received data. The error response has been sent upstream. > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1352) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1278) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1199) > at java.lang.Thread.run(Thread.java:662) > ... > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
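The `ChecksumException` in the log above comes from per-chunk CRC verification: block data is checked in fixed-size chunks, each carrying its own checksum, so one flipped bit is caught in the chunk it lands in. A minimal sketch in that style (simplified; not the real `DataChecksum.verifyChunkedSums()` implementation):

```java
import java.util.zip.CRC32;

// Sketch of chunked checksum verification: compute a CRC32 per
// 512-byte chunk, then report the first chunk whose CRC mismatches.
public class ChunkedCrcSketch {
    static final int CHUNK_SIZE = 512;

    static long[] computeChunkSums(byte[] data) {
        int chunks = (data.length + CHUNK_SIZE - 1) / CHUNK_SIZE;
        long[] sums = new long[chunks];
        for (int i = 0; i < chunks; i++) {
            CRC32 crc = new CRC32();
            int len = Math.min(CHUNK_SIZE, data.length - i * CHUNK_SIZE);
            crc.update(data, i * CHUNK_SIZE, len);
            sums[i] = crc.getValue();
        }
        return sums;
    }

    /** Returns the index of the first corrupt chunk, or -1 if all verify. */
    static int verifyChunkedSums(byte[] data, long[] expected) {
        long[] actual = computeChunkSums(data);
        for (int i = 0; i < actual.length; i++) {
            if (actual[i] != expected[i]) {
                return i; // the real code throws ChecksumException here
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        byte[] data = new byte[2048];
        long[] sums = computeChunkSums(data);
        data[1000] ^= 1; // corrupt one byte: offset 1000 is in chunk 1
        System.out.println(verifyChunkedSums(data, sums)); // 1
    }
}
```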
[jira] [Commented] (HDFS-10810) setReplication removing block from under-construction temporarily when batch IBR is enabled.
[ https://issues.apache.org/jira/browse/HDFS-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15496965#comment-15496965 ] Brahma Reddy Battula commented on HDFS-10810: - The scenario is: a) Enable batch IBR, write one block, and close the file. b) Call setrep on this file. c) As the block is only committed, {{isNeededReconstruction}} will be false and the block will not be updated in {{neededReconstruction}}, hence it will not be marked as under-replicated. > Setreplication removing block from underconstrcution temporarily when batch > IBR is enabled. > > > Key: HDFS-10810 > URL: https://issues.apache.org/jira/browse/HDFS-10810 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula > Attachments: HDFS-10810-002.patch, HDFS-10810.patch > > > 1)Batch IBR is enabled with number of committed blocks allowed=1 > 2) Written one block and closed the file without waiting for IBR > 3)Setreplication called immediately on the file. > So till the finalized IBR Received, block will not be added to > neededReconstruction > {code} > if (isNeededReconstruction(block, repl.liveReplicas())) { > neededReconstruction.update(block, repl.liveReplicas(), > repl.readOnlyReplicas(), repl.decommissionedAndDecommissioning(), > curExpectedReplicas, curReplicasDelta, expectedReplicasDelta); > }.{code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10810) setReplication removing block from under-construction temporarily when batch IBR is enabled.
[ https://issues.apache.org/jira/browse/HDFS-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-10810: Description: 1)Batch IBR is enabled with number of committed blocks allowed=1 2) Written one block and closed the file without waiting for IBR 3)Setreplication called immediately on the file. So till the finalized IBR Received, block will not be added to {{neededReconstruction}} since following check will be {{false}} as block is not marked as complete. {code} if (isNeededReconstruction(block, repl.liveReplicas())) { neededReconstruction.update(block, repl.liveReplicas(), repl.readOnlyReplicas(), repl.decommissionedAndDecommissioning(), curExpectedReplicas, curReplicasDelta, expectedReplicasDelta); }.{code} Hence block will not marked as under-replicated. was: 1)Batch IBR is enabled with number of committed blocks allowed=1 2) Written one block and closed the file without waiting for IBR 3)Setreplication called immediately on the file. So till the finalized IBR Received, block will not be added to neededReconstruction {code} if (isNeededReconstruction(block, repl.liveReplicas())) { neededReconstruction.update(block, repl.liveReplicas(), repl.readOnlyReplicas(), repl.decommissionedAndDecommissioning(), curExpectedReplicas, curReplicasDelta, expectedReplicasDelta); }.{code} > Setreplication removing block from underconstrcution temporarily when batch > IBR is enabled. > > > Key: HDFS-10810 > URL: https://issues.apache.org/jira/browse/HDFS-10810 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula > Attachments: HDFS-10810-002.patch, HDFS-10810.patch > > > 1)Batch IBR is enabled with number of committed blocks allowed=1 > 2) Written one block and closed the file without waiting for IBR > 3)Setreplication called immediately on the file. 
> So till the finalized IBR Received, block will not be added to > {{neededReconstruction}} since following check will be {{false}} as block is > not marked as complete. > {code} > if (isNeededReconstruction(block, repl.liveReplicas())) { > neededReconstruction.update(block, repl.liveReplicas(), > repl.readOnlyReplicas(), repl.decommissionedAndDecommissioning(), > curExpectedReplicas, curReplicasDelta, expectedReplicasDelta); > }.{code} > Hence block will not marked as under-replicated. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
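The gap described in the HDFS-10810 report above can be sketched in a few lines: a committed (but not yet complete) block fails the `isNeededReconstruction` check, so a `setReplication` call issued right after `close()` queues nothing until the finalized IBR arrives. This is an illustrative model only; the names are simplified, not the real BlockManager APIs.

```java
// Minimal model of the committed-vs-complete check from HDFS-10810.
public class CommittedBlockSketch {
    enum BlockState { COMMITTED, COMPLETE }

    // Only COMPLETE blocks are considered for reconstruction; a
    // COMMITTED block waiting on its finalized IBR is skipped.
    static boolean isNeededReconstruction(BlockState state, int liveReplicas, int expectedReplicas) {
        return state == BlockState.COMPLETE && liveReplicas < expectedReplicas;
    }

    public static void main(String[] args) {
        // File closed, block committed, IBR not yet received:
        // setReplication(3) finds nothing to queue.
        System.out.println(isNeededReconstruction(BlockState.COMMITTED, 1, 3)); // false
        // After the finalized IBR the block is COMPLETE and the same
        // check reports it as under-replicated.
        System.out.println(isNeededReconstruction(BlockState.COMPLETE, 1, 3));  // true
    }
}
```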
[jira] [Commented] (HDFS-10797) Disk usage summary of snapshots causes renamed blocks to get counted twice
[ https://issues.apache.org/jira/browse/HDFS-10797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15496960#comment-15496960 ] Xiao Chen commented on HDFS-10797: -- Thanks Sean for revving! Patch 3 looks great to me. Hi [~yzhangal] and [~jingzhao], could you please take a final look since you're more familiar with snapshots? Thanks in advance. > Disk usage summary of snapshots causes renamed blocks to get counted twice > -- > > Key: HDFS-10797 > URL: https://issues.apache.org/jira/browse/HDFS-10797 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Sean Mackrory >Assignee: Sean Mackrory > Attachments: HDFS-10797.001.patch, HDFS-10797.002.patch, > HDFS-10797.003.patch > > > DirectoryWithSnapshotFeature.computeContentSummary4Snapshot calculates how > much disk usage is used by a snapshot by tallying up the files in the > snapshot that have since been deleted (that way it won't overlap with regular > files whose disk usage is computed separately). However that is determined > from a diff that shows moved (to Trash or otherwise) or renamed files as a > deletion and a creation operation that may overlap with the list of blocks. > Only the deletion operation is taken into consideration, and this causes > those blocks to get represented twice in the disk usage tallying. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
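The double counting described in the HDFS-10797 report reduces to this: a rename appears in the snapshot diff as a deletion plus a creation of the same blocks, so tallying the "deleted in snapshot" list and the live file list independently counts those blocks twice, while deduplicating by block ID counts them once. A minimal sketch (illustrative names only, not the actual `computeContentSummary4Snapshot` code):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of the rename double-count: the same block IDs appear both in
// the live namespace and in the snapshot diff's deletion list.
public class SnapshotUsageSketch {
    // Naive tally: sums the two lists independently (the buggy behavior)
    static long naiveCount(List<Long> live, List<Long> deleted) {
        return live.size() + deleted.size();
    }

    // Deduplicated tally: each block ID counted once
    static long dedupCount(List<Long> live, List<Long> deleted) {
        Set<Long> unique = new HashSet<>(live);
        unique.addAll(deleted);
        return unique.size();
    }

    public static void main(String[] args) {
        List<Long> liveBlocks = Arrays.asList(1L, 2L);      // renamed file, current name
        List<Long> snapshotDeleted = Arrays.asList(1L, 2L); // same blocks, old name in the diff

        System.out.println(naiveCount(liveBlocks, snapshotDeleted)); // 4
        System.out.println(dedupCount(liveBlocks, snapshotDeleted)); // 2
    }
}
```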
[jira] [Updated] (HDFS-10713) Throttle FsNameSystem lock warnings
[ https://issues.apache.org/jira/browse/HDFS-10713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDFS-10713: -- Attachment: HDFS-10713.008.patch Fixed Checkstyle and Unit test errors > Throttle FsNameSystem lock warnings > --- > > Key: HDFS-10713 > URL: https://issues.apache.org/jira/browse/HDFS-10713 > Project: Hadoop HDFS > Issue Type: Bug > Components: logging, namenode >Reporter: Arpit Agarwal >Assignee: Hanisha Koneru > Attachments: HDFS-10713.000.patch, HDFS-10713.001.patch, > HDFS-10713.002.patch, HDFS-10713.003.patch, HDFS-10713.004.patch, > HDFS-10713.005.patch, HDFS-10713.006.patch, HDFS-10713.007.patch, > HDFS-10713.008.patch > > > The NameNode logs a message if the FSNamesystem write lock is held by a > thread for over 1 second. These messages can be throttled to at one most one > per x minutes to avoid potentially filling up NN logs. We can also log the > number of suppressed notices since the last log message. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
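The throttling described in the HDFS-10713 summary (at most one warning per window, plus a count of suppressed notices) can be sketched as below. This is a standalone illustration, not the actual FSNamesystem patch; all names are assumptions.

```java
// Sketch of a lock-warning throttle: log at most one warning per
// window and report how many warnings were suppressed in between.
public class LockWarningThrottle {
    private final long windowMs;
    private long lastLoggedMs = Long.MIN_VALUE; // sentinel: never logged
    private long suppressed = 0;

    LockWarningThrottle(long windowMs) {
        this.windowMs = windowMs;
    }

    /** Returns a message to log, or null if this warning is suppressed. */
    String onLongHold(long nowMs, long heldMs) {
        if (lastLoggedMs != Long.MIN_VALUE && nowMs - lastLoggedMs < windowMs) {
            suppressed++; // inside the window: count it, log nothing
            return null;
        }
        String msg = "Write lock held for " + heldMs + " ms ("
            + suppressed + " warnings suppressed since last report)";
        lastLoggedMs = nowMs;
        suppressed = 0;
        return msg;
    }

    public static void main(String[] args) {
        LockWarningThrottle t = new LockWarningThrottle(60_000);
        System.out.println(t.onLongHold(0, 1500) != null);      // true: first warning logged
        System.out.println(t.onLongHold(10_000, 2000) == null); // true: inside window, suppressed
        System.out.println(t.onLongHold(70_000, 1200));         // logged again, reports 1 suppressed
    }
}
```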
[jira] [Updated] (HDFS-9480) Expose nonDfsUsed via StorageTypeStats
[ https://issues.apache.org/jira/browse/HDFS-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-9480: --- Summary: Expose nonDfsUsed via StorageTypeStats (was: Expose nonDfsUsed via StorageTypeStats and DatanodeStatistics) > Expose nonDfsUsed via StorageTypeStats > > > Key: HDFS-9480 > URL: https://issues.apache.org/jira/browse/HDFS-9480 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula > Attachments: HDFS-9480.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-9480) Expose nonDfsUsed via StorageTypeStats
[ https://issues.apache.org/jira/browse/HDFS-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-9480: --- Status: Patch Available (was: Open) > Expose nonDfsUsed via StorageTypeStats > > > Key: HDFS-9480 > URL: https://issues.apache.org/jira/browse/HDFS-9480 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula > Attachments: HDFS-9480.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-9480) Expose nonDfsUsed via StorageTypeStats
[ https://issues.apache.org/jira/browse/HDFS-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15496999#comment-15496999 ] Brahma Reddy Battula commented on HDFS-9480: Uploaded the patch. [~arpitagarwal], kindly review. > Expose nonDfsUsed via StorageTypeStats > > > Key: HDFS-9480 > URL: https://issues.apache.org/jira/browse/HDFS-9480 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula > Attachments: HDFS-9480.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-9480) Expose nonDfsUsed via StorageTypeStats
[ https://issues.apache.org/jira/browse/HDFS-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-9480: --- Description: Expose nonDfsUsed via StorageTypeStats > Expose nonDfsUsed via StorageTypeStats > > > Key: HDFS-9480 > URL: https://issues.apache.org/jira/browse/HDFS-9480 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula > Attachments: HDFS-9480.patch > > > Expose nonDfsUsed via StorageTypeStats -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-9480) Expose nonDfsUsed via StorageTypeStats and DatanodeStatistics
[ https://issues.apache.org/jira/browse/HDFS-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-9480: --- Attachment: HDFS-9480.patch > Expose nonDfsUsed via StorageTypeStats and DatanodeStatistics > -- > > Key: HDFS-9480 > URL: https://issues.apache.org/jira/browse/HDFS-9480 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula > Attachments: HDFS-9480.patch > >
[jira] [Updated] (HDFS-9480) Expose nonDfsUsed via StorageTypeStats
[ https://issues.apache.org/jira/browse/HDFS-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-9480: --- Description: Expose nonDfsUsed via StorageTypeStats..See the discussion [here | https://issues.apache.org/jira/browse/HDFS-9038?focusedCommentId=15018761&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15018761] from arpit. (was: Expose nonDfsUsed via StorageTypeStats..See the discussion[here | https://issues.apache.org/jira/browse/HDFS-9038?focusedCommentId=15018761&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15018761] from arpit. ) > Expose nonDfsUsed via StorageTypeStats > > > Key: HDFS-9480 > URL: https://issues.apache.org/jira/browse/HDFS-9480 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula > Attachments: HDFS-9480.patch > > > Expose nonDfsUsed via StorageTypeStats..See the discussion [here | > https://issues.apache.org/jira/browse/HDFS-9038?focusedCommentId=15018761&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15018761] > from arpit.
[jira] [Updated] (HDFS-9480) Expose nonDfsUsed via StorageTypeStats
[ https://issues.apache.org/jira/browse/HDFS-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-9480: --- Description: Expose nonDfsUsed via StorageTypeStats..See the discussion[here | https://issues.apache.org/jira/browse/HDFS-9038?focusedCommentId=15018761&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15018761] from arpit. (was: Expose nonDfsUsed via StorageTypeStats ) > Expose nonDfsUsed via StorageTypeStats > > > Key: HDFS-9480 > URL: https://issues.apache.org/jira/browse/HDFS-9480 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula > Attachments: HDFS-9480.patch > > > Expose nonDfsUsed via StorageTypeStats..See the discussion[here | > https://issues.apache.org/jira/browse/HDFS-9038?focusedCommentId=15018761&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15018761] > from arpit.
[jira] [Updated] (HDFS-9480) Expose nonDfsUsed via StorageTypeStats
[ https://issues.apache.org/jira/browse/HDFS-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-9480: --- Description: Expose nonDfsUsed via StorageTypeStats..See the comment [here | https://issues.apache.org/jira/browse/HDFS-9038?focusedCommentId=15018761&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15018761] from arpit. (was: Expose nonDfsUsed via StorageTypeStats..See the discussion [here | https://issues.apache.org/jira/browse/HDFS-9038?focusedCommentId=15018761&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15018761] from arpit. ) > Expose nonDfsUsed via StorageTypeStats > > > Key: HDFS-9480 > URL: https://issues.apache.org/jira/browse/HDFS-9480 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula > Attachments: HDFS-9480.patch > > > Expose nonDfsUsed via StorageTypeStats..See the comment [here | > https://issues.apache.org/jira/browse/HDFS-9038?focusedCommentId=15018761&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15018761] > from arpit.
[jira] [Commented] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity
[ https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497051#comment-15497051 ] Chris Nauroth commented on HDFS-10824: -- [~xiaobingo], thank you for the patch. It appears that at least one of the test failures, {{TestFsDatasetImpl}}, was caused by patch revision 003. That test passes for me on current trunk, and then it times out after I apply patch 003. I didn't fully investigate the root cause. However, I did run jstack on the JUnit process to see what was happening. I've pasted the relevant stack trace for the main thread below. After restarting the mini-cluster, the thread is blocked while trying to trigger a heartbeat. Perhaps something in the patch has impacted reinitialization after DataNode restart, such as delivery of the initial block report. I hope this helps with investigation.
{code}
"main" #1 prio=5 os_prio=31 tid=0x7fee83801800 nid=0x1703 in Object.wait() [0x70218000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
    at java.lang.Object.wait(Native Method)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.triggerHeartbeatForTests(BPServiceActor.java:310)
    - locked <0x00079ae302c0> (a org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.triggerHeartbeatForTests(BPOfferService.java:592)
    at org.apache.hadoop.hdfs.server.datanode.DataNodeTestUtils.triggerHeartbeat(DataNodeTestUtils.java:72)
    at org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2289)
    - locked <0x0007400415e0> (a org.apache.hadoop.hdfs.MiniDFSCluster)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl.testAddVolumeWithSameStorageUuid(TestFsDatasetImpl.java:242)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:254)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:149)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
    at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
{code}
> MiniDFSCluster#storageCapacities has no effects on real capacity > > > Key: HDFS-10824 > URL: https://issues.apache.org/jira/browse/HDFS-10824 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: 
HDFS-10824.000.patch, HDFS-10824.001.patch, > HDFS-10824.002.patch, HDFS-10824.003.patch > > > It has been noticed MiniDFSCluster#storageCapacities has no effects on real > capacity. It can be reproduced by explicitly setting storageCapacities and > then call ClientProtocol#getDatanodeStorageReport(DatanodeReportType.LIVE) to > compare results. The following are storage report for one node with two > volumes after I set capacity as 300 * 1024. Apparently, the capacity is not > changed. > adminState|DatanodeInfo$AdminStates (id=6861) > |blockPoolUsed|215192| >
[jira] [Commented] (HDFS-10713) Throttle FsNameSystem lock warnings
[ https://issues.apache.org/jira/browse/HDFS-10713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497148#comment-15497148 ] Arpit Agarwal commented on HDFS-10713: -- Thanks for the updated patch Hanisha. longestReadLockHeldInterval needs its own do-while loop for atomic update (can be before the existing do-while loop). Also the three test functions in FsNameSystem (setTimer, setTimeStampOfLastReadLockReport and setTimeStampOfLastWriteLockReport) can be package private. LGTM otherwise. > Throttle FsNameSystem lock warnings > --- > > Key: HDFS-10713 > URL: https://issues.apache.org/jira/browse/HDFS-10713 > Project: Hadoop HDFS > Issue Type: Bug > Components: logging, namenode >Reporter: Arpit Agarwal >Assignee: Hanisha Koneru > Attachments: HDFS-10713.000.patch, HDFS-10713.001.patch, > HDFS-10713.002.patch, HDFS-10713.003.patch, HDFS-10713.004.patch, > HDFS-10713.005.patch, HDFS-10713.006.patch, HDFS-10713.007.patch, > HDFS-10713.008.patch > > > The NameNode logs a message if the FSNamesystem write lock is held by a > thread for over 1 second. These messages can be throttled to at most one > per x minutes to avoid potentially filling up NN logs. We can also log the > number of suppressed notices since the last log message.
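The dedicated do-while loop Arpit asks for is the standard compare-and-set maximum pattern. A minimal, hypothetical Java sketch of that pattern, not the actual FSNamesystem code (the field name is borrowed from the comment only):

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Hypothetical sketch of a lock-free "retain the maximum" update, the kind
 * of do-while CAS loop suggested for longestReadLockHeldInterval above.
 */
public class LongestIntervalTracker {
    private final AtomicLong longestReadLockHeldInterval = new AtomicLong(0);

    /** Record an observed interval, keeping only the largest value seen. */
    public void record(long intervalMs) {
        long current;
        do {
            current = longestReadLockHeldInterval.get();
            if (intervalMs <= current) {
                return; // not a new maximum, nothing to update
            }
            // Retry if another thread changed the value between get() and CAS.
        } while (!longestReadLockHeldInterval.compareAndSet(current, intervalMs));
    }

    public long get() {
        return longestReadLockHeldInterval.get();
    }
}
```

The loop retries only on a lost race, so readers never block and the maximum is never lost to a concurrent writer.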
[jira] [Commented] (HDFS-10713) Throttle FsNameSystem lock warnings
[ https://issues.apache.org/jira/browse/HDFS-10713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497173#comment-15497173 ] Hadoop QA commented on HDFS-10713: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s{color} | 
{color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 23s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 194 unchanged - 3 fixed = 195 total (was 197) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 47s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 75m 18s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSShell | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-10713 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12828876/HDFS-10713.008.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux c0e04a852f20 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / b09a03c | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/16774/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/16774/artifact/patchprocess/whitespace-eol.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/16774/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/16774/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/16774/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatic
[jira] [Commented] (HDFS-9480) Expose nonDfsUsed via StorageTypeStats
[ https://issues.apache.org/jira/browse/HDFS-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497204#comment-15497204 ] Hadoop QA commented on HDFS-9480: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 17s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 1s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 89m 47s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs | | | Self assignment of field StorageTypeStats.capacityNonDfsUsed in new org.apache.hadoop.hdfs.server.blockmanagement.StorageTypeStats(long, long, long, long, int) At StorageTypeStats.java:in new org.apache.hadoop.hdfs.server.blockmanagement.StorageTypeStats(long, long, long, long, int) At StorageTypeStats.java:[line 46] | | Failed junit tests | hadoop.hdfs.TestFileChecksum | | | hadoop.hdfs.server.datanode.TestDataNodeLifeline | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-9480 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12828875/HDFS-9480.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 6005d47871e3 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / b09a03c | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/16775/artifact/patchprocess/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/16775/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https:/
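The FindBugs finding above ("Self assignment of field StorageTypeStats.capacityNonDfsUsed" in the new constructor) refers to a constructor assigning a field to itself, which silently leaves the field at its default value. A hypothetical sketch of the bug class and its fix, with an illustrative class name rather than the real StorageTypeStats source:

```java
/**
 * Illustrative reconstruction of the self-assignment bug class FindBugs
 * reports; the class and field here only mimic StorageTypeStats.
 */
public class StorageStatsSketch {
    private long capacityNonDfsUsed;

    public StorageStatsSketch(long nonDfsUsed) {
        // Buggy pattern FindBugs flags: with no parameter of the same name
        // in scope, the statement below assigns the field to itself, so the
        // field silently stays 0:
        //   capacityNonDfsUsed = capacityNonDfsUsed;
        // Fix: assign from the constructor parameter.
        this.capacityNonDfsUsed = nonDfsUsed;
    }

    public long getCapacityNonDfsUsed() {
        return capacityNonDfsUsed;
    }
}
```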
[jira] [Commented] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity
[ https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497235#comment-15497235 ] Arpit Agarwal commented on HDFS-10824: -- Thanks for fixing this [~xiaobingo]. The test failure pointed out by [~cnauroth] also repro'd for me. Minor comment - the member storageCap can just be a long[][] to avoid the conversions on lines 1663 and 2303. > MiniDFSCluster#storageCapacities has no effects on real capacity > > > Key: HDFS-10824 > URL: https://issues.apache.org/jira/browse/HDFS-10824 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-10824.000.patch, HDFS-10824.001.patch, > HDFS-10824.002.patch, HDFS-10824.003.patch > > > It has been noticed MiniDFSCluster#storageCapacities has no effects on real > capacity. It can be reproduced by explicitly setting storageCapacities and > then call ClientProtocol#getDatanodeStorageReport(DatanodeReportType.LIVE) to > compare results. The following are storage report for one node with two > volumes after I set capacity as 300 * 1024. Apparently, the capacity is not > changed. 
> adminState|DatanodeInfo$AdminStates (id=6861) > |blockPoolUsed|215192| > |cacheCapacity|0| > |cacheUsed|0| > |capacity|998164971520| > |datanodeUuid|"839912e9-5bcb-45d1-81cf-9a9c9c02a00b" (id=6862)| > |dependentHostNames|LinkedList (id=6863)| > |dfsUsed|215192| > |hostName|"127.0.0.1" (id=6864)| > |infoPort|64222| > |infoSecurePort|0| > |ipAddr|"127.0.0.1" (id=6865)| > |ipcPort|64223| > |lastUpdate|1472682790948| > |lastUpdateMonotonic|209605640| > |level|0| > |location|"/default-rack" (id=6866)| > |maintenanceExpireTimeInMS|0| > |parent|null| > |peerHostName|null| > |remaining|20486512640| > |softwareVersion|null| > |upgradeDomain|null| > |xceiverCount|1| > |xferAddr|"127.0.0.1:64220" (id=6855)| > |xferPort|64220| > [0]StorageReport (id=6856) > |blockPoolUsed|4096| > |capacity|499082485760| > |dfsUsed|4096| > |failed|false| > |remaining|10243256320| > |storage|DatanodeStorage (id=6869)| > [1]StorageReport (id=6859) > |blockPoolUsed|211096| > |capacity|499082485760| > |dfsUsed|211096| > |failed|false| > |remaining|10243256320| > |storage|DatanodeStorage (id=6872)|
[jira] [Commented] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity
[ https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497274#comment-15497274 ] Xiaobing Zhou commented on HDFS-10824: -- Thank you [~cnauroth] and [~arpitagarwal] for the comments. I will look into the failure. storageCap is intentionally typed as List since startDataNodes is used to start new DNs in an existing cluster. The list retains the old capacity settings and appends the new ones, so both are remembered across DN restarts. > MiniDFSCluster#storageCapacities has no effects on real capacity > > > Key: HDFS-10824 > URL: https://issues.apache.org/jira/browse/HDFS-10824 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-10824.000.patch, HDFS-10824.001.patch, > HDFS-10824.002.patch, HDFS-10824.003.patch > > > It has been noticed MiniDFSCluster#storageCapacities has no effects on real > capacity. It can be reproduced by explicitly setting storageCapacities and > then call ClientProtocol#getDatanodeStorageReport(DatanodeReportType.LIVE) to > compare results. The following are storage report for one node with two > volumes after I set capacity as 300 * 1024. Apparently, the capacity is not > changed. 
> adminState|DatanodeInfo$AdminStates (id=6861) > |blockPoolUsed|215192| > |cacheCapacity|0| > |cacheUsed|0| > |capacity|998164971520| > |datanodeUuid|"839912e9-5bcb-45d1-81cf-9a9c9c02a00b" (id=6862)| > |dependentHostNames|LinkedList (id=6863)| > |dfsUsed|215192| > |hostName|"127.0.0.1" (id=6864)| > |infoPort|64222| > |infoSecurePort|0| > |ipAddr|"127.0.0.1" (id=6865)| > |ipcPort|64223| > |lastUpdate|1472682790948| > |lastUpdateMonotonic|209605640| > |level|0| > |location|"/default-rack" (id=6866)| > |maintenanceExpireTimeInMS|0| > |parent|null| > |peerHostName|null| > |remaining|20486512640| > |softwareVersion|null| > |upgradeDomain|null| > |xceiverCount|1| > |xferAddr|"127.0.0.1:64220" (id=6855)| > |xferPort|64220| > [0]StorageReport (id=6856) > |blockPoolUsed|4096| > |capacity|499082485760| > |dfsUsed|4096| > |failed|false| > |remaining|10243256320| > |storage|DatanodeStorage (id=6869)| > [1]StorageReport (id=6859) > |blockPoolUsed|211096| > |capacity|499082485760| > |dfsUsed|211096| > |failed|false| > |remaining|10243256320| > |storage|DatanodeStorage (id=6872)|
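The bookkeeping Xiaobing describes, appending each new batch of per-datanode capacities to a list so they survive datanode restarts, can be sketched as follows. This is a minimal illustration with invented names, not MiniDFSCluster's actual implementation:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical sketch of capacity bookkeeping for a mini-cluster: each call
 * to a startDataNodes-style method appends its per-datanode capacities, so
 * earlier settings are retained and can be reapplied on restart.
 */
public class CapacityRegistry {
    private final List<long[]> storageCap = new ArrayList<>();

    /** Remember the per-volume capacities of a newly started batch of DNs. */
    public void addDataNodeCapacities(long[][] capacities) {
        for (long[] perNode : capacities) {
            storageCap.add(perNode.clone()); // defensive copy
        }
    }

    /** Look up the capacities recorded for datanode index i, e.g. on restart. */
    public long[] capacitiesFor(int dnIndex) {
        return storageCap.get(dnIndex);
    }

    public int size() {
        return storageCap.size();
    }
}
```

A plain long[][], as suggested in the earlier comment, avoids the list-to-array conversions but cannot grow when a second batch of datanodes is started, which is the trade-off discussed above.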
[jira] [Commented] (HDFS-10489) Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones
[ https://issues.apache.org/jira/browse/HDFS-10489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497296#comment-15497296 ] Andrew Wang commented on HDFS-10489: I think we can also get rid of the DFS_ENCRYPTION_KEY_PROVIDER_URI in DFSConfigKeys and HdfsClientConfigKeys; they are both private classes. +1 pending though, nice work here. > Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones > --- > > Key: HDFS-10489 > URL: https://issues.apache.org/jira/browse/HDFS-10489 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 2.6.4 >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Minor > Attachments: HDFS-10489.01.patch, HDFS-10489.02.patch, > HDFS-10489.03.patch, HDFS-10489.04.patch, HDFS-10489.05.patch > > > When working on HADOOP-13155, we > [discussed|https://issues.apache.org/jira/browse/HADOOP-13155?focusedCommentId=15315117&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15315117] > and concluded that we should use the common config key for key provider uri. > We can deprecate the dfs. key for 3.0.0.
[jira] [Commented] (HDFS-10866) Fix Eclipse Java 8 compile errors related to generic parameters.
[ https://issues.apache.org/jira/browse/HDFS-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497313#comment-15497313 ] Konstantin Shvachko commented on HDFS-10866: Thanks for checking on IntelliJ, Akira. {{(Token)}} is a raw type - that was the problem with the original variant. LMK if {{(Token)any()}} works for you. It does for me. > Fix Eclipse Java 8 compile errors related to generic parameters. > > > Key: HDFS-10866 > URL: https://issues.apache.org/jira/browse/HDFS-10866 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-alpha1 >Reporter: Konstantin Shvachko >Assignee: Konstantin Shvachko > Attachments: HDFS-10866.01.patch, IntelliJ.png > > > Compilation with Java 8 in Eclipse returns errors, which are related to the > use of generics. This does not affect command line maven builds and is > confirmed to be a [bug in > Eclipse|https://bugs.eclipse.org/bugs/show_bug.cgi?id=497905#c1]. The fix is > scheduled only for the next release, so all of us using Eclipse now will have > that error. > Unless we fix it in Hadoop code, which makes sense to me as it appears as a > warning in any case.
[jira] [Created] (HDFS-10868) Remove stray references to DFS_HDFS_BLOCKS_METADATA_ENABLED
Andrew Wang created HDFS-10868: -- Summary: Remove stray references to DFS_HDFS_BLOCKS_METADATA_ENABLED Key: HDFS-10868 URL: https://issues.apache.org/jira/browse/HDFS-10868 Project: Hadoop HDFS Issue Type: Improvement Affects Versions: 3.0.0-alpha1 Reporter: Andrew Wang Assignee: Andrew Wang Priority: Trivial We missed a few stray references to this config key when removing this API, let's clean it up.
[jira] [Updated] (HDFS-10868) Remove stray references to DFS_HDFS_BLOCKS_METADATA_ENABLED
[ https://issues.apache.org/jira/browse/HDFS-10868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-10868: --- Attachment: HDFS-10868.001.patch Patch attached, trivial. > Remove stray references to DFS_HDFS_BLOCKS_METADATA_ENABLED > --- > > Key: HDFS-10868 > URL: https://issues.apache.org/jira/browse/HDFS-10868 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Andrew Wang >Priority: Trivial > Attachments: HDFS-10868.001.patch > > > We missed a few stray references to this config key when removing this API, > let's clean it up.
[jira] [Updated] (HDFS-10868) Remove stray references to DFS_HDFS_BLOCKS_METADATA_ENABLED
[ https://issues.apache.org/jira/browse/HDFS-10868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-10868: --- Status: Patch Available (was: Open) > Remove stray references to DFS_HDFS_BLOCKS_METADATA_ENABLED > --- > > Key: HDFS-10868 > URL: https://issues.apache.org/jira/browse/HDFS-10868 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Andrew Wang >Priority: Trivial > Attachments: HDFS-10868.001.patch > > > We missed a few stray references to this config key when removing this API, > let's clean it up.
[jira] [Commented] (HDFS-9895) Remove unnecessary conf cache from DataNode
[ https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497335#comment-15497335 ] Hadoop QA commented on HDFS-9895: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 58s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 50s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 37s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 
{color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 27s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 276 unchanged - 17 fixed = 277 total (was 293) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 64m 15s{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_111. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}176m 7s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_101 Failed junit tests | hadoop.hdfs.security.TestDelegationTokenForProxyUser | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:b59b8b7 | | JIRA Issue | HDFS-9895 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12828865/HDFS-9895-branch-2.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 94c9fc92408d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git re
[jira] [Updated] (HDFS-10823) Implement HttpFSFileSystem#listStatusIterator
[ https://issues.apache.org/jira/browse/HDFS-10823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-10823: --- Attachment: HDFS-10823.006.patch One more rev. Fixed the missing break and used the new constants. For the null checks, the rest of the JSON parsing logic in HttpFSServer doesn't do any for required parameters, so I think this behavior is in line with the rest of the code. For the suppression, I copy-pasted this style from the code blocks right above, so I would prefer to leave it as is. > Implement HttpFSFileSystem#listStatusIterator > - > > Key: HDFS-10823 > URL: https://issues.apache.org/jira/browse/HDFS-10823 > Project: Hadoop HDFS > Issue Type: Improvement > Components: httpfs >Affects Versions: 2.6.4 >Reporter: Andrew Wang >Assignee: Andrew Wang > Attachments: HDFS-10823.001.patch, HDFS-10823.002.patch, > HDFS-10823.003.patch, HDFS-10823.004.patch, HDFS-10823.005.patch, > HDFS-10823.006.patch > > > Let's expose the same functionality added in HDFS-10784 for WebHDFS in HttpFS > too. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
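The pattern that HttpFSFileSystem#listStatusIterator exposes — fetching a large directory listing page by page with a "start after" token rather than in one response — can be sketched as below. This is a minimal, self-contained simulation; fetchPage, listAll, and the string-based page token are hypothetical stand-ins for illustration, not the actual HttpFS API.

```java
import java.util.*;

public class PagedListingDemo {
    // Hypothetical server-side page fetch: returns up to pageSize names
    // that come after 'startAfter' (null means start from the beginning).
    static List<String> fetchPage(List<String> all, String startAfter, int pageSize) {
        List<String> page = new ArrayList<>();
        boolean started = (startAfter == null);
        for (String name : all) {
            if (!started) {
                started = name.equals(startAfter); // skip until the token
                continue;
            }
            if (page.size() == pageSize) break;
            page.add(name);
        }
        return page;
    }

    // Client-side iteration: walk the directory page by page instead of
    // holding the whole listing in one response -- the idea behind
    // listStatusIterator.
    static List<String> listAll(List<String> server, int pageSize) {
        List<String> out = new ArrayList<>();
        String startAfter = null;
        while (true) {
            List<String> page = fetchPage(server, startAfter, pageSize);
            if (page.isEmpty()) break;
            out.addAll(page);
            startAfter = page.get(page.size() - 1); // resume after the last entry
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> dir = Arrays.asList("a", "b", "c", "d", "e");
        System.out.println(listAll(dir, 2)); // [a, b, c, d, e]
    }
}
```

The client never needs more than one page in memory at a time, which is the point of the iterator-based listing.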
[jira] [Updated] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order
[ https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangjin Lee updated HDFS-10301: --- Target Version/s: 2.6.6 (was: 2.6.5) Moving this issue to 2.6.6. Please move back if you feel otherwise. > BlockReport retransmissions may lead to storages falsely being declared > zombie if storage report processing happens out of order > > > Key: HDFS-10301 > URL: https://issues.apache.org/jira/browse/HDFS-10301 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.6.1 >Reporter: Konstantin Shvachko >Assignee: Vinitha Reddy Gankidi >Priority: Critical > Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, > HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, > HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, > HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.011.patch, > HDFS-10301.012.patch, HDFS-10301.013.patch, HDFS-10301.014.patch, > HDFS-10301.branch-2.7.patch, HDFS-10301.branch-2.patch, > HDFS-10301.sample.patch, zombieStorageLogs.rtf > > > When the NameNode is busy, a DataNode can time out sending a block report, so it > sends the block report again. The NameNode, while processing these two reports > at the same time, can interleave processing storages from different reports. > This screws up the blockReportId field, which makes the NameNode think that some > storages are zombie. Replicas from zombie storages are immediately removed, > causing missing blocks.
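The interleaving described above can be illustrated with a small, self-contained simulation. All names here (processStorage, findZombies, the report-id stamps) are hypothetical stand-ins for the NameNode's per-storage bookkeeping, not actual HDFS code:

```java
import java.util.*;

public class ZombieStorageDemo {
    // NN-side state: storage -> id of the last block report that touched it.
    static Map<String, Long> lastReportId = new HashMap<>();

    static void processStorage(String storage, long reportId) {
        lastReportId.put(storage, reportId);
    }

    // After finishing a report, storages not stamped with its id look "zombie".
    static List<String> findZombies(long currentReportId) {
        List<String> zombies = new ArrayList<>();
        for (Map.Entry<String, Long> e : lastReportId.entrySet()) {
            if (e.getValue() != currentReportId) zombies.add(e.getKey());
        }
        return zombies;
    }

    public static void main(String[] args) {
        // The DataNode has two storages. Its original report (id 1) timed out,
        // so a retransmission (id 2) arrives while id 1 is still being processed,
        // and the per-storage chunks of the two reports interleave.
        processStorage("s1", 1); // from the stale report
        processStorage("s2", 2); // from the retransmitted report
        processStorage("s1", 2);
        processStorage("s2", 1); // stale report overwrites s2's stamp last
        // NameNode finishes report 2 and checks for zombies:
        System.out.println(findZombies(2)); // [s2] -- a live storage declared zombie
    }
}
```

With strictly ordered processing, both storages would end up stamped with id 2 and nothing would be declared zombie; the interleaving alone produces the false positive.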
[jira] [Updated] (HDFS-9500) datanodesSoftwareVersions map may be counted wrong during a rolling upgrade
[ https://issues.apache.org/jira/browse/HDFS-9500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangjin Lee updated HDFS-9500: -- Target Version/s: 2.7.4, 2.6.6 (was: 2.6.5, 2.7.4) Moving this issue to 2.6.6. Please move back if you feel otherwise. > datanodesSoftwareVersions map may be counted wrong during a rolling upgrade > - > > Key: HDFS-9500 > URL: https://issues.apache.org/jira/browse/HDFS-9500 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.1, 2.6.2 >Reporter: Phil Yang >Assignee: Phil Yang > Attachments: 9500-v1.patch > > > While rolling upgrading, the namenode's website overview will report that there > are two versions of datanodes in the cluster; for example, 2.6.0 has x nodes and > 2.6.2 has y nodes. However, sometimes when I stop a datanode in the old version > and start a new-version one, the namenode only increases the count for the new > version but does not decrease the count for the old version, so the total x+y > becomes larger than the number of datanodes. Even after all datanodes are upgraded, > there will still be messages saying that several datanodes are on the old > version, and I must run hdfs dfsadmin -refreshNodes to clear them. > I think this issue is caused by DatanodeManager.registerDatanode. If nodeS in the > old version is not alive because it was shut down, it will not pass > shouldCountVersion, so the count for the old version is not decreased. But this > method only judges the heartbeat status and isAlive at that moment; if the > namenode has not yet removed this node (which would decrement the version map) > and the node restarts with the new version, the decrementVersionCount for this > node will never be executed. > So the simplest fix is to always recount the version map > in registerDatanode, since it is not a heavy operation.
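The proposed fix — rebuilding the version counts from the authoritative node list on every registration instead of maintaining them with paired increment/decrement calls — can be sketched as follows. The recount method is a hypothetical stand-in for the recomputation inside DatanodeManager.registerDatanode, not the actual patch:

```java
import java.util.*;

public class VersionMapDemo {
    // Hypothetical stand-in for rebuilding datanodesSoftwareVersions:
    // derive the counts from the current live-node versions each time,
    // so a missed decrement can never leave a stale entry behind.
    static Map<String, Integer> recount(Collection<String> liveNodeVersions) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String v : liveNodeVersions) {
            counts.merge(v, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // After the last 2.6.0 node restarts as 2.6.2, rebuilding from the
        // node list yields the correct totals with no stale 2.6.0 entry.
        List<String> live = Arrays.asList("2.6.2", "2.6.2", "2.6.0");
        System.out.println(recount(live)); // {2.6.0=1, 2.6.2=2}
    }
}
```

The trade-off is an O(number of datanodes) scan per registration, which — as the reporter notes — is cheap relative to how rarely registrations occur.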
[jira] [Commented] (HDFS-10495) Block should be marked as missing if all the replicas are on Decommissioned nodes.
[ https://issues.apache.org/jira/browse/HDFS-10495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497386#comment-15497386 ] Chris Trezzo commented on HDFS-10495: - Moving this issue to 2.6.6. Please move back if you feel otherwise. > Block should be marked as missing if all the replicas are on > Decommissioned nodes. > -- > > Key: HDFS-10495 > URL: https://issues.apache.org/jira/browse/HDFS-10495 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.8.0 >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah > > As discussed on HDFS-8872, we should mark a block as missing if all the > replicas are on decommissioned nodes, since we can take the decommissioned nodes > out of rotation at any time. > We have seen multiple cases where all the replicas land on decommissioned > nodes. > After HDFS-7933, it is not marked as missing.
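The requested check can be sketched as a predicate over replica locations: a block should count as missing when no replica lives on an in-service node. This is a minimal illustration under assumed names (State, isMissing), not the NameNode's actual replica-state machinery:

```java
import java.util.*;

public class MissingBlockDemo {
    enum State { IN_SERVICE, DECOMMISSIONED }

    // A block is effectively missing if no replica lives on an in-service
    // node, since decommissioned nodes may be taken out of rotation any time.
    static boolean isMissing(List<State> replicaNodeStates) {
        for (State s : replicaNodeStates) {
            if (s == State.IN_SERVICE) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // All replicas on decommissioned nodes -> should be reported missing.
        System.out.println(isMissing(Arrays.asList(
                State.DECOMMISSIONED, State.DECOMMISSIONED))); // true
        // One live replica is enough to keep the block readable.
        System.out.println(isMissing(Arrays.asList(
                State.DECOMMISSIONED, State.IN_SERVICE)));     // false
    }
}
```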
[jira] [Updated] (HDFS-10495) Block should be marked as missing if all the replicas are on Decommissioned nodes.
[ https://issues.apache.org/jira/browse/HDFS-10495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Trezzo updated HDFS-10495: Target Version/s: 2.8.0, 2.7.4, 2.6.6 (was: 2.8.0, 2.6.5, 2.7.4) > Block should be marked as missing if all the replicas are on > Decommissioned nodes. > -- > > Key: HDFS-10495 > URL: https://issues.apache.org/jira/browse/HDFS-10495 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.8.0 >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah > > As discussed on HDFS-8872, we should mark a block as missing if all the > replicas are on decommissioned nodes, since we can take the decommissioned nodes > out of rotation at any time. > We have seen multiple cases where all the replicas land on decommissioned > nodes. > After HDFS-7933, it is not marked as missing.
[jira] [Updated] (HDFS-8870) Lease is leaked on write failure
[ https://issues.apache.org/jira/browse/HDFS-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Trezzo updated HDFS-8870: --- Target Version/s: 2.7.4, 2.6.6 (was: 2.6.5, 2.7.4) Moving this issue to 2.6.6. Please move back if you feel otherwise. > Lease is leaked on write failure > > > Key: HDFS-8870 > URL: https://issues.apache.org/jira/browse/HDFS-8870 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.6.0 >Reporter: Rushabh S Shah >Assignee: Daryn Sharp > > Creating this ticket on behalf of [~daryn] > We've seen this in one of our clusters. When a long-running process has a > write failure, the lease is leaked and gets renewed until the token is > expired.
[jira] [Updated] (HDFS-10857) Rolling upgrade can make data unavailable when the cluster has many failed volumes
[ https://issues.apache.org/jira/browse/HDFS-10857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangjin Lee updated HDFS-10857: --- Target Version/s: 2.6.6 (was: 2.6.5) Moving this issue to 2.6.6. Please move back if you feel otherwise. > Rolling upgrade can make data unavailable when the cluster has many failed > volumes > -- > > Key: HDFS-10857 > URL: https://issues.apache.org/jira/browse/HDFS-10857 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.6.4 >Reporter: Kihwal Lee >Priority: Critical > > When the marker file or trash dir is created or removed during heartbeat > response processing, an {{IOException}} is thrown if the operation is attempted on a failed > volume. This stops processing of the rest of the storage directories and of any > DNA commands that were part of the heartbeat response. > While this is happening, the block token key update does not happen, and all > read and write requests start to fail until the upgrade is finalized and the > DN receives a new key. All it takes is one failed volume. If there are three > such nodes in the cluster, it is very likely that some blocks cannot be read. > Unlike in the common missing-blocks scenarios, the NN has no idea, although the > effect is the same.
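The failure mode above — one bad volume throwing an IOException and aborting the whole per-directory loop — suggests isolating the exception per storage directory so the remaining directories and heartbeat commands are still processed. A minimal sketch of that pattern, with hypothetical names (DirAction, applyToAll) rather than the actual DataNode code:

```java
import java.io.IOException;
import java.util.*;

public class VolumeLoopDemo {
    // Hypothetical per-directory action that may fail on a bad volume.
    interface DirAction {
        void run(String dir) throws IOException;
    }

    // Isolate failures: a bad volume is recorded and the loop keeps going,
    // instead of one IOException aborting the rest of the processing.
    static List<String> applyToAll(List<String> dirs, DirAction action) {
        List<String> failed = new ArrayList<>();
        for (String dir : dirs) {
            try {
                action.run(dir);
            } catch (IOException e) {
                failed.add(dir); // report the failed volume later
            }
        }
        return failed;
    }

    public static void main(String[] args) {
        List<String> dirs = Arrays.asList("/data1", "/data2", "/data3");
        List<String> failed = applyToAll(dirs, dir -> {
            if (dir.equals("/data2")) throw new IOException("failed volume");
        });
        System.out.println(failed); // [/data2]
    }
}
```

With this shape, the bad volume no longer prevents the block token key update on the healthy directories.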
[jira] [Updated] (HDFS-10489) Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones
[ https://issues.apache.org/jira/browse/HDFS-10489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-10489: - Attachment: HDFS-10489.06.patch Thanks Andrew, good point! As long as DD let the old key work, we really should remove all references as a good example. :) Patch 6 to make this cleaner. > Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones > --- > > Key: HDFS-10489 > URL: https://issues.apache.org/jira/browse/HDFS-10489 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 2.6.4 >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Minor > Attachments: HDFS-10489.01.patch, HDFS-10489.02.patch, > HDFS-10489.03.patch, HDFS-10489.04.patch, HDFS-10489.05.patch, > HDFS-10489.06.patch > > > When working on HADOOP-13155, we > [discussed|https://issues.apache.org/jira/browse/HADOOP-13155?focusedCommentId=15315117&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15315117] > and concluded that we should use the common config key for the key provider uri. > We can deprecate the dfs. key for 3.0.0.
[jira] [Commented] (HDFS-10823) Implement HttpFSFileSystem#listStatusIterator
[ https://issues.apache.org/jira/browse/HDFS-10823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497413#comment-15497413 ] Xiao Chen commented on HDFS-10823: -- Makes sense, +1 pending jenkins. Thanks for the great work, Andrew! > Implement HttpFSFileSystem#listStatusIterator > - > > Key: HDFS-10823 > URL: https://issues.apache.org/jira/browse/HDFS-10823 > Project: Hadoop HDFS > Issue Type: Improvement > Components: httpfs >Affects Versions: 2.6.4 >Reporter: Andrew Wang >Assignee: Andrew Wang > Attachments: HDFS-10823.001.patch, HDFS-10823.002.patch, > HDFS-10823.003.patch, HDFS-10823.004.patch, HDFS-10823.005.patch, > HDFS-10823.006.patch > > > Let's expose the same functionality added in HDFS-10784 for WebHDFS in HttpFS > too.
[jira] [Updated] (HDFS-10866) Fix Eclipse Java 8 compile errors related to generic parameters.
[ https://issues.apache.org/jira/browse/HDFS-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-10866: --- Attachment: HDFS-10866.02.patch > Fix Eclipse Java 8 compile errors related to generic parameters. > > > Key: HDFS-10866 > URL: https://issues.apache.org/jira/browse/HDFS-10866 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-alpha1 >Reporter: Konstantin Shvachko >Assignee: Konstantin Shvachko > Attachments: HDFS-10866.01.patch, HDFS-10866.02.patch, IntelliJ.png > > > Compilation with Java 8 in Eclipse returns errors, which are related to the > use of generics. This does not affect command-line maven builds and is > confirmed to be a [bug in > Eclipse|https://bugs.eclipse.org/bugs/show_bug.cgi?id=497905#c1]. The fix is > scheduled only for the next release, so all of us using Eclipse now will see > that error unless we fix it in the Hadoop code, which makes sense to me since > it appears as only a warning in any case.
[jira] [Updated] (HDFS-9500) datanodesSoftwareVersions map may be counted wrong during a rolling upgrade
[ https://issues.apache.org/jira/browse/HDFS-9500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravi Prakash updated HDFS-9500: --- Target Version/s: 2.7.4 (was: 2.7.4, 2.6.6) > datanodesSoftwareVersions map may be counted wrong during a rolling upgrade > - > > Key: HDFS-9500 > URL: https://issues.apache.org/jira/browse/HDFS-9500 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.1, 2.6.2 >Reporter: Phil Yang >Assignee: Phil Yang > Attachments: 9500-v1.patch > > > While rolling upgrading, the namenode's website overview will report that there > are two versions of datanodes in the cluster; for example, 2.6.0 has x nodes and > 2.6.2 has y nodes. However, sometimes when I stop a datanode in the old version > and start a new-version one, the namenode only increases the count for the new > version but does not decrease the count for the old version, so the total x+y > becomes larger than the number of datanodes. Even after all datanodes are upgraded, > there will still be messages saying that several datanodes are on the old > version, and I must run hdfs dfsadmin -refreshNodes to clear them. > I think this issue is caused by DatanodeManager.registerDatanode. If nodeS in the > old version is not alive because it was shut down, it will not pass > shouldCountVersion, so the count for the old version is not decreased. But this > method only judges the heartbeat status and isAlive at that moment; if the > namenode has not yet removed this node (which would decrement the version map) > and the node restarts with the new version, the decrementVersionCount for this > node will never be executed. > So the simplest fix is to always recount the version map > in registerDatanode, since it is not a heavy operation.
[jira] [Commented] (HDFS-10777) DataNode should report&remove volume failures if DU cannot access files
[ https://issues.apache.org/jira/browse/HDFS-10777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497479#comment-15497479 ] Wei-Chiu Chuang commented on HDFS-10777: I see. Thanks [~ajisakaa]. Didn't realize a disk could behave that way. In that case, let's close this jira as invalid. > DataNode should report&remove volume failures if DU cannot access files > --- > > Key: HDFS-10777 > URL: https://issues.apache.org/jira/browse/HDFS-10777 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.8.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Attachments: HDFS-10777.01.patch > > > HADOOP-12973 refactored DU and makes it pluggable. The refactoring has a > side effect: if DU encounters an exception, the exception is caught, > logged and ignored, essentially fixing HDFS-9908 (in which runaway > exceptions prevent DataNodes from handshaking with NameNodes). > However, this "fix" is not good, in the sense that if the disk is bad, no > immediate action is taken by the DataNode other than logging the exception. > The existing {{FsDatasetSpi#checkDataDir}} has been reduced to blindly checking only a > small number of directories. If a disk goes bad, it is often the case that > only a few files are bad initially, and by checking only a small number > of directories it is easy to overlook the degraded disk. > I propose: in addition to logging the exception, the DataNode should proactively > verify that the files are not accessible, remove the volume, and make the failure > visible by showing it in JMX, so that administrators can spot the failure via > monitoring systems. > A different fix, based on HDFS-9908, is needed before Hadoop 2.8.0
[jira] [Resolved] (HDFS-10777) DataNode should report&remove volume failures if DU cannot access files
[ https://issues.apache.org/jira/browse/HDFS-10777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang resolved HDFS-10777. Resolution: Invalid Closing this jira as invalid; I'll file an improvement jira to add logging or a metric when DataNode disks become flaky. > DataNode should report&remove volume failures if DU cannot access files > --- > > Key: HDFS-10777 > URL: https://issues.apache.org/jira/browse/HDFS-10777 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.8.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Attachments: HDFS-10777.01.patch > > > HADOOP-12973 refactored DU and makes it pluggable. The refactoring has a > side effect: if DU encounters an exception, the exception is caught, > logged and ignored, essentially fixing HDFS-9908 (in which runaway > exceptions prevent DataNodes from handshaking with NameNodes). > However, this "fix" is not good, in the sense that if the disk is bad, no > immediate action is taken by the DataNode other than logging the exception. > The existing {{FsDatasetSpi#checkDataDir}} has been reduced to blindly checking only a > small number of directories. If a disk goes bad, it is often the case that > only a few files are bad initially, and by checking only a small number > of directories it is easy to overlook the degraded disk. > I propose: in addition to logging the exception, the DataNode should proactively > verify that the files are not accessible, remove the volume, and make the failure > visible by showing it in JMX, so that administrators can spot the failure via > monitoring systems. > A different fix, based on HDFS-9908, is needed before Hadoop 2.8.0
[jira] [Updated] (HDFS-10713) Throttle FsNameSystem lock warnings
[ https://issues.apache.org/jira/browse/HDFS-10713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDFS-10713: -- Attachment: HDFS-10713.009.patch Thanks Arpit. I have addressed the latest comments in the v9 patch. > Throttle FsNameSystem lock warnings > --- > > Key: HDFS-10713 > URL: https://issues.apache.org/jira/browse/HDFS-10713 > Project: Hadoop HDFS > Issue Type: Bug > Components: logging, namenode >Reporter: Arpit Agarwal >Assignee: Hanisha Koneru > Attachments: HDFS-10713.000.patch, HDFS-10713.001.patch, > HDFS-10713.002.patch, HDFS-10713.003.patch, HDFS-10713.004.patch, > HDFS-10713.005.patch, HDFS-10713.006.patch, HDFS-10713.007.patch, > HDFS-10713.008.patch, HDFS-10713.009.patch > > > The NameNode logs a message if the FSNamesystem write lock is held by a > thread for over 1 second. These messages can be throttled to at most one > per x minutes to avoid potentially filling up the NN logs. We can also log the > number of suppressed notices since the last logged message.
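The throttling described in this issue — log at most one warning per interval and carry a count of the warnings suppressed in between — can be sketched as a small helper. This is an illustrative sketch (LockWarningThrottler and offer are assumed names), not the code from the attached patches:

```java
public class LockWarningThrottler {
    private final long intervalMs;
    private long lastLoggedMs;
    private long suppressed;
    private boolean loggedOnce;

    LockWarningThrottler(long intervalMs) {
        this.intervalMs = intervalMs;
    }

    // Returns the message to log, or null if this warning should be
    // suppressed. An emitted message carries the number of warnings
    // suppressed since the last one that was actually logged.
    String offer(String warning, long nowMs) {
        if (loggedOnce && nowMs - lastLoggedMs < intervalMs) {
            suppressed++;
            return null; // within the interval: swallow, but count it
        }
        String msg = warning + " (suppressed since last report: " + suppressed + ")";
        lastLoggedMs = nowMs;
        suppressed = 0;
        loggedOnce = true;
        return msg;
    }

    public static void main(String[] args) {
        LockWarningThrottler t = new LockWarningThrottler(60_000);
        System.out.println(t.offer("lock held 1.2s", 0));      // logged
        System.out.println(t.offer("lock held 1.5s", 10_000)); // null (suppressed)
        System.out.println(t.offer("lock held 2.0s", 70_000)); // logged, notes 1 suppressed
    }
}
```

The caller invokes offer on every long-hold detection and writes to the log only when it returns non-null, bounding the log volume regardless of how often the lock is held too long.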
[jira] [Commented] (HDFS-10823) Implement HttpFSFileSystem#listStatusIterator
[ https://issues.apache.org/jira/browse/HDFS-10823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497525#comment-15497525 ] Hadoop QA commented on HDFS-10823: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 35s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | 
{color:green} 1m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 55s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 35s{color} | {color:orange} root: The patch generated 7 new + 909 unchanged - 1 fixed = 916 total (was 910) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 29s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 55s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 28s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 61m 55s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.net.TestDNS | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-10823 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12828902/HDFS-10823.006.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 27225a2280f1 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / b09a03c | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/16777/artifact/patchprocess/diff-checkstyle-root.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/16777/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results |
[jira] [Commented] (HDFS-10868) Remove stray references to DFS_HDFS_BLOCKS_METADATA_ENABLED
[ https://issues.apache.org/jira/browse/HDFS-10868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497577#comment-15497577 ] Hadoop QA commented on HDFS-10868: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 6s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | 
{color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 52s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 37s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 87m 35s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestFileAppend | | | hadoop.hdfs.TestDFSShell | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-10868 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12828898/HDFS-10868.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 82cad20d30c9 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / b09a03c | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/16776/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/16776/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project
[jira] [Updated] (HDFS-10713) Throttle FsNameSystem lock warnings
[ https://issues.apache.org/jira/browse/HDFS-10713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDFS-10713: -- Attachment: HDFS-10713.010.patch > Throttle FsNameSystem lock warnings > --- > > Key: HDFS-10713 > URL: https://issues.apache.org/jira/browse/HDFS-10713 > Project: Hadoop HDFS > Issue Type: Bug > Components: logging, namenode >Reporter: Arpit Agarwal >Assignee: Hanisha Koneru > Attachments: HDFS-10713.000.patch, HDFS-10713.001.patch, > HDFS-10713.002.patch, HDFS-10713.003.patch, HDFS-10713.004.patch, > HDFS-10713.005.patch, HDFS-10713.006.patch, HDFS-10713.007.patch, > HDFS-10713.008.patch, HDFS-10713.009.patch, HDFS-10713.010.patch > > > The NameNode logs a message if the FSNamesystem write lock is held by a > thread for over 1 second. These messages can be throttled to at most one > per x minutes to avoid potentially filling up NN logs. We can also log the > number of suppressed notices since the last log message. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
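The throttling scheme the HDFS-10713 description asks for (emit at most one warning per interval, and report how many were suppressed since the last one) can be sketched roughly as follows. This is a minimal illustration only, not the attached patch; the `LogThrottler` class and its method names are invented for this example:

```java
// Minimal sketch of the idea in HDFS-10713: allow at most one warning per
// interval and report how many warnings were suppressed since the last one
// was emitted. All names here are illustrative, not from the actual patch.
public class LogThrottler {
    private final long intervalMs;
    private long lastLogTimeMs = Long.MIN_VALUE; // sentinel: nothing logged yet
    private long suppressed = 0;

    public LogThrottler(long intervalMs) {
        this.intervalMs = intervalMs;
    }

    /**
     * Returns the number of messages suppressed since the last emitted
     * warning (>= 0) when the caller should log now, or -1 when this
     * message should be suppressed.
     */
    public synchronized long shouldLog(long nowMs) {
        if (lastLogTimeMs == Long.MIN_VALUE
            || nowMs - lastLogTimeMs >= intervalMs) {
            long wasSuppressed = suppressed;
            suppressed = 0;
            lastLogTimeMs = nowMs;
            return wasSuppressed;
        }
        suppressed++;
        return -1;
    }
}
```

A caller that detects the write lock was held too long would consult `shouldLog` with a monotonic timestamp and, when the returned count is non-negative, append something like "(N warnings suppressed)" to the emitted message.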
[jira] [Commented] (HDFS-10823) Implement HttpFSFileSystem#listStatusIterator
[ https://issues.apache.org/jira/browse/HDFS-10823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497604#comment-15497604 ] Andrew Wang commented on HDFS-10823: Test failure looks unrelated, already tracked by HADOOP-13101. Will commit shortly. > Implement HttpFSFileSystem#listStatusIterator > - > > Key: HDFS-10823 > URL: https://issues.apache.org/jira/browse/HDFS-10823 > Project: Hadoop HDFS > Issue Type: Improvement > Components: httpfs >Affects Versions: 2.6.4 >Reporter: Andrew Wang >Assignee: Andrew Wang > Attachments: HDFS-10823.001.patch, HDFS-10823.002.patch, > HDFS-10823.003.patch, HDFS-10823.004.patch, HDFS-10823.005.patch, > HDFS-10823.006.patch > > > Let's expose the same functionality added in HDFS-10784 for WebHDFS in HttpFS > too.
[jira] [Commented] (HDFS-10868) Remove stray references to DFS_HDFS_BLOCKS_METADATA_ENABLED
[ https://issues.apache.org/jira/browse/HDFS-10868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497607#comment-15497607 ] Andrew Wang commented on HDFS-10868: Both test failures are related to port binding and look unrelated to the patch. > Remove stray references to DFS_HDFS_BLOCKS_METADATA_ENABLED > --- > > Key: HDFS-10868 > URL: https://issues.apache.org/jira/browse/HDFS-10868 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Andrew Wang >Priority: Trivial > Attachments: HDFS-10868.001.patch > > > We missed a few stray references to this config key when removing this API, > let's clean it up.
[jira] [Commented] (HDFS-10637) Modifications to remove the assumption that FsVolumes are backed by java.io.File.
[ https://issues.apache.org/jira/browse/HDFS-10637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497608#comment-15497608 ] Lei (Eddy) Xu commented on HDFS-10637: -- Hi, [~virajith] Thanks for the patch. * Please add the ASF license to {{FsVolumeImplBuilder}} * {{TestNameNodePrunesMissingStorages.java}}: {code} StorageLocation volumeDirectoryToRemove = null; {code} Change to {{volumeLocationToRemove}}? {code} FileUtil.fullyDelete( new File(volumeDirectoryToRemove.getFile().toString())); {code} {{getFile()}} already returns a {{File}}; you don't need to create a new one. * In {{FsVolumeImpl}}, can you remove {{this.currentDir}}? * {{FsDatasetSpi#getFileInFinalizedDir()}} still returns a File. Is there a way to eliminate it? * Is {{FsVolumeSpi#containsPath}} only used by tests? I feel that it still assumes a file-based {{FsVolume}}. It is a large patch. Please allow more comments to come later. Thanks. > Modifications to remove the assumption that FsVolumes are backed by > java.io.File. > - > > Key: HDFS-10637 > URL: https://issues.apache.org/jira/browse/HDFS-10637 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, fs >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-10637.001.patch, HDFS-10637.002.patch, > HDFS-10637.003.patch, HDFS-10637.004.patch, HDFS-10637.005.patch, > HDFS-10637.006.patch, HDFS-10637.007.patch, HDFS-10637.008.patch > > > Modifications to {{FsVolumeSpi}} and {{FsVolumeImpl}} to remove references to > {{java.io.File}}.
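The redundant wrapping flagged in the review above can be shown in isolation. This is a generic sketch, not HDFS code; `getFile()` here is a hypothetical stand-in for any accessor that already returns a `java.io.File`:

```java
import java.io.File;

// Generic illustration of the review comment: when an accessor already
// returns a java.io.File, round-tripping it through toString() and a
// new File(...) is redundant. getFile() is a hypothetical stand-in.
public class RedundantWrapping {
    public static File getFile() {
        return new File("/data/dn/current");
    }

    public static void main(String[] args) {
        File roundTripped = new File(getFile().toString()); // redundant round trip
        File direct = getFile();                            // simpler: use it directly
        // Both denote the same abstract pathname.
        System.out.println(roundTripped.equals(direct)); // prints true
    }
}
```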
[jira] [Updated] (HDFS-10823) Implement HttpFSFileSystem#listStatusIterator
[ https://issues.apache.org/jira/browse/HDFS-10823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-10823: --- Resolution: Fixed Fix Version/s: 3.0.0-alpha2 2.9.0 Status: Resolved (was: Patch Available) Committed to trunk and branch-2. Thank you Xiao for the prompt and detailed reviews! > Implement HttpFSFileSystem#listStatusIterator > - > > Key: HDFS-10823 > URL: https://issues.apache.org/jira/browse/HDFS-10823 > Project: Hadoop HDFS > Issue Type: Improvement > Components: httpfs >Affects Versions: 2.6.4 >Reporter: Andrew Wang >Assignee: Andrew Wang > Fix For: 2.9.0, 3.0.0-alpha2 > > Attachments: HDFS-10823.001.patch, HDFS-10823.002.patch, > HDFS-10823.003.patch, HDFS-10823.004.patch, HDFS-10823.005.patch, > HDFS-10823.006.patch > > > Let's expose the same functionality added in HDFS-10784 for WebHDFS in HttpFS > too.
[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order
[ https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497634#comment-15497634 ] Arpit Agarwal commented on HDFS-10301: -- [~shv], I am referring to this delta. This workaround just bypasses the leaseID check. {code} if (node.leaseId == 0) { - LOG.warn("BR lease 0x{} is not valid for DN {}, because the DN " + + LOG.warn("BR lease 0x{} is not found for DN {}, because the DN " + "is not in the pending set.", Long.toHexString(id), dn.getDatanodeUuid()); - return false; + return true; } {code} > BlockReport retransmissions may lead to storages falsely being declared > zombie if storage report processing happens out of order > > > Key: HDFS-10301 > URL: https://issues.apache.org/jira/browse/HDFS-10301 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.6.1 >Reporter: Konstantin Shvachko >Assignee: Vinitha Reddy Gankidi >Priority: Critical > Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, > HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, > HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, > HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.011.patch, > HDFS-10301.012.patch, HDFS-10301.013.patch, HDFS-10301.014.patch, > HDFS-10301.branch-2.7.patch, HDFS-10301.branch-2.patch, > HDFS-10301.sample.patch, zombieStorageLogs.rtf > > > When the NameNode is busy, a DataNode can time out sending a block report. Then it > sends the block report again. The NameNode, while processing these two reports > at the same time, can interleave processing storages from different reports. > This screws up the blockReportId field, which makes the NameNode think that some > storages are zombie. Replicas from zombie storages are immediately removed, > causing missing blocks.
[jira] [Commented] (HDFS-10868) Remove stray references to DFS_HDFS_BLOCKS_METADATA_ENABLED
[ https://issues.apache.org/jira/browse/HDFS-10868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497661#comment-15497661 ] Xiao Chen commented on HDFS-10868: -- Had an offline chat with Andrew. +1 pending removal of reference in the test. Thanks for the work, Andrew! > Remove stray references to DFS_HDFS_BLOCKS_METADATA_ENABLED > --- > > Key: HDFS-10868 > URL: https://issues.apache.org/jira/browse/HDFS-10868 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Andrew Wang >Priority: Trivial > Attachments: HDFS-10868.001.patch > > > We missed a few stray references to this config key when removing this API, > let's clean it up.
[jira] [Updated] (HDFS-10868) Remove stray references to DFS_HDFS_BLOCKS_METADATA_ENABLED
[ https://issues.apache.org/jira/browse/HDFS-10868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-10868: --- Attachment: HDFS-10868.002.patch Yep good catch, one more rev. Thanks Xiao for reviewing! > Remove stray references to DFS_HDFS_BLOCKS_METADATA_ENABLED > --- > > Key: HDFS-10868 > URL: https://issues.apache.org/jira/browse/HDFS-10868 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Andrew Wang >Priority: Trivial > Attachments: HDFS-10868.001.patch, HDFS-10868.002.patch > > > We missed a few stray references to this config key when removing this API, > let's clean it up.
[jira] [Commented] (HDFS-10823) Implement HttpFSFileSystem#listStatusIterator
[ https://issues.apache.org/jira/browse/HDFS-10823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497667#comment-15497667 ] Hudson commented on HDFS-10823: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10450 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10450/]) HDFS-10823. Implement HttpFSFileSystem#listStatusIterator. (wang: rev 8a40953058d50d421d62b71067a13b626b3cba1f) * (edit) hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java * (edit) hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java * (edit) hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java * (edit) hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java * (edit) hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSUtils.java * (edit) hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java * (edit) hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java > Implement HttpFSFileSystem#listStatusIterator > - > > Key: HDFS-10823 > URL: https://issues.apache.org/jira/browse/HDFS-10823 > Project: Hadoop HDFS > Issue Type: Improvement > Components: httpfs >Affects Versions: 2.6.4 >Reporter: Andrew Wang >Assignee: Andrew Wang > Fix For: 2.9.0, 3.0.0-alpha2 > > 
Attachments: HDFS-10823.001.patch, HDFS-10823.002.patch, > HDFS-10823.003.patch, HDFS-10823.004.patch, HDFS-10823.005.patch, > HDFS-10823.006.patch > > > Let's expose the same functionality added in HDFS-10784 for WebHDFS in HttpFS > too.
[jira] [Commented] (HDFS-10866) Fix Eclipse Java 8 compile errors related to generic parameters.
[ https://issues.apache.org/jira/browse/HDFS-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497700#comment-15497700 ] Hadoop QA commented on HDFS-10866: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 38s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | 
{color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 43s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 26s{color} | {color:orange} root: The patch generated 1 new + 96 unchanged - 2 fixed = 97 total (was 98) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 58s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 58m 31s{color} | {color:green} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}106m 25s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-10866 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12828909/HDFS-10866.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 75cdeafcd3ef 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 0e68e14 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/16779/artifact/patchprocess/diff-checkstyle-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/16779/testReport/ | | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/16779/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. >
[jira] [Commented] (HDFS-10713) Throttle FsNameSystem lock warnings
[ https://issues.apache.org/jira/browse/HDFS-10713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497753#comment-15497753 ] Hadoop QA commented on HDFS-10713: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 37s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s{color} | 
{color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 28s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 194 unchanged - 3 fixed = 196 total (was 197) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 78m 23s{color} | {color:green} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 99m 7s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-10713 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12828915/HDFS-10713.009.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 071d92e9c5a1 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / f6f3a44 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/16780/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/16780/artifact/patchprocess/whitespace-eol.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/16780/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/16780/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Throttle FsNameSystem lock warnings > --- > > Key: HDFS-10713 > URL: https://issues.apache.org/jira/browse/HDFS-107
[jira] [Commented] (HDFS-10713) Throttle FsNameSystem lock warnings
[ https://issues.apache.org/jira/browse/HDFS-10713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497756#comment-15497756 ] Hadoop QA commented on HDFS-10713: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s{color} | 
{color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 194 unchanged - 3 fixed = 195 total (was 197) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 5s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 81m 15s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-10713 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12828918/HDFS-10713.010.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 709212a55191 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / f6f3a44 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/16781/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/16781/artifact/patchprocess/whitespace-eol.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/16781/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/16781/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/16781/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |
[jira] [Commented] (HDFS-10868) Remove stray references to DFS_HDFS_BLOCKS_METADATA_ENABLED
[ https://issues.apache.org/jira/browse/HDFS-10868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497827#comment-15497827 ]

Hadoop QA commented on HDFS-10868:
--

| (/) +1 overall |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 18s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| 0 | mvndep | 0m 6s | Maven dependency ordering for branch |
| +1 | mvninstall | 7m 5s | trunk passed |
| +1 | compile | 1m 24s | trunk passed |
| +1 | checkstyle | 0m 35s | trunk passed |
| +1 | mvnsite | 1m 24s | trunk passed |
| +1 | mvneclipse | 0m 24s | trunk passed |
| +1 | findbugs | 3m 29s | trunk passed |
| +1 | javadoc | 1m 22s | trunk passed |
| 0 | mvndep | 0m 7s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 27s | the patch passed |
| +1 | compile | 1m 43s | the patch passed |
| +1 | javac | 1m 43s | the patch passed |
| +1 | checkstyle | 0m 45s | the patch passed |
| +1 | mvnsite | 1m 45s | the patch passed |
| +1 | mvneclipse | 0m 20s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 3m 35s | the patch passed |
| +1 | javadoc | 1m 9s | the patch passed |
| +1 | unit | 0m 54s | hadoop-hdfs-client in the patch passed. |
| +1 | unit | 59m 27s | hadoop-hdfs in the patch passed. |
| +1 | asflicense | 0m 20s | The patch does not generate ASF License warnings. |
| | | 89m 2s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10868 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12828921/HDFS-10868.002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 983f693e45a1 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8a40953 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/16782/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/16782/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Remove stray references to DFS_HDFS_BLOCKS_METADATA_ENABLED
> ---
>
> Key: HDFS
[jira] [Updated] (HDFS-7343) HDFS smart storage management
[ https://issues.apache.org/jira/browse/HDFS-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kai Zheng updated HDFS-7343:

Assignee: Wei Zhou (was: Kai Zheng)

Description:
As discussed in HDFS-7285, it would be better to have a comprehensive and flexible storage policy engine considering file attributes, metadata, data temperature, storage type, EC codec, available hardware capabilities, user/application preference, etc.

Modified the title for the re-purpose. We'd like to extend this effort a bit and aim to work on a comprehensive solution that provides a smart storage management service for convenient, intelligent and effective use of erasure coding or replicas, the HDFS cache facility, HSM offerings, and all kinds of tools (balancer, mover, disk balancer and so on) in a large cluster.

was:
As discussed in HDFS-7285, it would be better to have a comprehensive and flexible storage policy engine considering file attributes, metadata, data temperature, storage type, EC codec, available hardware capabilities, user/application preference, etc.

Component/s: (was: namenode)

Summary: HDFS smart storage management (was: A comprehensive and flexible storage policy engine)

Modified the title for the re-purpose. We'd like to extend this effort a bit and aim to work on a comprehensive solution that provides a smart storage management service for convenient, intelligent and effective use of erasure coding or replicas, the HDFS cache facility, HSM offerings, and all kinds of tools (balancer, mover, disk balancer and so on) in a large cluster. Doing this as a standalone service, to avoid a big impact on existing NNs, was inspired by [~jingzhao] quite some time ago, along with many other valuable insights. [~zhouwei] will work on this; let's wait another week for the delayed design.
> HDFS smart storage management
> -
>
> Key: HDFS-7343
> URL: https://issues.apache.org/jira/browse/HDFS-7343
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Kai Zheng
> Assignee: Wei Zhou
>
> As discussed in HDFS-7285, it would be better to have a comprehensive and
> flexible storage policy engine considering file attributes, metadata, data
> temperature, storage type, EC codec, available hardware capabilities,
> user/application preference, etc.
> Modified the title for the re-purpose.
> We'd like to extend this effort a bit and aim to work on a comprehensive
> solution that provides a smart storage management service for convenient,
> intelligent and effective use of erasure coding or replicas, the HDFS cache
> facility, HSM offerings, and all kinds of tools (balancer, mover, disk
> balancer and so on) in a large cluster.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-9480) Expose nonDfsUsed via StorageTypeStats
[ https://issues.apache.org/jira/browse/HDFS-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brahma Reddy Battula updated HDFS-9480:
---

Attachment: HDFS-9480-002.patch

Fixed the findbugs issue.

> Expose nonDfsUsed via StorageTypeStats
>
>
> Key: HDFS-9480
> URL: https://issues.apache.org/jira/browse/HDFS-9480
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Brahma Reddy Battula
> Assignee: Brahma Reddy Battula
> Attachments: HDFS-9480-002.patch, HDFS-9480.patch
>
>
> Expose nonDfsUsed via StorageTypeStats. See the comment [here |
> https://issues.apache.org/jira/browse/HDFS-9038?focusedCommentId=15018761&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15018761]
> from arpit.
[jira] [Commented] (HDFS-9480) Expose nonDfsUsed via StorageTypeStats
[ https://issues.apache.org/jira/browse/HDFS-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15498071#comment-15498071 ]

Hadoop QA commented on HDFS-9480:
-

| (x) -1 overall |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 17s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| +1 | mvninstall | 8m 44s | trunk passed |
| +1 | compile | 0m 54s | trunk passed |
| +1 | checkstyle | 0m 27s | trunk passed |
| +1 | mvnsite | 1m 1s | trunk passed |
| +1 | mvneclipse | 0m 13s | trunk passed |
| +1 | findbugs | 1m 51s | trunk passed |
| +1 | javadoc | 0m 58s | trunk passed |
| +1 | mvninstall | 0m 53s | the patch passed |
| +1 | compile | 0m 51s | the patch passed |
| +1 | javac | 0m 51s | the patch passed |
| +1 | checkstyle | 0m 25s | the patch passed |
| +1 | mvnsite | 0m 57s | the patch passed |
| +1 | mvneclipse | 0m 10s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 2m 0s | the patch passed |
| +1 | javadoc | 0m 59s | the patch passed |
| +1 | unit | 63m 59s | hadoop-hdfs in the patch passed. |
| +1 | asflicense | 0m 19s | The patch does not generate ASF License warnings. |
| | | 86m 19s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-9480 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12828955/HDFS-9480-002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux b7b3ad8c0309 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 501a778 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/16784/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/16784/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Expose nonDfsUsed via StorageTypeStats
>
>
> Key: HDFS-9480
> URL: https://issues.apache.org/jira/browse/HDFS-9480
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Brahma Reddy Battula
> Assignee: Brahma Reddy Battula
> Attachments: HDFS-9480-002.patch, HDFS-9480.patch
>
>
> Expose nonDfsUsed via StorageTypeStats..See the comment [here |
> https://issues.apache.org/j
[jira] [Commented] (HDFS-10489) Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones
[ https://issues.apache.org/jira/browse/HDFS-10489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15498104#comment-15498104 ]

Hadoop QA commented on HDFS-10489:
--

| (x) -1 overall |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 20s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 12 new or modified test files. |
| 0 | mvndep | 1m 34s | Maven dependency ordering for branch |
| +1 | mvninstall | 7m 22s | trunk passed |
| +1 | compile | 6m 46s | trunk passed |
| +1 | checkstyle | 1m 34s | trunk passed |
| +1 | mvnsite | 3m 21s | trunk passed |
| +1 | mvneclipse | 1m 14s | trunk passed |
| +1 | findbugs | 5m 28s | trunk passed |
| +1 | javadoc | 2m 38s | trunk passed |
| 0 | mvndep | 0m 15s | Maven dependency ordering for patch |
| +1 | mvninstall | 3m 29s | the patch passed |
| +1 | compile | 8m 22s | the patch passed |
| +1 | javac | 8m 22s | the patch passed |
| +1 | checkstyle | 1m 39s | root: The patch generated 0 new + 808 unchanged - 6 fixed = 808 total (was 814) |
| +1 | mvnsite | 3m 54s | the patch passed |
| +1 | mvneclipse | 1m 14s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | findbugs | 7m 25s | the patch passed |
| +1 | javadoc | 2m 57s | the patch passed |
| +1 | unit | 8m 31s | hadoop-common in the patch passed. |
| +1 | unit | 2m 11s | hadoop-kms in the patch passed. |
| +1 | unit | 1m 3s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 64m 37s | hadoop-hdfs in the patch failed. |
| +1 | unit | 3m 40s | hadoop-hdfs-httpfs in the patch passed. |
| +1 | unit | 1m 50s | hadoop-hdfs-nfs in the patch passed. |
| +1 | asflicense | 0m 24s | The patch does not generate ASF License warnings. |
| | | 143m 17s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestEditLogsDuringFailover |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10489 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12828904/HDFS-10489.06.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit find
[jira] [Commented] (HDFS-10713) Throttle FsNameSystem lock warnings
[ https://issues.apache.org/jira/browse/HDFS-10713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15498130#comment-15498130 ]

Arpit Agarwal commented on HDFS-10713:
--

v10 patch lgtm.

I think this block was left in by mistake in the second do-while loop in readUnlock; it can be removed.
{code}
localLongestReadLock = longestReadLockHeldInterval.get();
if (readLockInterval > localLongestReadLock) {
  longestReadLockHeldInterval.compareAndSet(
      localLongestReadLock, readLockInterval);
}
{code}
Will hold off committing until next week in case Chris, Erik or others have additional comments.

> Throttle FsNameSystem lock warnings
> ---
>
> Key: HDFS-10713
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: logging, namenode
> Reporter: Arpit Agarwal
> Assignee: Hanisha Koneru
> Attachments: HDFS-10713.000.patch, HDFS-10713.001.patch,
> HDFS-10713.002.patch, HDFS-10713.003.patch, HDFS-10713.004.patch,
> HDFS-10713.005.patch, HDFS-10713.006.patch, HDFS-10713.007.patch,
> HDFS-10713.008.patch, HDFS-10713.009.patch, HDFS-10713.010.patch
>
>
> The NameNode logs a message if the FSNamesystem write lock is held by a
> thread for over 1 second. These messages can be throttled to at most one
> per x minutes to avoid potentially filling up NN logs. We can also log the
> number of suppressed notices since the last log message.
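[Editor's note] The throttling scheme described in the issue — emit at most one long-lock warning per window and report the number of suppressed notices — can be sketched roughly as below. This is an illustrative sketch, not the actual FSNamesystem patch; the class and method names are hypothetical.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch: report at most one lock-held warning per window,
// counting warnings suppressed since the last report. The two atomics are
// not updated as a single atomic unit, which is acceptable for a best-effort
// log throttle.
public class LockWarningThrottle {
    private final long windowMs;
    private final AtomicLong lastWarningTimeMs = new AtomicLong(0);
    private final AtomicInteger suppressed = new AtomicInteger(0);

    public LockWarningThrottle(long windowMs) {
        this.windowMs = windowMs;
    }

    /** Returns a message to log, or null if this warning was suppressed. */
    public String onLongLockHold(long nowMs, long heldMs) {
        long last = lastWarningTimeMs.get();
        // compareAndSet ensures only one thread wins the right to log
        // when several cross the window boundary at once.
        if (nowMs - last >= windowMs
                && lastWarningTimeMs.compareAndSet(last, nowMs)) {
            int skipped = suppressed.getAndSet(0);
            return "Lock held for " + heldMs + " ms; " + skipped
                    + " warning(s) suppressed since last report";
        }
        suppressed.incrementAndGet();
        return null;
    }
}
```

With a 60-second window, a warning at t=60s is logged, one at t=61s is suppressed, and the next one after t=120s is logged along with the suppressed count.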
Fwd: How to clear block count alert on hdfs
Hi hadoop experts,

We are getting block count alerts on the datanodes. Please find the DFS admin report below:

Configured Capacity: 58418139463680 (53.13 TB)
Present Capacity: 55931103011017 (50.87 TB)
DFS Remaining: 55237802565632 (50.24 TB)
DFS Used: 693300445385 (645.69 GB)
DFS Used%: 1.24%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
*NON DFS USED: 2.26 TB*

Also, the root volume is 80% utilized on all datanodes. Please suggest how to clear the block count alert.
[jira] [Commented] (HDFS-10638) Modifications to remove the assumption that StorageLocation is associated with java.io.File.
[ https://issues.apache.org/jira/browse/HDFS-10638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15498193#comment-15498193 ]

Lei (Eddy) Xu commented on HDFS-10638:
--

Hi, [~virajith]

{code}
try {
  File file = new File(uri.toString());
  String absPath = file.getAbsolutePath();
  uri = new URI("file", uri.getAuthority(), absPath,
      uri.getQuery(), uri.getFragment());
} catch (URISyntaxException e) {
  e.printStackTrace();
}
{code}
It should not swallow the exception. It can throw {{IOE}}.

> Modifications to remove the assumption that StorageLocation is associated
> with java.io.File.
>
>
> Key: HDFS-10638
> URL: https://issues.apache.org/jira/browse/HDFS-10638
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: datanode, fs
> Reporter: Virajith Jalaparti
> Attachments: HDFS-10638.001.patch, HDFS-10638.002.patch,
> HDFS-10638.003.patch
>
>
> Changes to ensure that {{StorageLocation}} need not be associated with a
> {{java.io.File}}.
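[Editor's note] Eddy's suggestion — propagate the failure as an {{IOException}} instead of swallowing it — could look roughly like the sketch below. This is not the actual patch; the helper class and method name are hypothetical.

```java
import java.io.File;
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

public class StorageLocationUtil {
    // Sketch of the suggested fix: wrap the URISyntaxException in an
    // IOException (preserving it as the cause) rather than printing the
    // stack trace and silently continuing with the original URI.
    public static URI toFileUri(URI uri) throws IOException {
        try {
            File file = new File(uri.toString());
            String absPath = file.getAbsolutePath();
            return new URI("file", uri.getAuthority(), absPath,
                    uri.getQuery(), uri.getFragment());
        } catch (URISyntaxException e) {
            throw new IOException(
                "Unable to convert " + uri + " to a file URI", e);
        }
    }
}
```

Callers then see a checked {{IOException}} and can fail the volume instead of proceeding with a half-normalized location.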