[jira] [Updated] (HDFS-11679) Ozone: SCM CLI: Implement list container command
[ https://issues.apache.org/jira/browse/HDFS-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuanbo Liu updated HDFS-11679: -- Attachment: HDFS-11679-HDFS-7240.001.patch After discussing it with Weiwei offline, attaching v1 patch for this JIRA. > Ozone: SCM CLI: Implement list container command > > > Key: HDFS-11679 > URL: https://issues.apache.org/jira/browse/HDFS-11679 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Weiwei Yang >Assignee: Yuanbo Liu > Labels: command-line > Attachments: HDFS-11679-HDFS-7240.001.patch > > > Implement the command to list containers > {code} > hdfs scm -container list -start <container name> [-count <100> | -end <container name>] {code} > Lists all containers known to SCM. The option -start allows the listing to > start from a specified container, and -count controls the number of entries > returned, but it is mutually exclusive with the -end option, which returns keys > from the -start to -end range. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
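To make the proposed semantics concrete, here are two hypothetical invocations matching the description above (container names and the exact flag spelling are illustrative; the final patch may differ):
{code}
# count form: list up to 100 containers, starting from container c1
hdfs scm -container list -start c1 -count 100

# range form (mutually exclusive with -count): list containers from c1 to c9
hdfs scm -container list -start c1 -end c9
{code}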
[jira] [Updated] (HDFS-11679) Ozone: SCM CLI: Implement list container command
[ https://issues.apache.org/jira/browse/HDFS-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuanbo Liu updated HDFS-11679: -- Status: Patch Available (was: Open) > Ozone: SCM CLI: Implement list container command > > > Key: HDFS-11679 > URL: https://issues.apache.org/jira/browse/HDFS-11679 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Weiwei Yang >Assignee: Yuanbo Liu > Labels: command-line > Attachments: HDFS-11679-HDFS-7240.001.patch > > > Implement the command to list containers > {code} > hdfs scm -container list -start <container name> [-count <100> | -end <container name>] {code} > Lists all containers known to SCM. The option -start allows the listing to > start from a specified container, and -count controls the number of entries > returned, but it is mutually exclusive with the -end option, which returns keys > from the -start to -end range. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11754) Make FsServerDefaults cache configurable.
[ https://issues.apache.org/jira/browse/HDFS-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Erofeev updated HDFS-11754: --- Attachment: HDFS-11754.004.patch > Make FsServerDefaults cache configurable. > - > > Key: HDFS-11754 > URL: https://issues.apache.org/jira/browse/HDFS-11754 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.6.0 >Reporter: Rushabh S Shah >Assignee: Mikhail Erofeev >Priority: Minor > Labels: newbie > Fix For: 2.9.0 > > Attachments: HDFS-11754.001.patch, HDFS-11754.002.patch, > HDFS-11754.003.patch, HDFS-11754.004.patch > > > DFSClient caches the result of FsServerDefaults for 60 minutes. > But the 60 minutes time is not configurable. > Continuing the discussion from HDFS-11702, it would be nice if we can make > this configurable and make the default as 60 minutes. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
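For readers following along, a minimal standalone sketch of the idea, assuming the 60-minute expiry becomes a configurable duration fed from a new client-side key (the generic wrapper and all names below are illustrative, not the patch's actual code):
{code}
/** Hedged sketch of a configurable FsServerDefaults cache (illustrative only). */
class ExpiringCache<T> {
  private final long validityMs; // e.g. read via conf.getTimeDuration(...), default 60 minutes
  private T cached;
  private long lastUpdateMs = Long.MIN_VALUE;

  ExpiringCache(long validityMs) {
    this.validityMs = validityMs;
  }

  /** Returns the cached value, refreshing through {@code loader} once expired. */
  synchronized T get(java.util.function.Supplier<T> loader) {
    long nowMs = System.nanoTime() / 1_000_000L; // monotonic clock, like Time.monotonicNow()
    if (cached == null || nowMs - lastUpdateMs > validityMs) {
      cached = loader.get(); // in DFSClient this would be namenode.getServerDefaults()
      lastUpdateMs = nowMs;
    }
    return cached;
  }
}
{code}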
[jira] [Commented] (HDFS-11790) Decommissioning of a DataNode after MaintenanceState takes a very long time to complete
[ https://issues.apache.org/jira/browse/HDFS-11790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022321#comment-16022321 ] Ming Ma commented on HDFS-11790: Thanks [~manojg] for reporting this. Hmm, the existing code should take care of this. Wonder if it is due to some corner cases where the following functions don't skip maintenance nodes properly. * BlockManager#createLocatedBlock should skip IN_MAINTENANCE nodes. * BlockManager#chooseSourceDatanodes should skip MAINTENANCE_NOT_FOR_READ nodes set for IN_MAINTENANCE nodes. > Decommissioning of a DataNode after MaintenanceState takes a very long time > to complete > --- > > Key: HDFS-11790 > URL: https://issues.apache.org/jira/browse/HDFS-11790 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 3.0.0-alpha1 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy > Attachments: HDFS-11790-test.01.patch > > > *Problem:* > When a DataNode is requested for Decommissioning after it successfully > transitioned to MaintenanceState (HDFS-7877), the decommissioning state > transition is stuck for a long time even for a very small number of blocks in > the cluster. > *Details:* > * A DataNode DN1 was requested for MaintenanceState and it successfully > transitioned from ENTERING_MAINTENANCE state to IN_MAINTENANCE state as there > was sufficient replication for all its blocks. > * As DN1 was in maintenance state now, the DataNode process was stopped on > DN1. Later the same DN1 was requested for Decommissioning. > * As part of Decommissioning, all the blocks residing in DN1 were requested > to be re-replicated to other DataNodes, so that DN1 could transition from > ENTERING_DECOMMISSION to DECOMMISSIONED. > * But, re-replication for a few blocks was stuck for a long time. Eventually it > got completed. > * Digging the code and logs, found that the IN_MAINTENANCE DN1 was chosen as > a source datanode for re-replication of a few of the blocks. Since the DataNode > process on DN1 was already stopped, the re-replication was stuck for a long > time. > * Eventually PendingReplicationMonitor timed out, and those re-replications > were re-scheduled for those timed out blocks. Again, during the > re-replication also, the IN_MAINT DN1 was chosen as a source datanode for a few > of the blocks, leading to timeout again. This iteration continued a few > times until all blocks got re-replicated. > * By design, IN_MAINT datanodes should not be chosen for any read or write > operations. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
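The invariant being discussed can be stated in a tiny self-contained sketch (illustrative names, not the actual BlockManager code): an IN_MAINTENANCE node may have its DataNode process stopped, so it must never be picked as a re-replication source, while an ENTERING_MAINTENANCE node is still readable.
{code}
/** Hedged sketch of the skip rule; AdminState mirrors the HDFS admin states. */
enum AdminState { NORMAL, ENTERING_MAINTENANCE, IN_MAINTENANCE }

final class SourceFilter {
  /** True if a replica on a node in this state may serve as a copy source. */
  static boolean usableAsSource(AdminState state) {
    // IN_MAINTENANCE nodes are "not for read": their process may be down, so
    // choosing them as a source only produces PendingReplicationMonitor
    // timeouts like the ones described in this issue.
    return state != AdminState.IN_MAINTENANCE;
  }
}
{code}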
[jira] [Commented] (HDFS-11778) Ozone: KSM: add getBucketInfo
[ https://issues.apache.org/jira/browse/HDFS-11778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022320#comment-16022320 ] Nandakumar commented on HDFS-11778: --- Thanks for the review [~xyao]. bq. I think we can just keep using the BucketInfo as the parameter for CreateBucketRequest. The add/remove ACL is mainly for setBucket. BucketInfo is good enough for CreateBucket/GetBucket. I was also thinking of removing KsmBucketArgs and using KsmBucketInfo for CreateBucketRequest. Initially KsmBucketArgs was added to make it consistent with {{web.handlers.BucketArgs}}. bq. bucketInfo has an aclList which will usually not needed to coexist with addAcls and removeAcls. Any reason for changing the current KsmBucketArgs? Yes, true. Since KsmBucketArgs was used for createBucket, bucketInfo was added; that can be removed since we are going to directly use KsmBucketInfo for the create call. I will rework and upload a new patch soon. > Ozone: KSM: add getBucketInfo > - > > Key: HDFS-11778 > URL: https://issues.apache.org/jira/browse/HDFS-11778 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Nandakumar > Attachments: HDFS-11778-HDFS-7240.000.patch, > HDFS-11778-HDFS-7240.001.patch > > > Returns the bucket information if the bucket exists. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-11874) [SPS]: Document the SPS feature
Uma Maheswara Rao G created HDFS-11874: -- Summary: [SPS]: Document the SPS feature Key: HDFS-11874 URL: https://issues.apache.org/jira/browse/HDFS-11874 Project: Hadoop HDFS Issue Type: Sub-task Components: documentation Reporter: Uma Maheswara Rao G This JIRA is for tracking the documentation about the feature -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11682) TestBalancer#testBalancerWithStripedFile is flaky
[ https://issues.apache.org/jira/browse/HDFS-11682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022300#comment-16022300 ] Hadoop QA commented on HDFS-11682: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 38s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 37s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 201 unchanged - 4 fixed = 202 total (was 205) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 22s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}122m 39s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11682 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869563/HDFS-11682.00.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 78df8459bbab 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 52661e0 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/19572/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19572/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19572/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19572/console | | Powered b
[jira] [Commented] (HDFS-11794) Add ec sub command -listCodec to show currently supported ec codecs
[ https://issues.apache.org/jira/browse/HDFS-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022255#comment-16022255 ] SammiChen commented on HDFS-11794: -- Thanks [~rakeshr] for reviewing the patch. Thanks [~drankye] for the discussion. > Add ec sub command -listCodec to show currently supported ec codecs > --- > > Key: HDFS-11794 > URL: https://issues.apache.org/jira/browse/HDFS-11794 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Reporter: SammiChen >Assignee: SammiChen > Labels: hdfs-ec-3.0-nice-to-have > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11794.001.patch, HDFS-11794.002.patch, > HDFS-11794.003.patch > > > Add ec sub command -listCodec to show currently supported ec codecs -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11846) Ozone: Potential http connection leaks in ozone clients
[ https://issues.apache.org/jira/browse/HDFS-11846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022250#comment-16022250 ] Hadoop QA commented on HDFS-11846: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 54s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s{color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 11s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 10 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}105m 28s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.cblock.TestCBlockServer | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11846 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869559/HDFS-11846-HDFS-7240.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux f3ead62b546f 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 9f7b8a1 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/19571/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19571/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19571/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19571/console | | Powered by | Apache
[jira] [Commented] (HDFS-11873) Ozone: Object store handler cannot serve requests from same http client
[ https://issues.apache.org/jira/browse/HDFS-11873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022232#comment-16022232 ] Weiwei Yang commented on HDFS-11873: Attached a test case to reproduce this issue. There are two tests in the class # {{testReuseHttpConnection}} creates an http client and uses this client to submit 2 create volume requests; on the current code base, the 1st request succeeds and the *2nd request is stuck* (eventually times out). # {{testNewConnectionPerRequest}} instead creates a new http client per request to submit 2 create volume requests; on the current code base, they all succeed. [~xyao] and [~anu], please take a look at this JIRA and the test case I attached. I am not familiar with the netty stuff, but based on my investigation, I think the problem is on the server side. Please take a look and let me know if I missed anything. Thank you. > Ozone: Object store handler cannot serve requests from same http client > --- > > Key: HDFS-11873 > URL: https://issues.apache.org/jira/browse/HDFS-11873 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: HDFS-7240 >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Critical > Attachments: HDFS-11873-HDFS-7240.testcase.patch > > > This issue was found when I worked on HDFS-11846. Instead of creating a new > http client instance per request, I tried to reuse {{CloseableHttpClient}} in > {{OzoneClient}} class in a {{PoolingHttpClientConnectionManager}}. However, > every second request from the http client hangs, which could not get > dispatched to {{ObjectStoreJerseyContainer}}. There seems to be something > wrong in the netty pipeline, this jira aims to 1) fix the problem in the > server side 2) use the pool for client http clients to reduce the resource > overhead. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
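For context, a minimal standalone sketch of the reuse pattern being tested, using Apache HttpClient's pooling connection manager (URL, port and volume names are placeholders; the attached test case is the authoritative reproduction):
{code}
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.util.EntityUtils;

public class ReusedClientRepro {
  public static void main(String[] args) throws Exception {
    // Two requests over ONE pooled client; per this report, the second
    // request never reaches ObjectStoreJerseyContainer and times out.
    try (CloseableHttpClient client = HttpClients.custom()
        .setConnectionManager(new PoolingHttpClientConnectionManager())
        .build()) {
      for (String volume : new String[] {"vol1", "vol2"}) { // placeholder names
        HttpPost post = new HttpPost("http://localhost:9864/" + volume); // placeholder URL
        try (CloseableHttpResponse rsp = client.execute(post)) {
          EntityUtils.consume(rsp.getEntity()); // drain so the connection is reused
        }
      }
    }
  }
}
{code}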
[jira] [Updated] (HDFS-11873) Ozone: Object store handler cannot serve requests from same http client
[ https://issues.apache.org/jira/browse/HDFS-11873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-11873: --- Attachment: HDFS-11873-HDFS-7240.testcase.patch > Ozone: Object store handler cannot serve requests from same http client > --- > > Key: HDFS-11873 > URL: https://issues.apache.org/jira/browse/HDFS-11873 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: HDFS-7240 >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Critical > Attachments: HDFS-11873-HDFS-7240.testcase.patch > > > This issue was found when I worked on HDFS-11846. Instead of creating a new > http client instance per request, I tried to reuse {{CloseableHttpClient}} in > {{OzoneClient}} class in a {{PoolingHttpClientConnectionManager}}. However, > every second request from the http client hangs, which could not get > dispatched to {{ObjectStoreJerseyContainer}}. There seems to be something > wrong in the netty pipeline, this jira aims to 1) fix the problem in the > server side 2) use the pool for client http clients to reduce the resource > overhead. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11864) Document Metrics to track usage of memory for writes
[ https://issues.apache.org/jira/browse/HDFS-11864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022218#comment-16022218 ] Brahma Reddy Battula commented on HDFS-11864: - Oh, yes.. will cherry-pick to {{branch-2.8}}.. thanks for reminding. > Document Metrics to track usage of memory for writes > -- > > Key: HDFS-11864 > URL: https://issues.apache.org/jira/browse/HDFS-11864 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Reporter: Brahma Reddy Battula >Assignee: Yiqun Lin > Fix For: 2.9.0, 2.7.4, 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-11864.001.patch > > > HDFS-7129 introduced the following metrics which are not documented. > {noformat} > // RamDisk metrics on read/write > @Metric MutableCounterLong ramDiskBlocksWrite; > @Metric MutableCounterLong ramDiskBlocksWriteFallback; > @Metric MutableCounterLong ramDiskBytesWrite; > @Metric MutableCounterLong ramDiskBlocksReadHits; > > // RamDisk metrics on eviction > @Metric MutableCounterLong ramDiskBlocksEvicted; > @Metric MutableCounterLong ramDiskBlocksEvictedWithoutRead; > @Metric MutableRate ramDiskBlocksEvictionWindowMs; > final MutableQuantiles[] ramDiskBlocksEvictionWindowMsQuantiles; > > > // RamDisk metrics on lazy persist > @Metric MutableCounterLong ramDiskBlocksLazyPersisted; > @Metric MutableCounterLong ramDiskBlocksDeletedBeforeLazyPersisted; > @Metric MutableCounterLong ramDiskBytesLazyPersisted; > @Metric MutableRate ramDiskBlocksLazyPersistWindowMs; > final MutableQuantiles[] ramDiskBlocksLazyPersistWindowMsQuantiles; > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11865) Ozone: Do not initialize Ratis cluster during datanode startup
[ https://issues.apache.org/jira/browse/HDFS-11865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022216#comment-16022216 ] Hadoop QA commented on HDFS-11865: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 33s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 59s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 19s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 47s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 53s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 52s{color} | {color:green} HDFS-7240 passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 36s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 has 2 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 47s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 10 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s{color} | {color:green} HDFS-7240 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 10s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 22s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}132m 44s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.cblock.TestCBlockServerPersistence | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS
[jira] [Comment Edited] (HDFS-11864) Document Metrics to track usage of memory for writes
[ https://issues.apache.org/jira/browse/HDFS-11864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022205#comment-16022205 ] Yiqun Lin edited comment on HDFS-11864 at 5/24/17 2:12 AM: --- Thanks [~brahmareddy] for the review and commit. Why not merge this into branch-2.8 and add 2.8.2 to the fix versions? This issue will then be fixed in the 2.8.x versions after merging into branch-2.8. BTW, the patch also applies cleanly to branch-2.8. was (Author: linyiqun): Thanks [~brahmareddy] for the review and commit. Why not merge this into branch-2.8 and add 2.8.2 to the fix versions? This issue will then be fixed in the 2.8.x versions after merging into branch-2.8. > Document Metrics to track usage of memory for writes > -- > > Key: HDFS-11864 > URL: https://issues.apache.org/jira/browse/HDFS-11864 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Reporter: Brahma Reddy Battula >Assignee: Yiqun Lin > Fix For: 2.9.0, 2.7.4, 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-11864.001.patch > > > HDFS-7129 introduced the following metrics which are not documented. > {noformat} > // RamDisk metrics on read/write > @Metric MutableCounterLong ramDiskBlocksWrite; > @Metric MutableCounterLong ramDiskBlocksWriteFallback; > @Metric MutableCounterLong ramDiskBytesWrite; > @Metric MutableCounterLong ramDiskBlocksReadHits; > > // RamDisk metrics on eviction > @Metric MutableCounterLong ramDiskBlocksEvicted; > @Metric MutableCounterLong ramDiskBlocksEvictedWithoutRead; > @Metric MutableRate ramDiskBlocksEvictionWindowMs; > final MutableQuantiles[] ramDiskBlocksEvictionWindowMsQuantiles; > > > // RamDisk metrics on lazy persist > @Metric MutableCounterLong ramDiskBlocksLazyPersisted; > @Metric MutableCounterLong ramDiskBlocksDeletedBeforeLazyPersisted; > @Metric MutableCounterLong ramDiskBytesLazyPersisted; > @Metric MutableRate ramDiskBlocksLazyPersistWindowMs; > final MutableQuantiles[] ramDiskBlocksLazyPersistWindowMsQuantiles; > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11864) Document Metrics to track usage of memory for writes
[ https://issues.apache.org/jira/browse/HDFS-11864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022205#comment-16022205 ] Yiqun Lin commented on HDFS-11864: -- Thanks [~brahmareddy] for the review and commit. Why not merge this into branch-2.8 and add 2.8.2 to the fix versions? This issue will then be fixed in the 2.8.x versions after merging into branch-2.8. > Document Metrics to track usage of memory for writes > -- > > Key: HDFS-11864 > URL: https://issues.apache.org/jira/browse/HDFS-11864 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Reporter: Brahma Reddy Battula >Assignee: Yiqun Lin > Fix For: 2.9.0, 2.7.4, 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-11864.001.patch > > > HDFS-7129 introduced the following metrics which are not documented. > {noformat} > // RamDisk metrics on read/write > @Metric MutableCounterLong ramDiskBlocksWrite; > @Metric MutableCounterLong ramDiskBlocksWriteFallback; > @Metric MutableCounterLong ramDiskBytesWrite; > @Metric MutableCounterLong ramDiskBlocksReadHits; > > // RamDisk metrics on eviction > @Metric MutableCounterLong ramDiskBlocksEvicted; > @Metric MutableCounterLong ramDiskBlocksEvictedWithoutRead; > @Metric MutableRate ramDiskBlocksEvictionWindowMs; > final MutableQuantiles[] ramDiskBlocksEvictionWindowMsQuantiles; > > > // RamDisk metrics on lazy persist > @Metric MutableCounterLong ramDiskBlocksLazyPersisted; > @Metric MutableCounterLong ramDiskBlocksDeletedBeforeLazyPersisted; > @Metric MutableCounterLong ramDiskBytesLazyPersisted; > @Metric MutableRate ramDiskBlocksLazyPersistWindowMs; > final MutableQuantiles[] ramDiskBlocksLazyPersistWindowMsQuantiles; > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11383) String duplication in org.apache.hadoop.fs.BlockLocation
[ https://issues.apache.org/jira/browse/HDFS-11383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misha Dmitriev updated HDFS-11383: -- Attachment: hs2-crash-2.txt > String duplication in org.apache.hadoop.fs.BlockLocation > > > Key: HDFS-11383 > URL: https://issues.apache.org/jira/browse/HDFS-11383 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Misha Dmitriev >Assignee: Misha Dmitriev > Attachments: HDFS-11383.01.patch, hs2-crash-2.txt > > > I am working on Hive performance, investigating the problem of high memory > pressure when (a) a table consists of a high number (thousands) of partitions > and (b) multiple queries run against it concurrently. It turns out that a lot > of memory is wasted due to data duplication. One source of duplicate strings > is class org.apache.hadoop.fs.BlockLocation. Its fields such as storageIds, > topologyPaths, hosts, names, may collectively use up to 6% of memory in my > benchmark, causing (together with other problematic classes) a huge memory > spike. Of these 6% of memory taken by BlockLocation strings, more than 5% are > wasted due to duplication. > I think we need to add calls to String.intern() in the BlockLocation > constructor, like: > {code} > this.hosts = internStringsInArray(hosts); > ... > private void internStringsInArray(String[] sar) { > for (int i = 0; i < sar.length; i++) { > sar[i] = sar[i].intern(); > } > } > {code} > String.intern() performs very well starting from JDK 7. I've found some > articles explaining the progress that was made by the HotSpot JVM developers > in this area, verified that with benchmarks myself, and finally added quite a > bit of interning to one of the Cloudera products without any issues. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11383) String duplication in org.apache.hadoop.fs.BlockLocation
[ https://issues.apache.org/jira/browse/HDFS-11383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022204#comment-16022204 ] Misha Dmitriev commented on HDFS-11383: --- Hi Andrew, I understand your concerns. Unit tests could be a good solution, but the problem is, to quantify the effect of a change like that one would need, in principle, to first run some code that uses BlockLocation unchanged and measure how much memory is consumed, then run the same code with BlockLocation that has interning and measure memory again. There is also a problem of how representative such a "pseudo-benchmark" would be, e.g. I can easily populate some data structure with very big strings and then demonstrate that interning them would save a lot of memory. But would that resemble real-life usage patterns? So I suspect that some benchmark would be best, but indeed it's hard to revive my test cluster right now. Maybe I can still convince you by: - telling that String.intern() is proven to work well (I've already optimized several projects at Cloudera with its help, and there I could definitely quantify the effect of the changes - we can discuss all this offline if you would like) - attaching the results from my old benchmark showing how much memory is wasted due to duplicate strings in BlockLocation. I am attaching the full jxray report for one of the heap dumps that I obtained in this benchmark, and here are the most relevant excerpts: {code} 6. DUPLICATE STRINGS Total strings: 172,451 Unique strings: 52,360 Duplicate values: 16,158 Overhead: 14,291K (29.8%) Top duplicate strings: Ovhd Num char[]s Num objs Value 1,398K (2.9%) 12791 12791 "host-10-17-101-14.coe.cloudera.com" 1,163K (2.4%) 9926 9926 "host-10-17-101-14.coe.cloudera.com:8020" 809K (1.7%) 6 6 "hdfs://host-10-17-101-14.coe.cloudera.com:8020/tmp/misha/misha-table-partition-1,hdf ...[length 82892]" 465K (1.0%) 9923 9923 "hdfs" 7.
REFERENCE CHAINS FOR DUPLICATE STRINGS 595K (1.2%), 5088 dup strings (4 unique), 5088 dup backing arrays: 1696 of "DS-aab6ab0b-0b11-489f-b209-ab2c6412934c", 1149 of "DS-d47bdaca-50c5-4475-ac08-7f07e10cd0b6", 1132 of "DS-bf6046e6-d5e9-4ac2-a1af-ff8a88ab9d85", 1111 of "DS-d2c5088c-bd69-4500-b981-502819c1307a" <-- String[] <-- org.apache.hadoop.fs.BlockLocation.storageIds <-- org.apache.hadoop.fs.BlockLocation[] <-- org.apache.hadoop.fs.LocatedFileStatus.locations <-- {j.u.ArrayList} <-- Java Local@fd414328 (j.u.ArrayList) 556K (1.2%), 5088 dup strings (4 unique), 5088 dup backing arrays: 1696 of "host-10-17-101-14.coe.cloudera.com", 1149 of "host-10-17-101-15.coe.cloudera.com", 1132 of "host-10-17-101-17.coe.cloudera.com", 1111 of "host-10-17-101-16.coe.cloudera.com" <-- String[] <-- org.apache.hadoop.fs.BlockLocation.hosts <-- org.apache.hadoop.fs.BlockLocation[] <-- org.apache.hadoop.fs.LocatedFileStatus.locations <-- {j.u.ArrayList} <-- Java Local@fd414328 (j.u.ArrayList) 476K (1.0%), 5088 dup strings (4 unique), 5088 dup backing arrays: 1696 of "/default/10.17.101.14:50010", 1149 of "/default/10.17.101.15:50010", 1132 of "/default/10.17.101.17:50010", 1111 of "/default/10.17.101.16:50010" <-- String[] <-- org.apache.hadoop.fs.BlockLocation.topologyPaths <-- org.apache.hadoop.fs.BlockLocation[] <-- org.apache.hadoop.fs.LocatedFileStatus.locations <-- {j.u.ArrayList} <-- Java Local@fd414328 (j.u.ArrayList) 409K (0.9%), 3492 dup strings (4 unique), 3492 dup backing arrays: 1164 of "DS-aab6ab0b-0b11-489f-b209-ab2c6412934c", 788 of "DS-d47bdaca-50c5-4475-ac08-7f07e10cd0b6", 770 of "DS-bf6046e6-d5e9-4ac2-a1af-ff8a88ab9d85", 770 of "DS-d2c5088c-bd69-4500-b981-502819c1307a" <-- String[] <-- org.apache.hadoop.fs.BlockLocation.storageIds <-- org.apache.hadoop.fs.BlockLocation[] <-- org.apache.hadoop.fs.LocatedFileStatus.locations <-- {j.u.ArrayList} <-- Java Local@fd67ae70 (j.u.ArrayList) 397K (0.8%), 5088 dup strings (4 unique), 5088 dup backing arrays: 1696 of "10.17.101.14:50010", 1149 of "10.17.101.15:50010", 1132 of "10.17.101.17:50010", 1111 of "10.17.101.16:50010" <-- String[] <-- org.apache.hadoop.fs.BlockLocation.names <-- org.apache.hadoop.fs.BlockLocation[] <-- org.apache.hadoop.fs.LocatedFileStatus.locations <-- {j.u.ArrayList} <-- Java Local@fd414328 (j.u.ArrayList) 381K (0.8%), 3492 dup strings (4 unique), 3492 dup backing arrays: 1164 of "host-10-17-101-14.coe.cloudera.com", 788 of "host-10-17-101-15.coe.cloudera.com", 770 of "host-10-17-101-17.coe.cloudera.com", 770 of "host-10-17-101-16.coe.cloudera.com" <-- String[] <-- org.apache.hadoop.fs.BlockLocation.hosts <-- org.apache.hadoop.fs.BlockLocation[] <-- org.apache.hadoop.fs.LocatedFileStatus.locations <-- {j.u.ArrayList} <-- Java Local@fd67ae70 (j.u.ArrayList) {code} > String
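As a standalone demonstration of why interning collapses such duplicates (this is just the JDK mechanism at work, not the patch itself):
{code}
public class InternDemo {
  public static void main(String[] args) {
    // Simulate two BlockLocation host arrays decoded from separate RPC responses.
    String[] a = {new String("host-10-17-101-14.coe.cloudera.com")};
    String[] b = {new String("host-10-17-101-14.coe.cloudera.com")};
    System.out.println(a[0] == b[0]); // false: two distinct char[] copies

    for (int i = 0; i < a.length; i++) a[i] = a[i].intern();
    for (int i = 0; i < b.length; i++) b[i] = b[i].intern();
    System.out.println(a[0] == b[0]); // true: both now share one canonical instance
  }
}
{code}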
[jira] [Commented] (HDFS-11837) Backport HDFS-9710 to branch-2.7: Change DN to send block receipt IBRs in batches
[ https://issues.apache.org/jira/browse/HDFS-11837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022199#comment-16022199 ] Vinitha Reddy Gankidi commented on HDFS-11837: -- [~shv] ReplaceDatanodeOnFailure is used here in TestBatchIbr: conf.setBoolean(ReplaceDatanodeOnFailure.BEST_EFFORT_KEY, true); Is there something I'm missing? > Backport HDFS-9710 to branch-2.7: Change DN to send block receipt IBRs in > batches > - > > Key: HDFS-11837 > URL: https://issues.apache.org/jira/browse/HDFS-11837 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Vinitha Reddy Gankidi >Assignee: Vinitha Reddy Gankidi > Attachments: HDFS-9710-branch-2.7.00.patch > > > As per discussion in [mailing > list|http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-dev/201705.mbox/browser], > backport HDFS-9710 to branch-2.7 -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-11846) Ozone: Potential http connection leaks in ozone clients
[ https://issues.apache.org/jira/browse/HDFS-11846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022171#comment-16022171 ] Weiwei Yang edited comment on HDFS-11846 at 5/24/17 1:51 AM: - Thanks [~xyao] for reviewing this. I uploaded the v2 patch, which has the following changes bq. OzoneClientUtils.java: Do we need a configure for HTTP Connection Timeout? Added another property {{OZONE_CLIENT_CONNECTION_TIMEOUT_MS}} for the connection timeout. bq. OzoneBucket.java: we should ensure the outPutStream is closed properly in the final section as well. I have fixed this line, and I have reviewed all streams in these classes and made sure they are closed in the finally block as well. bq. OzoneClient.java: unnecessary change. Fixed bq. OzoneClient.java: httppost-> httpPost? I have fixed all such names in {{OzoneClient}}, {{OzoneVolume}} and {{OzoneBucket}} to make sure they are all consistent. bq. OzoneClient.java: I think we should use PoolingHttpClientConnectionManager instead of creating a new connection for each request. I tried to use PoolingHttpClientConnectionManager but I found it doesn't work on the current code base. The problem was on the netty server side. It doesn't serve a reused http connection well; I have created HDFS-11873 for that. Thank you. was (Author: cheersyang): Thanks [~xyao] for reviewing this. I uploaded the v2 patch, which has the following changes bq. OzoneClientUtils.java: Do we need a configure for HTTP Connection Timeout? Added another property {{OZONE_CLIENT_CONNECTION_TIMEOUT_MS}} for the connection timeout. bq. OzoneBucket.java: we should ensure the outPutStream is closed properly in the final section as well. I have fixed this line, and I have reviewed all streams in these classes and made sure they are closed in the finally block as well. bq. OzoneClient.java: unnecessary change. Fixed bq. OzoneClient.java: httppost-> httpPost? I have fixed all such names in {{OzoneClient}}, {{OzoneVolume}} and {{OzoneBucket}} to make sure they are all consistent. bq. OzoneClient.java: I think we should use PoolingHttpClientConnectionManager instead of creating a new connection for each request. I tried to use PoolingHttpClientConnectionManager but I found it doesn't work on the current code base. The problem was on the netty server side. It doesn't serve a reused http connection well; I think we need a new JIRA to fix that. Thank you. > Ozone: Potential http connection leaks in ozone clients > --- > > Key: HDFS-11846 > URL: https://issues.apache.org/jira/browse/HDFS-11846 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-11846-HDFS-7240.001.patch, > HDFS-11846-HDFS-7240.002.patch > > > There are several problems > # Http clients in {{OzoneVolume}}, {{OzoneBucket}} and {{OzoneClient}} are > created per request, per [Reuse of HttpClient > instance|http://hc.apache.org/httpclient-3.x/performance.html#Reuse_of_HttpClient_instance] > doc, proposed to reuse the http client instance to reduce the over head. > # Some resources in these classes were not properly cleaned up. E.g the http > connection, HttpGet/HttpPost requests. > > This jira's purpose is to fix these issues and investigate how we can improve > the client. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
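The cleanup pattern being described, as a hedged standalone sketch (method and parameter names are illustrative, not the actual OzoneBucket code): the response and stream are closed, and the request released, on every path, including when execute() throws.
{code}
import java.io.IOException;
import java.io.OutputStream;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.util.EntityUtils;

class CleanupSketch {
  /** Illustrative pattern: no leaked connections, requests or streams. */
  static void putObject(CloseableHttpClient client, String endpoint,
      OutputStream out) throws IOException {
    HttpPost httpPost = new HttpPost(endpoint);
    try (CloseableHttpResponse rsp = client.execute(httpPost)) {
      EntityUtils.consume(rsp.getEntity()); // fully drain the response body
    } finally {
      out.close();                  // stream closed even on failure
      httpPost.releaseConnection(); // connection returned, no leak
    }
  }
}
{code}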
[jira] [Created] (HDFS-11873) Ozone: Object store handler cannot serve requests from same http client
Weiwei Yang created HDFS-11873: -- Summary: Ozone: Object store handler cannot serve requests from same http client Key: HDFS-11873 URL: https://issues.apache.org/jira/browse/HDFS-11873 Project: Hadoop HDFS Issue Type: Sub-task Components: HDFS-7240 Reporter: Weiwei Yang Assignee: Weiwei Yang Priority: Critical This issue was found when I worked on HDFS-11846. Instead of creating a new http client instance per request, I tried to reuse {{CloseableHttpClient}} in {{OzoneClient}} class in a {{PoolingHttpClientConnectionManager}}. However, every second request from the http client hangs, which could not get dispatched to {{ObjectStoreJerseyContainer}}. There seems to be something wrong in the netty pipeline, this jira aims to 1) fix the problem in the server side 2) use the pool for client http clients to reduce the resource overhead. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11682) TestBalancer#testBalancerWithStripedFile is flaky
[ https://issues.apache.org/jira/browse/HDFS-11682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-11682: - Status: Patch Available (was: Open) > TestBalancer#testBalancerWithStripedFile is flaky > - > > Key: HDFS-11682 > URL: https://issues.apache.org/jira/browse/HDFS-11682 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Assignee: Lei (Eddy) Xu > Attachments: HDFS-11682.00.patch, IndexOutOfBoundsException.log, > timeout.log > > > Saw this fail in two different ways on a precommit run, but pass locally. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11682) TestBalancer#testBalancerWithStripedFile is flaky
[ https://issues.apache.org/jira/browse/HDFS-11682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-11682: - Attachment: HDFS-11682.00.patch Added retry logic to {{TestBalancer#waitForBalancer}}. > TestBalancer#testBalancerWithStripedFile is flaky > - > > Key: HDFS-11682 > URL: https://issues.apache.org/jira/browse/HDFS-11682 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Assignee: Lei (Eddy) Xu > Attachments: HDFS-11682.00.patch, IndexOutOfBoundsException.log, > timeout.log > > > Saw this fail in two different ways on a precommit run, but pass locally. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11682) TestBalancer#testBalancerWithStripedFile is flaky
[ https://issues.apache.org/jira/browse/HDFS-11682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022194#comment-16022194 ] Lei (Eddy) Xu commented on HDFS-11682: -- In {{TestBalancer#testBalancerWithStripedFile}}, it creates a file with 72 data blocks (20 * 12 * 3 / 10). Under RS(6, 3) coding, it is 72 / 6 * (6 + 3) = 108 blocks. And after {code} // add datanodes in new rack String newRack = "/rack" + (++numOfRacks); cluster.startDataNodes(conf, 2, true, null, new String[]{newRack, newRack}, null, new long[]{capacity, capacity}); {code} There are 14 DataNodes before running {{Balancer}}. With some additional debug logging, the log shows {code} 17-05-23 18:17:23,186 [Thread-0] INFO balancer.Balancer (Balancer.java:init(380)) - Above avg: 127.0.0.1:60027:DISK, util=50.00, avg=40.00, diff=10.00, threshold=10.00 {color} {code} So this DataNode: {{127.0.0.1:60027}} is not chosen as the source for balancing, because {{50.0 - 40.0 <= 10.0}}. But the actual average utilization is {{108 / (14 * 20)}} = 38.57%. Thus in {{TestBalancer#waitForBalancer}}, it will fail because {{50.0 - 38.57 > 10.0}}. This is because the NN has not yet received block reports reflecting the moved blocks. > TestBalancer#testBalancerWithStripedFile is flaky > - > > Key: HDFS-11682 > URL: https://issues.apache.org/jira/browse/HDFS-11682 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Assignee: Lei (Eddy) Xu > Attachments: IndexOutOfBoundsException.log, timeout.log > > > Saw this fail in two different ways on a precommit run, but pass locally. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
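The retry added to {{TestBalancer#waitForBalancer}} is essentially a bounded re-check loop; a hedged sketch of the shape (the helper name and structure below are hypothetical, not the patch's actual code):
{code}
class RetrySketch {
  /** Re-run the utilization assertion until block reports catch up or we time out. */
  static void waitUntilBalanced(Runnable assertBalanced, long timeoutMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (true) {
      try {
        assertBalanced.run(); // hypothetical: the old one-shot utilization check
        return;               // balanced, done
      } catch (AssertionError e) {
        if (System.currentTimeMillis() > deadline) {
          throw e;            // still over threshold after the timeout
        }
        Thread.sleep(1000);   // give the NN time to receive new block reports
      }
    }
  }
}
{code}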
[jira] [Commented] (HDFS-11726) [SPS] : StoragePolicySatisfier should not select same storage type as source and destination in same datanode.
[ https://issues.apache.org/jira/browse/HDFS-11726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022190#comment-16022190 ] Surendra Singh Lilhore commented on HDFS-11726: --- Thanks [~umamaheswararao] and [~rakeshr] for looking into this issue. I saw these logs in a failed test case. I need to reproduce it. I will try it and update here. > [SPS] : StoragePolicySatisfier should not select same storage type as source > and destination in same datanode. > -- > > Key: HDFS-11726 > URL: https://issues.apache.org/jira/browse/HDFS-11726 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: HDFS-10285 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore > > {code} > 2017-04-30 16:12:28,569 [BlockMoverTask-0] INFO > datanode.StoragePolicySatisfyWorker (Worker.java:moveBlock(248)) - Start > moving block:blk_1073741826_1002 from src:127.0.0.1:41699 to > destin:127.0.0.1:41699 to satisfy storageType, sourceStoragetype:ARCHIVE and > destinStoragetype:ARCHIVE > {code} > {code} > 2017-04-30 16:12:28,571 [DataXceiver for client /127.0.0.1:36428 [Replacing > block BP-1409501412-127.0.1.1-1493548923222:blk_1073741826_1002 from > 6c7aa66e-a778-43d5-89f6-053d5f6b35bc]] INFO datanode.DataNode > (DataXceiver.java:replaceBlock(1202)) - opReplaceBlock > BP-1409501412-127.0.1.1-1493548923222:blk_1073741826_1002 received exception > org.apache.hadoop.hdfs.server.datanode.ReplicaAlreadyExistsException: Replica > FinalizedReplica, blk_1073741826_1002, FINALIZED > getNumBytes() = 1024 > getBytesOnDisk() = 1024 > getVisibleLength()= 1024 > getVolume() = > /home/sachin/software/hadoop/HDFS-10285/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data7 > getBlockURI() = > file:/home/sachin/software/hadoop/HDFS-10285/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data7/current/BP-1409501412-127.0.1.1-1493548923222/current/finalized/subdir0/subdir0/blk_1073741826 > already exists on storage ARCHIVE > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
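The fix the title implies can be stated as a one-line validity check; a hedged sketch of that guard (not the actual StoragePolicySatisfier code):
{code}
import org.apache.hadoop.fs.StorageType;

final class MoveCheck {
  /**
   * A move whose source and target are the same storage type on the same
   * datanode can only fail with ReplicaAlreadyExistsException, as in the
   * logs above, so such moves should be skipped when scheduling.
   */
  static boolean isUsefulMove(String srcDatanode, StorageType srcType,
      String dstDatanode, StorageType dstType) {
    return !(srcDatanode.equals(dstDatanode) && srcType == dstType);
  }
}
{code}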
[jira] [Commented] (HDFS-11695) [SPS]: Namenode failed to start while loading SPS xAttrs from the edits log.
[ https://issues.apache.org/jira/browse/HDFS-11695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022187#comment-16022187 ] Surendra Singh Lilhore commented on HDFS-11695: --- Thanks [~umamaheswararao] for review and commit. Thanks [~rakeshr] and [~yuanbo] for review > [SPS]: Namenode failed to start while loading SPS xAttrs from the edits log. > > > Key: HDFS-11695 > URL: https://issues.apache.org/jira/browse/HDFS-11695 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: HDFS-10285 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore >Priority: Blocker > Fix For: HDFS-10285 > > Attachments: fsimage.xml, HDFS-11695-HDFS-10285.001.patch, > HDFS-11695-HDFS-10285.002.patch, HDFS-11695-HDFS-10285.003.patch, > HDFS-11695-HDFS-10285.004.patch, HDFS-11695-HDFS-10285.005.patch > > > {noformat} > 2017-04-23 13:27:51,971 ERROR > org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode. > java.io.IOException: Cannot request to call satisfy storage policy on path > /ssl, as this file/dir was already called for satisfying storage policy. > at > org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.unprotectedSatisfyStoragePolicy(FSDirAttrOp.java:511) > at > org.apache.hadoop.hdfs.server.namenode.FSDirXAttrOp.unprotectedSetXAttrs(FSDirXAttrOp.java:284) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:918) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:241) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:150) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Resolved] (HDFS-11871) balance include Parameter Usage Error
[ https://issues.apache.org/jira/browse/HDFS-11871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang resolved HDFS-11871. Resolution: Not A Problem > balance include Parameter Usage Error > - > > Key: HDFS-11871 > URL: https://issues.apache.org/jira/browse/HDFS-11871 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover >Affects Versions: 2.7.3 >Reporter: kevy liu >Assignee: Weiwei Yang >Priority: Trivial > > [hadoop@bigdata-hdp-apache505 hadoop-2.7.2]$ bin/hdfs balancer -h > Usage: hdfs balancer > [-policy <policy>] the balancing policy: datanode or blockpool > [-threshold <threshold>] Percentage of disk capacity > [-exclude [-f <hosts-file> | <comma-separated list of hosts>]] > Excludes the specified datanodes. > [-include [-f <hosts-file> | <comma-separated list of hosts>]] > Includes only the specified datanodes. > [-idleiterations <idleiterations>] Number of consecutive idle > iterations (-1 for Infinite) before exit. > Parameter Description: > -f <hosts-file> | <comma-separated list of hosts> > The parse separator in the code is: > String[] nodes = line.split("[ \t\n\f\r]+"); -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
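The quoted separator explains the "Not A Problem" resolution: the hosts file is split on whitespace, not on commas. A standalone check of that behavior:
{code}
public class HostsSplitDemo {
  public static void main(String[] args) {
    // One line of a -f hosts file; the balancer splits on whitespace, so
    // several hosts per line work; commas belong only to the inline
    // <comma-separated list of hosts> form.
    String line = "dn1.example.com dn2.example.com\tdn3.example.com"; // example hosts
    for (String host : line.split("[ \t\n\f\r]+")) {
      System.out.println(host); // prints dn1..., dn2..., dn3... on separate lines
    }
  }
}
{code}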
[jira] [Commented] (HDFS-11846) Ozone: Potential http connection leaks in ozone clients
[ https://issues.apache.org/jira/browse/HDFS-11846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022171#comment-16022171 ] Weiwei Yang commented on HDFS-11846: Thanks [~xyao] for reviewing this. I uploaded the v2 patch, which has the following changes bq. OzoneClientUtils.java: Do we need a configure for HTTP Connection Timeout? Added another property {{OZONE_CLIENT_CONNECTION_TIMEOUT_MS}} for the connection timeout. bq. OzoneBucket.java: we should ensure the outPutStream is closed properly in the final section as well. I have fixed this line, and I have reviewed all streams in these classes and made sure they are closed in the finally block as well. bq. OzoneClient.java: unnecessary change. Fixed bq. OzoneClient.java: httppost-> httpPost? I have fixed all such names in {{OzoneClient}}, {{OzoneVolume}} and {{OzoneBucket}} to make sure they are all consistent. bq. OzoneClient.java: I think we should use PoolingHttpClientConnectionManager instead of creating a new connection for each request. I tried to use PoolingHttpClientConnectionManager but I found it doesn't work on the current code base. The problem was on the netty server side. It doesn't serve a reused http connection well; I think we need a new JIRA to fix that. Thank you. > Ozone: Potential http connection leaks in ozone clients > --- > > Key: HDFS-11846 > URL: https://issues.apache.org/jira/browse/HDFS-11846 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-11846-HDFS-7240.001.patch, > HDFS-11846-HDFS-7240.002.patch > > > There are several problems > # Http clients in {{OzoneVolume}}, {{OzoneBucket}} and {{OzoneClient}} are > created per request, per [Reuse of HttpClient > instance|http://hc.apache.org/httpclient-3.x/performance.html#Reuse_of_HttpClient_instance] > doc, proposed to reuse the http client instance to reduce the over head. > # Some resources in these classes were not properly cleaned up. E.g the http > connection, HttpGet/HttpPost requests. > > This jira's purpose is to fix these issues and investigate how we can improve > the client. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11846) Ozone: Potential http connection leaks in ozone clients
[ https://issues.apache.org/jira/browse/HDFS-11846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-11846: --- Attachment: HDFS-11846-HDFS-7240.002.patch > Ozone: Potential http connection leaks in ozone clients > --- > > Key: HDFS-11846 > URL: https://issues.apache.org/jira/browse/HDFS-11846 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-11846-HDFS-7240.001.patch, > HDFS-11846-HDFS-7240.002.patch > > > There are several problems > # Http clients in {{OzoneVolume}}, {{OzoneBucket}} and {{OzoneClient}} are > created per request; per the [Reuse of HttpClient > instance|http://hc.apache.org/httpclient-3.x/performance.html#Reuse_of_HttpClient_instance] > doc, it is proposed to reuse the http client instance to reduce the overhead. > # Some resources in these classes were not properly cleaned up, e.g. the http > connection and HttpGet/HttpPost requests. > > This jira's purpose is to fix these issues and investigate how we can improve > the client. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11597) Ozone: Add Ratis management API
[ https://issues.apache.org/jira/browse/HDFS-11597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HDFS-11597: --- Description: We need APIs to manage Ratis clusters for the following operations: - create cluster; - close cluster; - get members; and - update members. was: We need an API to manage raft clusters, e.g. - RaftClusterId createRaftCluster(MembershipConfiguration) - void closeRaftCluster(RaftClusterId) - MembershipConfiguration getMembers(RaftClusterId) - void changeMembership(RaftClusterId, newMembershipConfiguration) > Ozone: Add Ratis management API > --- > > Key: HDFS-11597 > URL: https://issues.apache.org/jira/browse/HDFS-11597 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze > Attachments: HDFS-11597-HDFS-7240.20170522.patch, > HDFS-11597-HDFS-7240.20170523.patch > > > We need APIs to manage Ratis clusters for the following operations: > - create cluster; > - close cluster; > - get members; and > - update members. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
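For illustration, a minimal Java sketch of the shape such a management API could take. All names here ({{RatisClusterManager}}, {{RaftClusterId}}, {{MembershipConfiguration}}) are placeholders echoing the old description, not the actual patch:
{code}
// Placeholder types are nested so the sketch is self-contained; a real API
// would define them properly (e.g. on top of Ratis peer/group types).
public interface RatisClusterManager {
  final class RaftClusterId { }
  final class MembershipConfiguration { }

  RaftClusterId createRaftCluster(MembershipConfiguration members);

  void closeRaftCluster(RaftClusterId clusterId);

  MembershipConfiguration getMembers(RaftClusterId clusterId);

  void changeMembership(RaftClusterId clusterId,
      MembershipConfiguration newMembers);
}
{code}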
[jira] [Updated] (HDFS-11597) Ozone: Add Ratis management API
[ https://issues.apache.org/jira/browse/HDFS-11597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HDFS-11597: --- Attachment: HDFS-11597-HDFS-7240.20170523.patch HDFS-11597-HDFS-7240.20170523.patch: implements getDatanodes and updateDatanodes; also adds new tests. > Ozone: Add Ratis management API > --- > > Key: HDFS-11597 > URL: https://issues.apache.org/jira/browse/HDFS-11597 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze > Attachments: HDFS-11597-HDFS-7240.20170522.patch, > HDFS-11597-HDFS-7240.20170523.patch > > > We need an API to manage raft clusters, e.g. > - RaftClusterId createRaftCluster(MembershipConfiguration) > - void closeRaftCluster(RaftClusterId) > - MembershipConfiguration getMembers(RaftClusterId) > - void changeMembership(RaftClusterId, newMembershipConfiguration) -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11837) Backport HDFS-9710 to branch-2.7: Change DN to send block receipt IBRs in batches
[ https://issues.apache.org/jira/browse/HDFS-11837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022153#comment-16022153 ] Konstantin Shvachko commented on HDFS-11837: Minor nit for your patch: you can remove the unused import ReplaceDatanodeOnFailure in TestBatchIbr. Otherwise +1. Will commit in a bit. > Backport HDFS-9710 to branch-2.7: Change DN to send block receipt IBRs in > batches > - > > Key: HDFS-11837 > URL: https://issues.apache.org/jira/browse/HDFS-11837 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Vinitha Reddy Gankidi >Assignee: Vinitha Reddy Gankidi > Attachments: HDFS-9710-branch-2.7.00.patch > > > As per the discussion on the [mailing > list|http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-dev/201705.mbox/browser], > backport HDFS-9710 to branch-2.7 -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11446) TestMaintenanceState#testWithNNAndDNRestart fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-11446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022136#comment-16022136 ] Hadoop QA commented on HDFS-11446: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 9s{color} | {color:red} HDFS-11446 does not apply to branch-2. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-11446 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12856421/HDFS-11446-branch-2.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19570/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestMaintenanceState#testWithNNAndDNRestart fails intermittently > > > Key: HDFS-11446 > URL: https://issues.apache.org/jira/browse/HDFS-11446 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Attachments: HDFS-11446.001.patch, HDFS-11446.002.patch, > HDFS-11446.003.patch, HDFS-11446-branch-2.patch > > > The test {{TestMaintenanceState#testWithNNAndDNRestart}} fails in trunk. The > stack info( > https://builds.apache.org/job/PreCommit-HDFS-Build/18423/testReport/ ): > {code} > java.lang.AssertionError: expected null, but was: for block BP-1367163238-172.17.0.2-1487836532907:blk_1073741825_1001: > expected 3, got 2 > ,DatanodeInfoWithStorage[127.0.0.1:42649,DS-c499e6ef-ce14-428b-baef-8cf2a122b248,DISK],DatanodeInfoWithStorage[127.0.0.1:40774,DS-cc484c09-6e32-4804-a337-2871f37b62e1,DISK],pending > block # 1 ,under replicated # 0 ,> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotNull(Assert.java:664) > at org.junit.Assert.assertNull(Assert.java:646) > at org.junit.Assert.assertNull(Assert.java:656) > at > org.apache.hadoop.hdfs.TestMaintenanceState.testWithNNAndDNRestart(TestMaintenanceState.java:731) > {code} > The failure seems to be due to a pending block that has not been replicated. We can bump > the retry times since sometimes the cluster would be busy. Also, we can use > {{GenericTestUtils#waitFor}} to simplify the current comparison logic. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11446) TestMaintenanceState#testWithNNAndDNRestart fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-11446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022130#comment-16022130 ] Hadoop QA commented on HDFS-11446: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 9s{color} | {color:red} HDFS-11446 does not apply to branch-2. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-11446 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12856421/HDFS-11446-branch-2.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19569/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestMaintenanceState#testWithNNAndDNRestart fails intermittently > > > Key: HDFS-11446 > URL: https://issues.apache.org/jira/browse/HDFS-11446 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Attachments: HDFS-11446.001.patch, HDFS-11446.002.patch, > HDFS-11446.003.patch, HDFS-11446-branch-2.patch > > > The test {{TestMaintenanceState#testWithNNAndDNRestart}} fails in trunk. The > stack info( > https://builds.apache.org/job/PreCommit-HDFS-Build/18423/testReport/ ): > {code} > java.lang.AssertionError: expected null, but was: for block BP-1367163238-172.17.0.2-1487836532907:blk_1073741825_1001: > expected 3, got 2 > ,DatanodeInfoWithStorage[127.0.0.1:42649,DS-c499e6ef-ce14-428b-baef-8cf2a122b248,DISK],DatanodeInfoWithStorage[127.0.0.1:40774,DS-cc484c09-6e32-4804-a337-2871f37b62e1,DISK],pending > block # 1 ,under replicated # 0 ,> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotNull(Assert.java:664) > at org.junit.Assert.assertNull(Assert.java:646) > at org.junit.Assert.assertNull(Assert.java:656) > at > org.apache.hadoop.hdfs.TestMaintenanceState.testWithNNAndDNRestart(TestMaintenanceState.java:731) > {code} > The failure seems to be due to a pending block that has not been replicated. We can bump > the retry times since sometimes the cluster would be busy. Also, we can use > {{GenericTestUtils#waitFor}} to simplify the current comparison logic. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-11872) Ozone : implement StorageContainerManager#getStorageContainerLocations
Chen Liang created HDFS-11872: - Summary: Ozone : implement StorageContainerManager#getStorageContainerLocations Key: HDFS-11872 URL: https://issues.apache.org/jira/browse/HDFS-11872 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone Reporter: Chen Liang Assignee: Chen Liang We should implement {{StorageContainerManager#getStorageContainerLocations}}. Although the comment says it will be moved to KSM, the functionality of container lookup by name should actually be part of SCM functionality. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11778) Ozone: KSM: add getBucketInfo
[ https://issues.apache.org/jira/browse/HDFS-11778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022078#comment-16022078 ] Xiaoyu Yao commented on HDFS-11778: --- Thanks [~nandakumar131] for working on this. The patch looks good to me overall. Here are some minor issues: DistributedStorageHandler.java Line 271-274: Currently the KSM#getBucketInfo returns KsmBucketInfo. But it does not contain any key/spaceUsage information of the bucket. We will need to revisit this for the keyCount and spaceUsed once putKey is added. I'm OK with this as-is for now. KeySpaceManagerProto.proto Line 192: I think we can just keep using the BucketInfo as the parameter for CreateBucketRequest. The add/remove ACL is mainly for setBucket. BucketInfo is good enough for CreateBucket/GetBucket. KeySpaceManagerProtocolClientSideTranslatorPB.java Line 269: Same as above. Line 311: the IOException message needs to be updated. KsmBucketArgs.java Line 37: bucketInfo has an aclList, which usually does not need to coexist with addAcls and removeAcls. Any reason for changing the current KsmBucketArgs? My understanding is BucketInfo/KsmBucketInfo are used for createBucket's input parameter and getBucket's output. BucketArgs/KsmBucketArgs are used solely for setBucket's input parameter. MetadataManagerImpl.java Line 103: NIT: the code below is self-explanatory; the extra comment can be removed. > Ozone: KSM: add getBucketInfo > - > > Key: HDFS-11778 > URL: https://issues.apache.org/jira/browse/HDFS-11778 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Nandakumar > Attachments: HDFS-11778-HDFS-7240.000.patch, > HDFS-11778-HDFS-7240.001.patch > > > Returns the bucket information if the bucket exists. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11865) Ozone: Do not initialize Ratis cluster during datanode startup
[ https://issues.apache.org/jira/browse/HDFS-11865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HDFS-11865: --- Attachment: HDFS-11865-HDFS-7240.20170523.patch HDFS-11865-HDFS-7240.20170523.patch: fixes checkstyle warnings. > Ozone: Do not initialize Ratis cluster during datanode startup > -- > > Key: HDFS-11865 > URL: https://issues.apache.org/jira/browse/HDFS-11865 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze > Attachments: HDFS-11865-HDFS-7240.20170522.patch, > HDFS-11865-HDFS-7240.20170523.patch > > > During datanode startup, we currently pass dfs.container.ratis.conf so that > the datanode is bound to a particular Ratis cluster. > In this JIRA, we change the datanode so that it is no longer bound to any > Ratis cluster during startup. We use the Ratis reinitialize request > (RATIS-86) to set up a Ratis cluster later on. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-2538) option to disable fsck dots
[ https://issues.apache.org/jira/browse/HDFS-2538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021982#comment-16021982 ] Konstantin Shvachko commented on HDFS-2538: --- Hey guys, I think if we reverse the default to print with dots, this is not an incompatible change. So let's keep it for now. Let me check with [~cwsteinbach] whether he remembers the context behind HDFS-7175 before a final decision. > option to disable fsck dots > > > Key: HDFS-2538 > URL: https://issues.apache.org/jira/browse/HDFS-2538 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 2.2.0 >Reporter: Allen Wittenauer >Assignee: Mohammad Kamrul Islam >Priority: Minor > Labels: newbie, release-blocker > Fix For: 3.0.0-alpha1 > > Attachments: HDFS-2538.1.patch, HDFS-2538.2.patch, HDFS-2538.3.patch, > HDFS-2538-branch-0.20-security-204.patch, > HDFS-2538-branch-0.20-security-204.patch, HDFS-2538-branch-1.0.patch, > HDFS-2538-branch-2.7.patch > > > this patch turns the dots during fsck off by default and provides an option > to turn them back on if you have a fetish for millions and millions of dots > on your terminal. i haven't done any benchmarks, but i suspect fsck is now > 300% faster to boot. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11655) Ozone: CLI: Guarantees user runs SCM commands has appropriate permission
[ https://issues.apache.org/jira/browse/HDFS-11655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021947#comment-16021947 ] Xiaoyu Yao commented on HDFS-11655: --- Thanks [~cheersyang] for reporting the issue and posting the fix. The permission check in the patch is done at the RPC layer. Note that these RPC methods may be invoked from other components such as KSM, the CBlock server, etc. We may not run all these components using the same super user. If we really want to enforce this at the RPC layer, we should have a whitelist instead of a single super user. If we enforce this only at the SCM Admin CLI, it should be fine to have a single super user, though. > Ozone: CLI: Guarantees user runs SCM commands has appropriate permission > > > Key: HDFS-11655 > URL: https://issues.apache.org/jira/browse/HDFS-11655 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Labels: command-line, security > Attachments: HDFS-11655-HDFS-7240.001.patch, > HDFS-11655-HDFS-7240.002.patch > > > We need to add a permission check module for ozone command line utilities, to > make sure users run commands with proper privileges. For now, commands in > [design doc| > https://issues.apache.org/jira/secure/attachment/12861478/storage-container-manager-cli-v002.pdf] > all require admin privilege. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
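To make the whitelist idea concrete, a minimal sketch (the method name and the idea of passing the whitelist in are assumptions for illustration; {{UserGroupInformation}} and {{AccessControlException}} are the real Hadoop classes):
{code}
import java.util.Collection;

import org.apache.hadoop.security.AccessControlException;
import org.apache.hadoop.security.UserGroupInformation;

public class ScmAdminAccessSketch {
  // Rejects callers whose short user name is not on the admin whitelist.
  static void checkAdminAccess(UserGroupInformation caller,
      Collection<String> adminWhitelist) throws AccessControlException {
    if (!adminWhitelist.contains(caller.getShortUserName())) {
      throw new AccessControlException("Access denied for user "
          + caller.getShortUserName() + ": SCM admin privilege is required");
    }
  }
}
{code}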
[jira] [Commented] (HDFS-11846) Ozone: Potential http connection leaks in ozone clients
[ https://issues.apache.org/jira/browse/HDFS-11846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021929#comment-16021929 ] Xiaoyu Yao commented on HDFS-11846: --- Thanks [~cheersyang] for reporting the issue and posting the fix. The patch looks good to me overall. Just a few minor issues below. 1. OzoneClientUtils.java Line 543-544: I see we are adding OZONE_CLIENT_SOCKET_TIMEOUT_MS for HTTP request execution. Do we need a config for the HTTP connection timeout? 2. OzoneBucket.java Line 325: we should ensure the outPutStream is closed properly in a finally block as well. {code} ByteArrayOutputStream outPutStream = new ByteArrayOutputStream(); {code} 3. OzoneClient.java NIT: line 63-64: unnecessary change. NIT: line 144: httppost -> httpPost? Similar naming for other variables like httpget/httpdelete etc., for consistency. Line 619: As commented in the TODO, I think we should use PoolingHttpClientConnectionManager instead of creating a new connection for each request. Please open a ticket for adding OzoneClientUtils.getHttpClient with PoolingHttpClientConnectionManager if you don't plan to fix that with this JIRA. > Ozone: Potential http connection leaks in ozone clients > --- > > Key: HDFS-11846 > URL: https://issues.apache.org/jira/browse/HDFS-11846 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-11846-HDFS-7240.001.patch > > > There are several problems > # Http clients in {{OzoneVolume}}, {{OzoneBucket}} and {{OzoneClient}} are > created per request; per the [Reuse of HttpClient > instance|http://hc.apache.org/httpclient-3.x/performance.html#Reuse_of_HttpClient_instance] > doc, it is proposed to reuse the http client instance to reduce the overhead. > # Some resources in these classes were not properly cleaned up, e.g. the http > connection and HttpGet/HttpPost requests. > > This jira's purpose is to fix these issues and investigate how we can improve > the client. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
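For reference, a minimal, self-contained sketch of the cleanup pattern requested in point 2; {{executeGetObject}} here is a hypothetical stand-in for the request-executing method:
{code}
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class CloseInFinallySketch {
  // Hypothetical stand-in for the method that writes the HTTP response body.
  static void executeGetObject(ByteArrayOutputStream out) throws IOException {
    out.write("response".getBytes("UTF-8"));
  }

  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream outPutStream = new ByteArrayOutputStream();
    try {
      executeGetObject(outPutStream);
      System.out.println(outPutStream.toString("UTF-8"));
    } finally {
      outPutStream.close(); // runs even if executeGetObject throws
    }
  }
}
{code}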
[jira] [Comment Edited] (HDFS-11695) [SPS]: Namenode failed to start while loading SPS xAttrs from the edits log.
[ https://issues.apache.org/jira/browse/HDFS-11695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020653#comment-16020653 ] Uma Maheswara Rao G edited comment on HDFS-11695 at 5/23/17 9:13 PM: - Good work Surendra. I have just pushed this to branch. was (Author: umamaheswararao): Good work Surendra. I have just pushed this to trunk. > [SPS]: Namenode failed to start while loading SPS xAttrs from the edits log. > > > Key: HDFS-11695 > URL: https://issues.apache.org/jira/browse/HDFS-11695 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: HDFS-10285 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore >Priority: Blocker > Fix For: HDFS-10285 > > Attachments: fsimage.xml, HDFS-11695-HDFS-10285.001.patch, > HDFS-11695-HDFS-10285.002.patch, HDFS-11695-HDFS-10285.003.patch, > HDFS-11695-HDFS-10285.004.patch, HDFS-11695-HDFS-10285.005.patch > > > {noformat} > 2017-04-23 13:27:51,971 ERROR > org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode. > java.io.IOException: Cannot request to call satisfy storage policy on path > /ssl, as this file/dir was already called for satisfying storage policy. > at > org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.unprotectedSatisfyStoragePolicy(FSDirAttrOp.java:511) > at > org.apache.hadoop.hdfs.server.namenode.FSDirXAttrOp.unprotectedSetXAttrs(FSDirXAttrOp.java:284) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:918) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:241) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:150) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-5042) Completed files lost after power failure
[ https://issues.apache.org/jira/browse/HDFS-5042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021837#comment-16021837 ] Hadoop QA commented on HDFS-5042: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 38s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 22s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 36s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 57s{color} | {color:green} root: The patch generated 0 new + 307 unchanged - 1 fixed = 307 total (was 308) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 41s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 39s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 39s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}144m 29s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-5042 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869501/HDFS-5042-03.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux a63fbf0bd5e5 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 52661e0 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/19565/artifact/patchprocess/branch-
[jira] [Commented] (HDFS-11754) Make FsServerDefaults cache configurable.
[ https://issues.apache.org/jira/browse/HDFS-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021788#comment-16021788 ] Hadoop QA commented on HDFS-11754: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 33s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 34s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 39s{color} | {color:orange} hadoop-hdfs-project: The patch generated 18 new + 146 unchanged - 0 fixed = 164 total (was 146) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 12s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 4s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}127m 45s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure | | | hadoop.hdfs.TestEncryptionZones | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11754 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869504/HDFS-11754.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 6f858529a1a4 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 52661e0 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/19566/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt | | unit | https://builds.apache.org/job/P
[jira] [Commented] (HDFS-10785) libhdfs++: Implement the rest of the tools
[ https://issues.apache.org/jira/browse/HDFS-10785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021760#comment-16021760 ] Hadoop QA commented on HDFS-10785: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} docker {color} | {color:red} 3m 57s{color} | {color:red} Docker failed to build yetus/hadoop:78fc6b6. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-10785 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869515/HDFS-10785.HDFS-8707.008.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19567/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > libhdfs++: Implement the rest of the tools > -- > > Key: HDFS-10785 > URL: https://issues.apache.org/jira/browse/HDFS-10785 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Anatoli Shein >Assignee: Anatoli Shein > Attachments: HDFS-10785.HDFS-8707.000.patch, > HDFS-10785.HDFS-8707.001.patch, HDFS-10785.HDFS-8707.002.patch, > HDFS-10785.HDFS-8707.003.patch, HDFS-10785.HDFS-8707.004.patch, > HDFS-10785.HDFS-8707.005.patch, HDFS-10785.HDFS-8707.006.patch, > HDFS-10785.HDFS-8707.007.patch, HDFS-10785.HDFS-8707.008.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Resolved] (HDFS-11599) distcp interrupt does not kill hadoop job
[ https://issues.apache.org/jira/browse/HDFS-11599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer resolved HDFS-11599. - Resolution: Not A Problem Yes. The program running on the command line is just a client after job launch. To kill the program actually doing the work, you'll need to use the yarn or mapred commands. Closing as "Not a problem" > distcp interrupt does not kill hadoop job > - > > Key: HDFS-11599 > URL: https://issues.apache.org/jira/browse/HDFS-11599 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.3 >Reporter: David Fagnan > > A keyboard interrupt, for example, leaves the hadoop job & copy still running; is > this intended behavior? -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
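For anyone hitting the same issue: once the job has been submitted, the still-running copy can be stopped with the standard job-control commands, e.g. {{mapred job -kill <job-id>}} or {{yarn application -kill <application-id>}}.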
[jira] [Updated] (HDFS-10785) libhdfs++: Implement the rest of the tools
[ https://issues.apache.org/jira/browse/HDFS-10785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anatoli Shein updated HDFS-10785: - Attachment: HDFS-10785.HDFS-8707.008.patch Another attempt for this patch, since Yetus failed. > libhdfs++: Implement the rest of the tools > -- > > Key: HDFS-10785 > URL: https://issues.apache.org/jira/browse/HDFS-10785 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Anatoli Shein >Assignee: Anatoli Shein > Attachments: HDFS-10785.HDFS-8707.000.patch, > HDFS-10785.HDFS-8707.001.patch, HDFS-10785.HDFS-8707.002.patch, > HDFS-10785.HDFS-8707.003.patch, HDFS-10785.HDFS-8707.004.patch, > HDFS-10785.HDFS-8707.005.patch, HDFS-10785.HDFS-8707.006.patch, > HDFS-10785.HDFS-8707.007.patch, HDFS-10785.HDFS-8707.008.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11472) Fix inconsistent replica size after a data pipeline failure
[ https://issues.apache.org/jira/browse/HDFS-11472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021679#comment-16021679 ] Erik Krogen commented on HDFS-11472: Hey [~jojochuang], trying to make sure I understand this patch. IIUC we basically want to address the potential discrepancy between what is actually on disk and the in-memory idea of what is on disk ({{getBytesOnDisk()}}). The changes to {{FsDatasetImpl#recoverRbwImpl()}} seem reasonable and the test seems good. I'm less sure of the change to {{FsDatasetImpl#initReplicaRecoveryImpl()}}. If the actual number of bytes on disk is less than {{getVisibleLength()}}, we should throw an error, right? Currently this may not be the case if we only WARN about {{getBytesOnDisk() < getVisibleLength()}}. It seems in that case we should then check {{getBlockDataLength() < getVisibleLength()}}. > Fix inconsistent replica size after a data pipeline failure > --- > > Key: HDFS-11472 > URL: https://issues.apache.org/jira/browse/HDFS-11472 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Critical > Labels: release-blocker > Attachments: HDFS-11472.001.patch, HDFS-11472.testcase.patch > > > We observed a case where a replica's on disk length is less than acknowledged > length, breaking the assumption in recovery code. > {noformat} > 2017-01-08 01:41:03,532 WARN > org.apache.hadoop.hdfs.server.protocol.InterDatanodeProtocol: Failed to > obtain replica info for block > (=BP-947993742-10.204.0.136-1362248978912:blk_2526438952_1101394519586) from > datanode (=DatanodeInfoWithStorage[10.204.138.17:1004,null,null]) > java.io.IOException: THIS IS NOT SUPPOSED TO HAPPEN: getBytesOnDisk() < > getVisibleLength(), rip=ReplicaBeingWritten, blk_2526438952_1101394519586, RBW > getNumBytes() = 27530 > getBytesOnDisk() = 27006 > getVisibleLength()= 27268 > getVolume() = /data/6/hdfs/datanode/current > getBlockFile()= > /data/6/hdfs/datanode/current/BP-947993742-10.204.0.136-1362248978912/current/rbw/blk_2526438952 > bytesAcked=27268 > bytesOnDisk=27006 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2284) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2260) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2566) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.callInitReplicaRecovery(DataNode.java:2577) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:2645) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.access$400(DataNode.java:245) > at > org.apache.hadoop.hdfs.server.datanode.DataNode$5.run(DataNode.java:2551) > at java.lang.Thread.run(Thread.java:745) > {noformat} > It turns out that if an exception is thrown within > {{BlockReceiver#receivePacket}}, the in-memory replica on disk length may not > be updated, but the data is written to disk anyway. 
> For example, here's one exception we observed > {noformat} > 2017-01-08 01:40:59,512 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: > Exception for > BP-947993742-10.204.0.136-1362248978912:blk_2526438952_1101394499067 > java.nio.channels.ClosedByInterruptException > at > java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) > at sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:269) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.adjustCrcChannelPosition(FsDatasetImpl.java:1484) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.adjustCrcFilePosition(BlockReceiver.java:994) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:670) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:857) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:797) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:244) > at java.lang.Thread.run(Thread.java:745) > {noformat} > There are potentially other places and causes where an exception is thrown > within {{BlockReceiver#receivePacket}}, so it may not make much sense to > alleviate it for this particular exception. Instead, we s
[jira] [Commented] (HDFS-11661) GetContentSummary uses excessive amounts of memory
[ https://issues.apache.org/jira/browse/HDFS-11661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021680#comment-16021680 ] Wei-Chiu Chuang commented on HDFS-11661: Heads up: I am going to revert both commits EOD. I believe I've got a sufficient number of upvotes. > GetContentSummary uses excessive amounts of memory > -- > > Key: HDFS-11661 > URL: https://issues.apache.org/jira/browse/HDFS-11661 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.8.0, 3.0.0-alpha2 >Reporter: Nathan Roberts >Assignee: Wei-Chiu Chuang >Priority: Blocker > Attachments: HDFS-11661.001.patch, HDFs-11661.002.patch, Heap > growth.png > > > ContentSummaryComputationContext::nodeIncluded() is being used to keep track > of all INodes visited during the current content summary calculation. This > can be all of the INodes in the filesystem, making for a VERY large hash > table. This simply won't work on large filesystems. > We noticed this after upgrading: a namenode with ~100 million filesystem > objects was spending significantly more time in GC. Fortunately this system > had some memory breathing room; other clusters we have will not run with this > additional demand on memory. > This was added as part of HDFS-10797 as a way of keeping track of INodes that > have already been accounted for - to avoid double counting. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11851) getGlobalJNIEnv() may deadlock if exception is thrown
[ https://issues.apache.org/jira/browse/HDFS-11851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021669#comment-16021669 ] Sailesh Mukil commented on HDFS-11851: -- [~jzhuge] Apologies for the slow response. It seems non-trivial to add tests for this fix. One way would be to add some test-only functions that expose these mutexes to the test files, and then try to recursively lock and unlock them. What do you think? > getGlobalJNIEnv() may deadlock if exception is thrown > - > > Key: HDFS-11851 > URL: https://issues.apache.org/jira/browse/HDFS-11851 > Project: Hadoop HDFS > Issue Type: Bug > Components: libhdfs >Affects Versions: 3.0.0-alpha3 >Reporter: Henry Robinson >Assignee: Sailesh Mukil >Priority: Blocker > Attachments: HDFS-11851.000.patch, HDFS-11851.001.patch > > > HDFS-11529 introduced a deadlock into {{getGlobalJNIEnv()}} if an exception > is thrown. {{getGlobalJNIEnv()}} holds {{jvmMutex}}, but > {{printExceptionAndFree()}} will eventually try to acquire that lock in > {{setTLSExceptionStrings()}}. > The exception might get caught from {{loadFileSystems}}: > {code} > jthr = invokeMethod(env, NULL, STATIC, NULL, > "org/apache/hadoop/fs/FileSystem", > "loadFileSystems", "()V"); > if (jthr) { > printExceptionAndFree(env, jthr, PRINT_EXC_ALL, > "loadFileSystems"); > } > } > {code} > and here are the relevant parts of the stack trace from where I call this API > in Impala, which uses {{libhdfs}}: > {code} > #0 __lll_lock_wait () at > ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135 > #1 0x74a8d657 in _L_lock_909 () from > /lib/x86_64-linux-gnu/libpthread.so.0 > #2 0x74a8d480 in __GI___pthread_mutex_lock (mutex=0x47ce960 > ) at ../nptl/pthread_mutex_lock.c:79 > #3 0x02f06056 in mutexLock (m=) at > /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/mutexes.c:28 > #4 0x02efe817 in setTLSExceptionStrings (rootCause=0x0, > stackTrace=0x0) at > /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.c:581 > #5 0x02f065d7 in printExceptionAndFreeV (env=0x513c1e8, > exc=0x508a8c0, noPrintFlags=, fmt=0x34349cf "loadFileSystems", > ap=0x7fffb660) > at > /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/exception.c:183 > #6 0x02f0683d in printExceptionAndFree (env=, > exc=, noPrintFlags=, fmt=) > at > /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/exception.c:213 > #7 0x02eff60f in getGlobalJNIEnv () at > /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.c:463 > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
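To make the failure mode concrete, a minimal Java analogue of the deadlock described above (illustrative only; the real code is C with pthread mutexes): a thread that already holds a non-reentrant lock blocks forever when it tries to acquire the same lock again.
{code}
import java.util.concurrent.Semaphore;

public class SelfDeadlockDemo {
  public static void main(String[] args) throws InterruptedException {
    Semaphore jvmMutex = new Semaphore(1); // behaves like a non-recursive mutex
    jvmMutex.acquire();                    // as when getGlobalJNIEnv() takes jvmMutex
    System.out.println("lock held; the second acquire will block forever");
    jvmMutex.acquire();                    // as when setTLSExceptionStrings() re-acquires it
    System.out.println("never reached");   // deadlock: this line never runs
  }
}
{code}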
[jira] [Commented] (HDFS-5970) callers of NetworkTopology's chooseRandom method to expect null return value
[ https://issues.apache.org/jira/browse/HDFS-5970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021654#comment-16021654 ] zhangyubiao commented on HDFS-5970: --- [~olegd], what did you do to reproduce this? > callers of NetworkTopology's chooseRandom method to expect null return value > > > Key: HDFS-5970 > URL: https://issues.apache.org/jira/browse/HDFS-5970 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 3.0.0-alpha1 >Reporter: Yongjun Zhang >Priority: Minor > > Class NetworkTopology's method >public Node chooseRandom(String scope) > calls >private Node chooseRandom(String scope, String excludedScope) > which may return a null value. > Callers of this method, such as BlockPlacementPolicyDefault etc., need to be > aware of that. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11837) Backport HDFS-9710 to branch-2.7: Change DN to send block receipt IBRs in batches
[ https://issues.apache.org/jira/browse/HDFS-11837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021617#comment-16021617 ] Vinitha Reddy Gankidi commented on HDFS-11837: -- [~shv] Please take a look. I've verified that all these tests pass locally. > Backport HDFS-9710 to branch-2.7: Change DN to send block receipt IBRs in > batches > - > > Key: HDFS-11837 > URL: https://issues.apache.org/jira/browse/HDFS-11837 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Vinitha Reddy Gankidi >Assignee: Vinitha Reddy Gankidi > Attachments: HDFS-9710-branch-2.7.00.patch > > > As per the discussion on the [mailing > list|http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-dev/201705.mbox/browser], > backport HDFS-9710 to branch-2.7 -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10785) libhdfs++: Implement the rest of the tools
[ https://issues.apache.org/jira/browse/HDFS-10785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021605#comment-16021605 ] Hadoop QA commented on HDFS-10785: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} docker {color} | {color:red} 3m 7s{color} | {color:red} Docker failed to build yetus/hadoop:78fc6b6. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-10785 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869500/HDFS-10785.HDFS-8707.007.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19564/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > libhdfs++: Implement the rest of the tools > -- > > Key: HDFS-10785 > URL: https://issues.apache.org/jira/browse/HDFS-10785 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Anatoli Shein >Assignee: Anatoli Shein > Attachments: HDFS-10785.HDFS-8707.000.patch, > HDFS-10785.HDFS-8707.001.patch, HDFS-10785.HDFS-8707.002.patch, > HDFS-10785.HDFS-8707.003.patch, HDFS-10785.HDFS-8707.004.patch, > HDFS-10785.HDFS-8707.005.patch, HDFS-10785.HDFS-8707.006.patch, > HDFS-10785.HDFS-8707.007.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11866) JournalNode Sync should be off by default in hdfs-default.xml
[ https://issues.apache.org/jira/browse/HDFS-11866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021588#comment-16021588 ] Hanisha Koneru commented on HDFS-11866: --- Thanks [~arpitagarwal] for committing the patch. > JournalNode Sync should be off by default in hdfs-default.xml > - > > Key: HDFS-11866 > URL: https://issues.apache.org/jira/browse/HDFS-11866 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11866.001.patch > > > dfs.journalnode.enable.sync is set to true in hdfs-default.xml. It should be > set to false to disable the feature by default, as discussed in HDFS-4025. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11754) Make FsServerDefaults cache configurable.
[ https://issues.apache.org/jira/browse/HDFS-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021586#comment-16021586 ] Mikhail Erofeev commented on HDFS-11754: Thanks for looking into my patch! [~shahrs87], I did mostly as you suggested. I haven't figured out how to mock the internals of DFSClient without extensive rewriting, so I had to restart the namenode many times. I'm a little bit worried that these tests add ~1 second to the testing time. [~surendrasingh], [~shahrs87], mind reviewing again? Thanks. > Make FsServerDefaults cache configurable. > - > > Key: HDFS-11754 > URL: https://issues.apache.org/jira/browse/HDFS-11754 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.6.0 >Reporter: Rushabh S Shah >Assignee: Mikhail Erofeev >Priority: Minor > Labels: newbie > Fix For: 2.9.0 > > Attachments: HDFS-11754.001.patch, HDFS-11754.002.patch, > HDFS-11754.003.patch > > > DFSClient caches the result of FsServerDefaults for 60 minutes. > But the 60 minutes time is not configurable. > Continuing the discussion from HDFS-11702, it would be nice if we can make > this configurable and make the default as 60 minutes. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
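For illustration, the kind of knob being added, shown as a client-side override. This is a minimal sketch: the property name below is a guess for illustration, not necessarily the key the patch finally introduces.
{code}
import org.apache.hadoop.conf.Configuration;

public class ServerDefaultsCacheConfigDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Hypothetical property name; the patch makes the cache TTL configurable,
    // keeping the current hard-coded default of 60 minutes.
    conf.setLong("dfs.client.server-defaults.validity.period.ms",
        60 * 60 * 1000L);
    System.out.println(conf.get("dfs.client.server-defaults.validity.period.ms"));
  }
}
{code}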
[jira] [Updated] (HDFS-11859) Ozone: SCM: Separate BlockLocationProtocol from ContainerLocationProtocol
[ https://issues.apache.org/jira/browse/HDFS-11859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-11859: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-7240 Target Version/s: HDFS-7240 Status: Resolved (was: Patch Available) Thanks [~vagarychen] for the review. I've committed the fix to the feature branch. > Ozone: SCM: Separate BlockLocationProtocol from ContainerLocationProtocol > - > > Key: HDFS-11859 > URL: https://issues.apache.org/jira/browse/HDFS-11859 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Chen Liang >Assignee: Xiaoyu Yao > Fix For: HDFS-7240 > > Attachments: HDFS-11859-HDFS-7240.001.patch, > HDFS-11859-HDFS-7240.002.patch, HDFS-11859-HDFS-7240.003.patch, > HDFS-11859-HDFS-7240.004.patch, HDFS-11859-HDFS-7240.005.patch, > HDFS-11859-HDFS-7240.006.patch, HDFS-11859-HDFS-7240.007.patch > > > Currently StorageContainerLocationProtocol contains two types of operations: container-related > operations and block-related operations. Although there is > {{ScmBlockLocationProtocol}} for block operations, only > {{StorageContainerLocationProtocolServerSideTranslatorPB}} makes the > distinction. > This JIRA tries to make the separation complete and thorough in all places. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11754) Make FsServerDefaults cache configurable.
[ https://issues.apache.org/jira/browse/HDFS-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Erofeev updated HDFS-11754: --- Attachment: HDFS-11754.003.patch > Make FsServerDefaults cache configurable. > - > > Key: HDFS-11754 > URL: https://issues.apache.org/jira/browse/HDFS-11754 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.6.0 >Reporter: Rushabh S Shah >Assignee: Mikhail Erofeev >Priority: Minor > Labels: newbie > Fix For: 2.9.0 > > Attachments: HDFS-11754.001.patch, HDFS-11754.002.patch, > HDFS-11754.003.patch > > > DFSClient caches the result of FsServerDefaults for 60 minutes. > But the 60 minutes time is not configurable. > Continuing the discussion from HDFS-11702, it would be nice if we can make > this configurable and make the default as 60 minutes. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11727) Block Storage: Retry Blocks should be requeued when cblock is restarted
[ https://issues.apache.org/jira/browse/HDFS-11727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021542#comment-16021542 ] Chen Liang commented on HDFS-11727: --- The v002 patch looks pretty good to me; will commit this shortly. This is a fairly complex change, thanks [~msingh] for the contribution! > Block Storage: Retry Blocks should be requeued when cblock is restarted > --- > > Key: HDFS-11727 > URL: https://issues.apache.org/jira/browse/HDFS-11727 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh > Attachments: HDFS-11727-HDFS-7240.001.patch, > HDFS-11727-HDFS-7240.002.patch > > > Currently, blocks which could not be written to the container because of some issue > are maintained in retryLog files. However, these files are not requeued > after a restart. > This change will requeue retry log files on restart, fix some > other minor issues with retry logs, and add some new counters. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11859) Ozone: SCM: Separate BlockLocationProtocol from ContainerLocationProtocol
[ https://issues.apache.org/jira/browse/HDFS-11859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-11859: -- Summary: Ozone: SCM: Separate BlockLocationProtocol from ContainerLocationProtocol (was: Ozone : separate blockLocationProtocol out of containerLocationProtocol) > Ozone: SCM: Separate BlockLocationProtocol from ContainerLocationProtocol > - > > Key: HDFS-11859 > URL: https://issues.apache.org/jira/browse/HDFS-11859 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Chen Liang >Assignee: Xiaoyu Yao > Attachments: HDFS-11859-HDFS-7240.001.patch, > HDFS-11859-HDFS-7240.002.patch, HDFS-11859-HDFS-7240.003.patch, > HDFS-11859-HDFS-7240.004.patch, HDFS-11859-HDFS-7240.005.patch, > HDFS-11859-HDFS-7240.006.patch, HDFS-11859-HDFS-7240.007.patch > > > Currently StorageContainerLocationProtocol contains two types of operations: container-related > operations and block-related operations. Although there is > {{ScmBlockLocationProtocol}} for block operations, only > {{StorageContainerLocationProtocolServerSideTranslatorPB}} makes the > distinction. > This JIRA tries to make the separation complete and thorough in all places. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-5042) Completed files lost after power failure
[ https://issues.apache.org/jira/browse/HDFS-5042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021531#comment-16021531 ] Vinayakumar B commented on HDFS-5042: - The directory sync mentioned above will be called on block close() if sync_on_close is configured. > Completed files lost after power failure > > > Key: HDFS-5042 > URL: https://issues.apache.org/jira/browse/HDFS-5042 > Project: Hadoop HDFS > Issue Type: Bug > Environment: ext3 on CentOS 5.7 (kernel 2.6.18-274.el5) >Reporter: Dave Latham >Assignee: Vinayakumar B >Priority: Critical > Attachments: HDFS-5042-01.patch, HDFS-5042-02.patch, > HDFS-5042-03.patch, HDFS-5042-branch-2-01.patch > > > We suffered a cluster wide power failure after which HDFS lost data that it > had acknowledged as closed and complete. > The client was HBase which compacted a set of HFiles into a new HFile, then > after closing the file successfully, deleted the previous versions of the > file. The cluster then lost power, and when brought back up the newly > created file was marked CORRUPT. > Based on reading the logs it looks like the replicas were created by the > DataNodes in the 'blocksBeingWritten' directory. Then when the file was > closed they were moved to the 'current' directory. After the power cycle > those replicas were again in the blocksBeingWritten directory of the > underlying file system (ext3). When those DataNodes reported in to the > NameNode it deleted those replicas and lost the file. > Some possible fixes could be having the DataNode fsync the directory(s) after > moving the block from blocksBeingWritten to current to ensure the rename is > durable or having the NameNode accept replicas from blocksBeingWritten under > certain circumstances. > Log snippets from RS (RegionServer), NN (NameNode), DN (DataNode): > {noformat} > RS 2013-06-29 11:16:06,812 DEBUG org.apache.hadoop.hbase.util.FSUtils: > Creating > file=hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c > with permission=rwxrwxrwx > NN 2013-06-29 11:16:06,830 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* > NameSystem.allocateBlock: > /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c. 
> blk_1395839728632046111_357084589 > DN 2013-06-29 11:16:06,832 INFO > org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block > blk_1395839728632046111_357084589 src: /10.0.5.237:14327 dest: > /10.0.5.237:50010 > NN 2013-06-29 11:16:11,370 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* > NameSystem.addStoredBlock: blockMap updated: 10.0.6.1:50010 is added to > blk_1395839728632046111_357084589 size 25418340 > NN 2013-06-29 11:16:11,370 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* > NameSystem.addStoredBlock: blockMap updated: 10.0.6.24:50010 is added to > blk_1395839728632046111_357084589 size 25418340 > NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* > NameSystem.addStoredBlock: blockMap updated: 10.0.5.237:50010 is added to > blk_1395839728632046111_357084589 size 25418340 > DN 2013-06-29 11:16:11,385 INFO > org.apache.hadoop.hdfs.server.datanode.DataNode: Received block > blk_1395839728632046111_357084589 of size 25418340 from /10.0.5.237:14327 > DN 2013-06-29 11:16:11,385 INFO > org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 2 for block > blk_1395839728632046111_357084589 terminating > NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: Removing > lease on file > /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c > from client DFSClient_hb_rs_hs745,60020,1372470111932 > NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: DIR* > NameSystem.completeFile: file > /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c > is closed by DFSClient_hb_rs_hs745,60020,1372470111932 > RS 2013-06-29 11:16:11,393 INFO org.apache.hadoop.hbase.regionserver.Store: > Renaming compacted file at > hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c > to > hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/n/6e0cc30af6e64e56ba5a539fdf159c4c > RS 2013-06-29 11:16:11,505 INFO org.apache.hadoop.hbase.regionserver.Store: > Completed major compaction of 7 file(s) in n of > users-6,\x12\xBDp\xA3,1359426311784.b5b0820cde759ae68e333b2f4015bb7e. into > 6e0cc30af6e64e56ba5a539fdf159c4c, size=24.2m; total size for store is 24.2m > --- CRASH, RESTART - > NN 2013-06-29 12:01:19,743 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* > NameSystem.addStoredBlock: addStoredBlock request received for > blk_1395839728632046111_357084589 on 10.0.6.1:50010 size 21978112
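The first fix proposed in the description above (fsync the directory after moving a block from blocksBeingWritten to current) can be sketched with plain Java NIO. This is a hedged illustration, not the actual HDFS-5042 patch: on POSIX file systems such as ext3, calling force(true) on a read-only channel opened over the directory issues the fsync that makes the rename durable; the trick is platform-dependent and fails on Windows.
{code}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

// Hedged sketch: rename a finalized replica, then fsync the destination
// directory so the rename itself survives a power failure.
public final class DurableRename {
  private DurableRename() {}

  public static void moveAndSync(Path src, Path dstDir) throws IOException {
    Path dst = dstDir.resolve(src.getFileName());
    Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE);
    try (FileChannel dir = FileChannel.open(dstDir, StandardOpenOption.READ)) {
      dir.force(true); // persist the directory entry, not just file data
    }
  }
}
{code}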
[jira] [Commented] (HDFS-11726) [SPS] : StoragePolicySatisfier should not select same storage type as source and destination in same datanode.
[ https://issues.apache.org/jira/browse/HDFS-11726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021532#comment-16021532 ] Rakesh R commented on HDFS-11726: - I'm wondering how both the src & destin storage types can be {{ARCHIVE}} within the same {{127.0.0.1:41699}}. The following is from your shared logs: {code} from src:127.0.0.1:41699 to destin:127.0.0.1:41699 to satisfy storageType, sourceStoragetype:ARCHIVE and destinStoragetype:ARCHIVE {code} > [SPS] : StoragePolicySatisfier should not select same storage type as source > and destination in same datanode. > -- > > Key: HDFS-11726 > URL: https://issues.apache.org/jira/browse/HDFS-11726 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: HDFS-10285 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore > > {code} > 2017-04-30 16:12:28,569 [BlockMoverTask-0] INFO > datanode.StoragePolicySatisfyWorker (Worker.java:moveBlock(248)) - Start > moving block:blk_1073741826_1002 from src:127.0.0.1:41699 to > destin:127.0.0.1:41699 to satisfy storageType, sourceStoragetype:ARCHIVE and > destinStoragetype:ARCHIVE > {code} > {code} > 2017-04-30 16:12:28,571 [DataXceiver for client /127.0.0.1:36428 [Replacing > block BP-1409501412-127.0.1.1-1493548923222:blk_1073741826_1002 from > 6c7aa66e-a778-43d5-89f6-053d5f6b35bc]] INFO datanode.DataNode > (DataXceiver.java:replaceBlock(1202)) - opReplaceBlock > BP-1409501412-127.0.1.1-1493548923222:blk_1073741826_1002 received exception > org.apache.hadoop.hdfs.server.datanode.ReplicaAlreadyExistsException: Replica > FinalizedReplica, blk_1073741826_1002, FINALIZED > getNumBytes() = 1024 > getBytesOnDisk() = 1024 > getVisibleLength()= 1024 > getVolume() = > /home/sachin/software/hadoop/HDFS-10285/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data7 > getBlockURI() = > file:/home/sachin/software/hadoop/HDFS-10285/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data7/current/BP-1409501412-127.0.1.1-1493548923222/current/finalized/subdir0/subdir0/blk_1073741826 > already exists on storage ARCHIVE > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
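A hedged sketch of the guard this issue asks for: the satisfier should reject any scheduled move whose source and target share both the datanode and the storage type. The string parameters are simplified stand-ins for DatanodeInfo and StorageType.
{code}
// Hedged sketch: validate a block move before dispatching it.
final class MoveTargetValidator {
  // A move is useful only if it changes the node, the storage type, or both.
  static boolean isValidMove(String srcNode, String srcType,
                             String dstNode, String dstType) {
    return !(srcNode.equals(dstNode) && srcType.equals(dstType));
  }

  public static void main(String[] args) {
    // The case from the log above: same node, same ARCHIVE type -> rejected.
    System.out.println(isValidMove(
        "127.0.0.1:41699", "ARCHIVE", "127.0.0.1:41699", "ARCHIVE")); // false
  }
}
{code}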
[jira] [Assigned] (HDFS-5042) Completed files lost after power failure
[ https://issues.apache.org/jira/browse/HDFS-5042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinayakumar B reassigned HDFS-5042: --- Assignee: Vinayakumar B > Completed files lost after power failure > > > Key: HDFS-5042 > URL: https://issues.apache.org/jira/browse/HDFS-5042 > Project: Hadoop HDFS > Issue Type: Bug > Environment: ext3 on CentOS 5.7 (kernel 2.6.18-274.el5) >Reporter: Dave Latham >Assignee: Vinayakumar B >Priority: Critical > Attachments: HDFS-5042-01.patch, HDFS-5042-02.patch, > HDFS-5042-03.patch, HDFS-5042-branch-2-01.patch > > > We suffered a cluster wide power failure after which HDFS lost data that it > had acknowledged as closed and complete. > The client was HBase which compacted a set of HFiles into a new HFile, then > after closing the file successfully, deleted the previous versions of the > file. The cluster then lost power, and when brought back up the newly > created file was marked CORRUPT. > Based on reading the logs it looks like the replicas were created by the > DataNodes in the 'blocksBeingWritten' directory. Then when the file was > closed they were moved to the 'current' directory. After the power cycle > those replicas were again in the blocksBeingWritten directory of the > underlying file system (ext3). When those DataNodes reported in to the > NameNode it deleted those replicas and lost the file. > Some possible fixes could be having the DataNode fsync the directory(s) after > moving the block from blocksBeingWritten to current to ensure the rename is > durable or having the NameNode accept replicas from blocksBeingWritten under > certain circumstances. > Log snippets from RS (RegionServer), NN (NameNode), DN (DataNode): > {noformat} > RS 2013-06-29 11:16:06,812 DEBUG org.apache.hadoop.hbase.util.FSUtils: > Creating > file=hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c > with permission=rwxrwxrwx > NN 2013-06-29 11:16:06,830 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* > NameSystem.allocateBlock: > /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c. 
> blk_1395839728632046111_357084589 > DN 2013-06-29 11:16:06,832 INFO > org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block > blk_1395839728632046111_357084589 src: /10.0.5.237:14327 dest: > /10.0.5.237:50010 > NN 2013-06-29 11:16:11,370 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* > NameSystem.addStoredBlock: blockMap updated: 10.0.6.1:50010 is added to > blk_1395839728632046111_357084589 size 25418340 > NN 2013-06-29 11:16:11,370 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* > NameSystem.addStoredBlock: blockMap updated: 10.0.6.24:50010 is added to > blk_1395839728632046111_357084589 size 25418340 > NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* > NameSystem.addStoredBlock: blockMap updated: 10.0.5.237:50010 is added to > blk_1395839728632046111_357084589 size 25418340 > DN 2013-06-29 11:16:11,385 INFO > org.apache.hadoop.hdfs.server.datanode.DataNode: Received block > blk_1395839728632046111_357084589 of size 25418340 from /10.0.5.237:14327 > DN 2013-06-29 11:16:11,385 INFO > org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 2 for block > blk_1395839728632046111_357084589 terminating > NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: Removing > lease on file > /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c > from client DFSClient_hb_rs_hs745,60020,1372470111932 > NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: DIR* > NameSystem.completeFile: file > /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c > is closed by DFSClient_hb_rs_hs745,60020,1372470111932 > RS 2013-06-29 11:16:11,393 INFO org.apache.hadoop.hbase.regionserver.Store: > Renaming compacted file at > hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c > to > hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/n/6e0cc30af6e64e56ba5a539fdf159c4c > RS 2013-06-29 11:16:11,505 INFO org.apache.hadoop.hbase.regionserver.Store: > Completed major compaction of 7 file(s) in n of > users-6,\x12\xBDp\xA3,1359426311784.b5b0820cde759ae68e333b2f4015bb7e. into > 6e0cc30af6e64e56ba5a539fdf159c4c, size=24.2m; total size for store is 24.2m > --- CRASH, RESTART - > NN 2013-06-29 12:01:19,743 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* > NameSystem.addStoredBlock: addStoredBlock request received for > blk_1395839728632046111_357084589 on 10.0.6.1:50010 size 21978112 but was > rejected: Reported as block being written but is a block of closed file. > NN 2013-06-29 12:01:19,743
[jira] [Updated] (HDFS-10785) libhdfs++: Implement the rest of the tools
[ https://issues.apache.org/jira/browse/HDFS-10785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anatoli Shein updated HDFS-10785: - Attachment: HDFS-10785.HDFS-8707.007.patch Reattaching. Some files were excluded from the previous patch by accident. > libhdfs++: Implement the rest of the tools > -- > > Key: HDFS-10785 > URL: https://issues.apache.org/jira/browse/HDFS-10785 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Anatoli Shein >Assignee: Anatoli Shein > Attachments: HDFS-10785.HDFS-8707.000.patch, > HDFS-10785.HDFS-8707.001.patch, HDFS-10785.HDFS-8707.002.patch, > HDFS-10785.HDFS-8707.003.patch, HDFS-10785.HDFS-8707.004.patch, > HDFS-10785.HDFS-8707.005.patch, HDFS-10785.HDFS-8707.006.patch, > HDFS-10785.HDFS-8707.007.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-5042) Completed files lost after power failure
[ https://issues.apache.org/jira/browse/HDFS-5042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinayakumar B updated HDFS-5042: Attachment: HDFS-5042-03.patch updated patch > Completed files lost after power failure > > > Key: HDFS-5042 > URL: https://issues.apache.org/jira/browse/HDFS-5042 > Project: Hadoop HDFS > Issue Type: Bug > Environment: ext3 on CentOS 5.7 (kernel 2.6.18-274.el5) >Reporter: Dave Latham >Priority: Critical > Attachments: HDFS-5042-01.patch, HDFS-5042-02.patch, > HDFS-5042-03.patch, HDFS-5042-branch-2-01.patch > > > We suffered a cluster wide power failure after which HDFS lost data that it > had acknowledged as closed and complete. > The client was HBase which compacted a set of HFiles into a new HFile, then > after closing the file successfully, deleted the previous versions of the > file. The cluster then lost power, and when brought back up the newly > created file was marked CORRUPT. > Based on reading the logs it looks like the replicas were created by the > DataNodes in the 'blocksBeingWritten' directory. Then when the file was > closed they were moved to the 'current' directory. After the power cycle > those replicas were again in the blocksBeingWritten directory of the > underlying file system (ext3). When those DataNodes reported in to the > NameNode it deleted those replicas and lost the file. > Some possible fixes could be having the DataNode fsync the directory(s) after > moving the block from blocksBeingWritten to current to ensure the rename is > durable or having the NameNode accept replicas from blocksBeingWritten under > certain circumstances. > Log snippets from RS (RegionServer), NN (NameNode), DN (DataNode): > {noformat} > RS 2013-06-29 11:16:06,812 DEBUG org.apache.hadoop.hbase.util.FSUtils: > Creating > file=hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c > with permission=rwxrwxrwx > NN 2013-06-29 11:16:06,830 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* > NameSystem.allocateBlock: > /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c. 
> blk_1395839728632046111_357084589 > DN 2013-06-29 11:16:06,832 INFO > org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block > blk_1395839728632046111_357084589 src: /10.0.5.237:14327 dest: > /10.0.5.237:50010 > NN 2013-06-29 11:16:11,370 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* > NameSystem.addStoredBlock: blockMap updated: 10.0.6.1:50010 is added to > blk_1395839728632046111_357084589 size 25418340 > NN 2013-06-29 11:16:11,370 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* > NameSystem.addStoredBlock: blockMap updated: 10.0.6.24:50010 is added to > blk_1395839728632046111_357084589 size 25418340 > NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* > NameSystem.addStoredBlock: blockMap updated: 10.0.5.237:50010 is added to > blk_1395839728632046111_357084589 size 25418340 > DN 2013-06-29 11:16:11,385 INFO > org.apache.hadoop.hdfs.server.datanode.DataNode: Received block > blk_1395839728632046111_357084589 of size 25418340 from /10.0.5.237:14327 > DN 2013-06-29 11:16:11,385 INFO > org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 2 for block > blk_1395839728632046111_357084589 terminating > NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: Removing > lease on file > /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c > from client DFSClient_hb_rs_hs745,60020,1372470111932 > NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: DIR* > NameSystem.completeFile: file > /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c > is closed by DFSClient_hb_rs_hs745,60020,1372470111932 > RS 2013-06-29 11:16:11,393 INFO org.apache.hadoop.hbase.regionserver.Store: > Renaming compacted file at > hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c > to > hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/n/6e0cc30af6e64e56ba5a539fdf159c4c > RS 2013-06-29 11:16:11,505 INFO org.apache.hadoop.hbase.regionserver.Store: > Completed major compaction of 7 file(s) in n of > users-6,\x12\xBDp\xA3,1359426311784.b5b0820cde759ae68e333b2f4015bb7e. into > 6e0cc30af6e64e56ba5a539fdf159c4c, size=24.2m; total size for store is 24.2m > --- CRASH, RESTART - > NN 2013-06-29 12:01:19,743 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* > NameSystem.addStoredBlock: addStoredBlock request received for > blk_1395839728632046111_357084589 on 10.0.6.1:50010 size 21978112 but was > rejected: Reported as block being written but is a block of closed file. > NN 2013-06-29 12:01:19,743 INFO org.apache.hadoop
[jira] [Commented] (HDFS-10785) libhdfs++: Implement the rest of the tools
[ https://issues.apache.org/jira/browse/HDFS-10785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021505#comment-16021505 ] Hadoop QA commented on HDFS-10785: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 37s{color} | {color:green} HDFS-8707 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 19s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 14s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 14s{color} | {color:green} HDFS-8707 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} HDFS-8707 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 8s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 8s{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_131. {color} | | {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 8s{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_131. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 8s{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_131. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 8s{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_131. {color} | | {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 8s{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_131. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 8s{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_131. {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 5s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 9s{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_131. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 25m 0s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:78fc6b6 | | JIRA Issue | HDFS-10785 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869492/HDFS-10785.HDFS-8707.006.patch | | Optional Tests | asflicense compile cc mvnsite javac unit javadoc mvninstall | | uname | Linux 58dab87cea38 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-8707 / 5be2415 | | Default Java | 1.7.0_131 | | Multi-JDK versions | /usr/lib/jvm/java-8-oracle:1.8.0_131 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_13
[jira] [Commented] (HDFS-11864) Document Metrics to track usage of memory for writes
[ https://issues.apache.org/jira/browse/HDFS-11864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021503#comment-16021503 ] Hudson commented on HDFS-11864: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11770 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11770/]) HDFS-11864. Document Metrics to track usage of memory for writes. (brahma: rev 52661e0912a79d1e851afc2b46c941ce952ca63f) * (edit) hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md > Document Metrics to track usage of memory for writes > -- > > Key: HDFS-11864 > URL: https://issues.apache.org/jira/browse/HDFS-11864 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Reporter: Brahma Reddy Battula >Assignee: Yiqun Lin > Fix For: 2.9.0, 2.7.4, 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-11864.001.patch > > > HDFS-7129 introduced the following metrics which are not documented. > {noformat} > // RamDisk metrics on read/write > @Metric MutableCounterLong ramDiskBlocksWrite; > @Metric MutableCounterLong ramDiskBlocksWriteFallback; > @Metric MutableCounterLong ramDiskBytesWrite; > @Metric MutableCounterLong ramDiskBlocksReadHits; > > // RamDisk metrics on eviction > @Metric MutableCounterLong ramDiskBlocksEvicted; > @Metric MutableCounterLong ramDiskBlocksEvictedWithoutRead; > @Metric MutableRate ramDiskBlocksEvictionWindowMs; > final MutableQuantiles[] ramDiskBlocksEvictionWindowMsQuantiles; > > > // RamDisk metrics on lazy persist > @Metric MutableCounterLong ramDiskBlocksLazyPersisted; > @Metric MutableCounterLong ramDiskBlocksDeletedBeforeLazyPersisted; > @Metric MutableCounterLong ramDiskBytesLazyPersisted; > @Metric MutableRate ramDiskBlocksLazyPersistWindowMs; > final MutableQuantiles[] ramDiskBlocksLazyPersistWindowMsQuantiles; > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11859) Ozone : separate blockLocationProtocol out of containerLocationProtocol
[ https://issues.apache.org/jira/browse/HDFS-11859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021468#comment-16021468 ] Chen Liang commented on HDFS-11859: --- Thanks [~xyao] for updating the patch! +1 on v007 patch. > Ozone : separate blockLocationProtocol out of containerLocationProtocol > --- > > Key: HDFS-11859 > URL: https://issues.apache.org/jira/browse/HDFS-11859 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Chen Liang >Assignee: Xiaoyu Yao > Attachments: HDFS-11859-HDFS-7240.001.patch, > HDFS-11859-HDFS-7240.002.patch, HDFS-11859-HDFS-7240.003.patch, > HDFS-11859-HDFS-7240.004.patch, HDFS-11859-HDFS-7240.005.patch, > HDFS-11859-HDFS-7240.006.patch, HDFS-11859-HDFS-7240.007.patch > > > Currently StorageLocationProtocol contains two types of operations: container-related > operations and block-related operations. Although there is > {{ScmBlockLocationProtocol}} for block operations, only > {{StorageContainerLocationProtocolServerSideTranslatorPB}} is making the > distinction. > This JIRA tries to make the separation complete and thorough in all places. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10785) libhdfs++: Implement the rest of the tools
[ https://issues.apache.org/jira/browse/HDFS-10785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anatoli Shein updated HDFS-10785: - Attachment: HDFS-10785.HDFS-8707.006.patch New patch (rebased on HDFS-8707). > libhdfs++: Implement the rest of the tools > -- > > Key: HDFS-10785 > URL: https://issues.apache.org/jira/browse/HDFS-10785 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Anatoli Shein >Assignee: Anatoli Shein > Attachments: HDFS-10785.HDFS-8707.000.patch, > HDFS-10785.HDFS-8707.001.patch, HDFS-10785.HDFS-8707.002.patch, > HDFS-10785.HDFS-8707.003.patch, HDFS-10785.HDFS-8707.004.patch, > HDFS-10785.HDFS-8707.005.patch, HDFS-10785.HDFS-8707.006.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11771) Ozone: KSM: Add checkVolumeAccess
[ https://issues.apache.org/jira/browse/HDFS-11771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021418#comment-16021418 ] Hadoop QA commented on HDFS-11771: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 32s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 58s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 44s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 39s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 30s{color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 40s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 has 2 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 56s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 10 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 35s{color} | {color:green} HDFS-7240 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 20s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 16s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}149m 40s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.cblock.TestCBlockCLI | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.cblock.TestCBlockServerPersistence | | | hadoop.cblock.TestBufferManager | | | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure | | | hadoop.hdfs.server.namenode.TestEditLogRace | | Timed out junit tests | org.apache.hadoop.cblock.TestLocalBlockCache | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11771 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869460/HDFS-11771-HDFS-7240.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 77fd917779dd 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/
[jira] [Updated] (HDFS-11864) Document Metrics to track usage of memory for writes
[ https://issues.apache.org/jira/browse/HDFS-11864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-11864: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.8.1 3.0.0-alpha3 2.7.4 2.9.0 Status: Resolved (was: Patch Available) Committed to {{trunk}}, {{branch-2}}, {{branch-2.8.1}} and {{branch-2.7}}. The patch applies cleanly to all branches. Thanks [~linyiqun] for your contribution. > Document Metrics to track usage of memory for writes > -- > > Key: HDFS-11864 > URL: https://issues.apache.org/jira/browse/HDFS-11864 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Reporter: Brahma Reddy Battula >Assignee: Yiqun Lin > Fix For: 2.9.0, 2.7.4, 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-11864.001.patch > > > HDFS-7129 introduced the following metrics which are not documented. > {noformat} > // RamDisk metrics on read/write > @Metric MutableCounterLong ramDiskBlocksWrite; > @Metric MutableCounterLong ramDiskBlocksWriteFallback; > @Metric MutableCounterLong ramDiskBytesWrite; > @Metric MutableCounterLong ramDiskBlocksReadHits; > > // RamDisk metrics on eviction > @Metric MutableCounterLong ramDiskBlocksEvicted; > @Metric MutableCounterLong ramDiskBlocksEvictedWithoutRead; > @Metric MutableRate ramDiskBlocksEvictionWindowMs; > final MutableQuantiles[] ramDiskBlocksEvictionWindowMsQuantiles; > > > // RamDisk metrics on lazy persist > @Metric MutableCounterLong ramDiskBlocksLazyPersisted; > @Metric MutableCounterLong ramDiskBlocksDeletedBeforeLazyPersisted; > @Metric MutableCounterLong ramDiskBytesLazyPersisted; > @Metric MutableRate ramDiskBlocksLazyPersistWindowMs; > final MutableQuantiles[] ramDiskBlocksLazyPersistWindowMsQuantiles; > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11864) Document Metrics to track usage of memory for writes
[ https://issues.apache.org/jira/browse/HDFS-11864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021372#comment-16021372 ] Brahma Reddy Battula commented on HDFS-11864: - +1 on the {{trunk}} patch, will commit shortly. > Document Metrics to track usage of memory for writes > -- > > Key: HDFS-11864 > URL: https://issues.apache.org/jira/browse/HDFS-11864 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Reporter: Brahma Reddy Battula >Assignee: Yiqun Lin > Attachments: HDFS-11864.001.patch > > > HDFS-7129 introduced the following metrics which are not documented. > {noformat} > // RamDisk metrics on read/write > @Metric MutableCounterLong ramDiskBlocksWrite; > @Metric MutableCounterLong ramDiskBlocksWriteFallback; > @Metric MutableCounterLong ramDiskBytesWrite; > @Metric MutableCounterLong ramDiskBlocksReadHits; > > // RamDisk metrics on eviction > @Metric MutableCounterLong ramDiskBlocksEvicted; > @Metric MutableCounterLong ramDiskBlocksEvictedWithoutRead; > @Metric MutableRate ramDiskBlocksEvictionWindowMs; > final MutableQuantiles[] ramDiskBlocksEvictionWindowMsQuantiles; > > > // RamDisk metrics on lazy persist > @Metric MutableCounterLong ramDiskBlocksLazyPersisted; > @Metric MutableCounterLong ramDiskBlocksDeletedBeforeLazyPersisted; > @Metric MutableCounterLong ramDiskBytesLazyPersisted; > @Metric MutableRate ramDiskBlocksLazyPersistWindowMs; > final MutableQuantiles[] ramDiskBlocksLazyPersistWindowMsQuantiles; > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
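Since the JIRA is about documenting these counters, a hedged sketch of how such metrics2 counters are declared and incremented may help make the new Metrics.md entries concrete. The class below is illustrative, not the actual DataNode metrics source; the annotated fields are instantiated when the source is registered with the metrics system.
{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Hedged sketch: how RamDisk write-path counters like the ones quoted
// above are typically declared and bumped. Illustrative only.
@Metrics(about = "RamDisk write-path metrics sketch", context = "dfs")
class RamDiskMetricsSketch {
  @Metric("Blocks written to RAM disk")
  MutableCounterLong ramDiskBlocksWrite;

  @Metric("Block writes that fell back to disk")
  MutableCounterLong ramDiskBlocksWriteFallback;

  // Registration (e.g. DefaultMetricsSystem.instance().register(...))
  // populates the annotated fields before this is called.
  void onBlockWrite(boolean fellBackToDisk) {
    if (fellBackToDisk) {
      ramDiskBlocksWriteFallback.incr();
    } else {
      ramDiskBlocksWrite.incr();
    }
  }
}
{code}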
[jira] [Commented] (HDFS-11778) Ozone: KSM: add getBucketInfo
[ https://issues.apache.org/jira/browse/HDFS-11778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021360#comment-16021360 ] Nandakumar commented on HDFS-11778: --- Thanks for the review [~msingh]. BucketManagerImpl: 85, 129 please correct the comment -> moved it to MetadataManagerImpl.java KeySpaceManager:366, please add a new metric, and update it here -> done KeySpaceManagerProtocol:114, variable names in comments are not correct -> corrected BucketManagerImpl:97 - the acls during creation should be fetched from the KsmBucketInfo. -> KsmBucketInfo is created at this point with the values from KsmBucketArgs, for acls the value is read from KsmBucketArgs#addAcls It would be great if we can add a test to TestKeySpaceManager, this will help with end-to-end testing for all the APIs. -> updated TestKeySpaceManager with a new test case verifying the creation and retrieval of bucket info. > Ozone: KSM: add getBucketInfo > - > > Key: HDFS-11778 > URL: https://issues.apache.org/jira/browse/HDFS-11778 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Nandakumar > Attachments: HDFS-11778-HDFS-7240.000.patch, > HDFS-11778-HDFS-7240.001.patch > > > Returns the bucket information if the bucket exists. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
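A hedged sketch of the getBucketInfo flow under review: look the bucket up in the metadata store and fail if it is absent. The map-backed store and plain strings stand in for MetadataManager and KsmBucketInfo.
{code}
import java.io.FileNotFoundException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch: return the bucket information if the bucket exists.
class BucketManagerSketch {
  private final Map<String, String> metadataStore = new ConcurrentHashMap<>();

  void createBucket(String volume, String bucket, String info) {
    metadataStore.put(volume + "/" + bucket, info);
  }

  String getBucketInfo(String volume, String bucket)
      throws FileNotFoundException {
    String info = metadataStore.get(volume + "/" + bucket);
    if (info == null) {
      throw new FileNotFoundException(
          "Bucket not found: " + volume + "/" + bucket);
    }
    return info;
  }
}
{code}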
[jira] [Updated] (HDFS-11778) Ozone: KSM: add getBucketInfo
[ https://issues.apache.org/jira/browse/HDFS-11778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nandakumar updated HDFS-11778: -- Attachment: HDFS-11778-HDFS-7240.001.patch > Ozone: KSM: add getBucketInfo > - > > Key: HDFS-11778 > URL: https://issues.apache.org/jira/browse/HDFS-11778 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Nandakumar > Attachments: HDFS-11778-HDFS-7240.000.patch, > HDFS-11778-HDFS-7240.001.patch > > > Returns the bucket information if the bucket exists. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11771) Ozone: KSM: Add checkVolumeAccess
[ https://issues.apache.org/jira/browse/HDFS-11771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDFS-11771: - Status: Patch Available (was: Open) > Ozone: KSM: Add checkVolumeAccess > -- > > Key: HDFS-11771 > URL: https://issues.apache.org/jira/browse/HDFS-11771 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Mukul Kumar Singh > Attachments: HDFS-11771-HDFS-7240.001.patch > > > Checks if the caller has access to a given volume. This call supports the > ACLs specified in the ozone rest protocol documentation. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11771) Ozone: KSM: Add checkVolumeAccess
[ https://issues.apache.org/jira/browse/HDFS-11771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDFS-11771: - Attachment: HDFS-11771-HDFS-7240.001.patch > Ozone: KSM: Add checkVolumeAccess > -- > > Key: HDFS-11771 > URL: https://issues.apache.org/jira/browse/HDFS-11771 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Mukul Kumar Singh > Attachments: HDFS-11771-HDFS-7240.001.patch > > > Checks if the caller has access to a given volume. This call supports the > ACLs specified in the ozone rest protocol documentation. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
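The call described above (check whether the caller has access to a given volume under the documented ACLs) can be sketched as follows. The Right enum and the map shape are deliberate simplifications of the ozone REST ACL model, not the KSM protocol types.
{code}
import java.util.Map;

// Hedged sketch: evaluate a caller against a volume's ACL entries.
class VolumeAccessSketch {
  enum Right { READ, WRITE }

  static boolean checkVolumeAccess(Map<String, Right> volumeAcls,
                                   String caller, Right requested) {
    Right granted = volumeAcls.get(caller);
    if (granted == null) {
      return false;         // no ACL entry for this principal
    }
    if (granted == Right.WRITE) {
      return true;          // WRITE implies READ in this simple model
    }
    return requested == Right.READ;
  }
}
{code}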
[jira] [Commented] (HDFS-11639) [READ] Encode the BlockAlias in the client protocol
[ https://issues.apache.org/jira/browse/HDFS-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021142#comment-16021142 ] Hadoop QA commented on HDFS-11639: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 7 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 4s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 37s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 57s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 33s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 30s{color} | {color:green} HDFS-9806 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 30s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-9806 has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s{color} | {color:green} HDFS-9806 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 55s{color} | {color:orange} hadoop-hdfs-project: The patch generated 14 new + 1188 unchanged - 7 fixed = 1202 total (was 1195) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 29s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 51s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}150m 2s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11639 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869427/HDFS-11639-HDFS-9806.005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux 760e177c73e7 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-9806 / 5d021f3 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.
[jira] [Updated] (HDFS-11794) Add ec sub command -listCodec to show currently supported ec codecs
[ https://issues.apache.org/jira/browse/HDFS-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rakesh R updated HDFS-11794: Resolution: Fixed Fix Version/s: 3.0.0-alpha3 Target Version/s: (was: 3.0.0-alpha3) Status: Resolved (was: Patch Available) > Add ec sub command -listCodec to show currently supported ec codecs > --- > > Key: HDFS-11794 > URL: https://issues.apache.org/jira/browse/HDFS-11794 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Reporter: SammiChen >Assignee: SammiChen > Labels: hdfs-ec-3.0-nice-to-have > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11794.001.patch, HDFS-11794.002.patch, > HDFS-11794.003.patch > > > Add ec sub command -listCodec to show currently supported ec codecs -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-11794) Add ec sub command -listCodec to show currently supported ec codecs
[ https://issues.apache.org/jira/browse/HDFS-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021047#comment-16021047 ] Rakesh R edited comment on HDFS-11794 at 5/23/17 12:11 PM: --- Thanks [~Sammi] for the contribution. Thanks [~drankye] for the implementation thoughts. +1 LGTM, I'll commit the patch shortly to trunk. was (Author: rakeshr): Thanks [~Sammi] for the contribution. +1 LGTM, I'll commit the patch shortly to trunk. > Add ec sub command -listCodec to show currently supported ec codecs > --- > > Key: HDFS-11794 > URL: https://issues.apache.org/jira/browse/HDFS-11794 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Reporter: SammiChen >Assignee: SammiChen > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-11794.001.patch, HDFS-11794.002.patch, > HDFS-11794.003.patch > > > Add ec sub command -listCodec to show currently supported ec codecs -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11794) Add ec sub command -listCodec to show currently supported ec codecs
[ https://issues.apache.org/jira/browse/HDFS-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021105#comment-16021105 ] Hudson commented on HDFS-11794: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11769 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11769/]) HDFS-11794. Add ec sub command -listCodec to show currently supported ec (rakeshr: rev 1b5451bf054c335188e4cd66f7b4a1d80013e86d) * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPolicies.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testErasureCodingConf.xml * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirErasureCodingOp.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/erasurecoding.proto * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/ECAdmin.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/CodecRegistry.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientNamenodeProtocol.proto * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java > Add ec sub command -listCodec to show currently supported ec codecs > --- > > Key: HDFS-11794 > URL: https://issues.apache.org/jira/browse/HDFS-11794 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Reporter: SammiChen >Assignee: SammiChen > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-11794.001.patch, HDFS-11794.002.patch, > HDFS-11794.003.patch > > > Add ec sub command -listCodec to show currently supported ec codecs -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11794) Add ec sub command -listCodec to show currently supported ec codecs
[ https://issues.apache.org/jira/browse/HDFS-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021047#comment-16021047 ] Rakesh R commented on HDFS-11794: - Thanks [~Sammi] for the contribution. +1 LGTM, I'll commit the patch shortly to trunk. > Add ec sub command -listCodec to show currently supported ec codecs > --- > > Key: HDFS-11794 > URL: https://issues.apache.org/jira/browse/HDFS-11794 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Reporter: SammiChen >Assignee: SammiChen > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-11794.001.patch, HDFS-11794.002.patch, > HDFS-11794.003.patch > > > Add ec sub command -listCodec to show currently supported ec codecs -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
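For readers following the feature: the new sub-command conceptually prints each supported codec together with its available coders. Below is a hedged mock of that output shape in plain Java; the real command queries CodecRegistry, and both the codec names and the listing format here are illustrative.
{code}
import java.util.Map;
import java.util.TreeMap;

// Hedged sketch: the kind of listing "-listCodec" is meant to produce.
class ListCodecSketch {
  public static void main(String[] args) {
    Map<String, String> codecs = new TreeMap<>();
    codecs.put("rs", "rs_native, rs_java");     // example entries only
    codecs.put("xor", "xor_native, xor_java");
    codecs.forEach((codec, coders) ->
        System.out.println(codec + ": " + coders));
  }
}
{code}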
[jira] [Commented] (HDFS-11727) Block Storage: Retry Blocks should be requeued when cblock is restarted
[ https://issues.apache.org/jira/browse/HDFS-11727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021014#comment-16021014 ] Hadoop QA commented on HDFS-11727: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 56s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s{color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 6s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 10 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 9s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}103m 42s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 | | | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | Timed out junit tests | org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11727 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869409/HDFS-11727-HDFS-7240.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux bbda729910ed 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 3ff857f | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/19558/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19558/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19558/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output |
[jira] [Commented] (HDFS-11640) [READ] Datanodes should use a unique identifier when reading from external stores
[ https://issues.apache.org/jira/browse/HDFS-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020979#comment-16020979 ] Ewan Higgs commented on HDFS-11640: --- I ran into compilation errors when trying to build this. I suppose I should wait until HDFS-6984 and HDFS-7878 are merged before reviewing. > [READ] Datanodes should use a unique identifier when reading from external > stores > - > > Key: HDFS-11640 > URL: https://issues.apache.org/jira/browse/HDFS-11640 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti > Attachments: HDFS-11640-HDFS-9806.001.patch > > > Use a unique identifier when reading from external stores to ensure that > datanodes read the correct (version of) file. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11871) balance include Parameter Usage Error
[ https://issues.apache.org/jira/browse/HDFS-11871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020977#comment-16020977 ] Weiwei Yang commented on HDFS-11871: Hi [~kevy] Thanks for reporting this. I just had a quick look at this, and the usage seems correct to me. It takes either 1) {{-f <hosts-file>}}, which lets you specify a file whose lines are split by the regex {{[ \t\n\f\r]+}}, or 2) {{<comma-separated list of hosts>}}, which lets you pass the hosts directly as an argument, parsed by {{StringUtils.getTrimmedStrings()}} (see the sketch after this message). Am I missing anything? > balance include Parameter Usage Error > - > > Key: HDFS-11871 > URL: https://issues.apache.org/jira/browse/HDFS-11871 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover >Affects Versions: 2.7.3 >Reporter: kevy liu >Assignee: Weiwei Yang >Priority: Trivial > > [hadoop@bigdata-hdp-apache505 hadoop-2.7.2]$ bin/hdfs balancer -h > Usage: hdfs balancer > [-policy <policy>] the balancing policy: datanode or blockpool > [-threshold <threshold>] Percentage of disk capacity > [-exclude [-f <hosts-file> | <comma-separated list of hosts>]] > Excludes the specified datanodes. > [-include [-f <hosts-file> | <comma-separated list of hosts>]] > Includes only the specified datanodes. > [-idleiterations <idleiterations>] Number of consecutive idle > iterations (-1 for Infinite) before exit. > Parameter Description: > -f <hosts-file> | <comma-separated list of hosts> > The parse separator in the code is: > String[] nodes = line.split("[ \t\n\f\r]+"); -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
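A minimal sketch of the two parsing paths described in the comment above, assuming Hadoop's {{org.apache.hadoop.util.StringUtils}}; the class and method names are invented for illustration and are not the actual Balancer code:
{code}
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.util.StringUtils;

public class IncludeListParsingSketch {
  // Path 1: -f <hosts-file>, where each line of the file is split on
  // whitespace by the separator regex quoted in the issue description.
  static Set<String> parseHostsFile(String file) throws IOException {
    Set<String> nodes = new HashSet<>();
    try (BufferedReader in = new BufferedReader(new FileReader(file))) {
      String line;
      while ((line = in.readLine()) != null) {
        for (String node : line.split("[ \t\n\f\r]+")) {
          if (!node.isEmpty()) {
            nodes.add(node);
          }
        }
      }
    }
    return nodes;
  }

  // Path 2: <comma-separated list of hosts>, split on commas with leading
  // and trailing whitespace trimmed on each entry.
  static Set<String> parseHostList(String arg) {
    Set<String> nodes = new HashSet<>();
    Collections.addAll(nodes, StringUtils.getTrimmedStrings(arg));
    return nodes;
  }
}
{code}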
[jira] [Commented] (HDFS-11864) Document Metrics to track usage of memory for writes
[ https://issues.apache.org/jira/browse/HDFS-11864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020973#comment-16020973 ] Hadoop QA commented on HDFS-11864: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 15m 44s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11864 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869425/HDFS-11864.001.patch | | Optional Tests | asflicense mvnsite | | uname | Linux 6e90f2a79553 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / d0f346a | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19559/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Document Metrics to track usage of memory for writes > -- > > Key: HDFS-11864 > URL: https://issues.apache.org/jira/browse/HDFS-11864 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Reporter: Brahma Reddy Battula >Assignee: Yiqun Lin > Attachments: HDFS-11864.001.patch > > > HDFS-7129 introduced the following metrics, which are not documented.
> {noformat} > // RamDisk metrics on read/write > @Metric MutableCounterLong ramDiskBlocksWrite; > @Metric MutableCounterLong ramDiskBlocksWriteFallback; > @Metric MutableCounterLong ramDiskBytesWrite; > @Metric MutableCounterLong ramDiskBlocksReadHits; > > // RamDisk metrics on eviction > @Metric MutableCounterLong ramDiskBlocksEvicted; > @Metric MutableCounterLong ramDiskBlocksEvictedWithoutRead; > @Metric MutableRate ramDiskBlocksEvictionWindowMs; > final MutableQuantiles[] ramDiskBlocksEvictionWindowMsQuantiles; > > > // RamDisk metrics on lazy persist > @Metric MutableCounterLong ramDiskBlocksLazyPersisted; > @Metric MutableCounterLong ramDiskBlocksDeletedBeforeLazyPersisted; > @Metric MutableCounterLong ramDiskBytesLazyPersisted; > @Metric MutableRate ramDiskBlocksLazyPersistWindowMs; > final MutableQuantiles[] ramDiskBlocksLazyPersistWindowMsQuantiles; > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
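For context on the declarations quoted above, here is a hedged sketch of how metrics2 counters of these types are typically declared and driven; the class and methods below are invented for the example, only the annotations and mutable metric types are Hadoop's:
{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;
import org.apache.hadoop.metrics2.lib.MutableRate;

// Invented example class; the real declarations quoted above live in the
// DataNode metrics code introduced by HDFS-7129.
@Metrics(about = "Example RamDisk metrics", context = "dfs")
public class RamDiskMetricsExample {
  @Metric MutableCounterLong ramDiskBlocksWrite;
  @Metric MutableRate ramDiskBlocksEvictionWindowMs;

  void onRamDiskBlockWrite() {
    ramDiskBlocksWrite.incr(); // one more block written to RAM disk
  }

  void onEviction(long windowMs) {
    // a MutableRate tracks both the number of samples and their average
    ramDiskBlocksEvictionWindowMs.add(windowMs);
  }
}
{code}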
[jira] [Assigned] (HDFS-11871) balance include Parameter Usage Error
[ https://issues.apache.org/jira/browse/HDFS-11871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang reassigned HDFS-11871: -- Assignee: Weiwei Yang > balance include Parameter Usage Error > - > > Key: HDFS-11871 > URL: https://issues.apache.org/jira/browse/HDFS-11871 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover >Affects Versions: 2.7.3 >Reporter: kevy liu >Assignee: Weiwei Yang >Priority: Trivial > > [hadoop@bigdata-hdp-apache505 hadoop-2.7.2]$ bin/hdfs balancer -h > Usage: hdfs balancer > [-policy <policy>] the balancing policy: datanode or blockpool > [-threshold <threshold>] Percentage of disk capacity > [-exclude [-f <hosts-file> | <comma-separated list of hosts>]] > Excludes the specified datanodes. > [-include [-f <hosts-file> | <comma-separated list of hosts>]] > Includes only the specified datanodes. > [-idleiterations <idleiterations>] Number of consecutive idle > iterations (-1 for Infinite) before exit. > Parameter Description: > -f <hosts-file> | <comma-separated list of hosts> > The parse separator in the code is: > String[] nodes = line.split("[ \t\n\f\r]+"); -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-11846) Ozone: Potential http connection leaks in ozone clients
[ https://issues.apache.org/jira/browse/HDFS-11846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020961#comment-16020961 ] Weiwei Yang edited comment on HDFS-11846 at 5/23/17 9:46 AM: - This patch includes the following fixes: # Fixed http connection leaks in {{OzoneVolume}}, {{OzoneBuckets}} and {{OzoneClient}}, ensuring both the http client and the http requests are cleaned up properly. # Added a test case in {{TestVolume#testCreateVolume}} to verify connections are properly closed. Note, this patch doesn't cover all the places that have leaks, but it is easy to replicate this case in other places if necessary. As the code paths are very similar, we might want to add one case for each class. # This patch still creates a new HttpClient each time it talks to the ozone server, because I found some problems on the netty server side ({{RequestDispatchObjectStoreChannelHandler}} and {{RequestContentObjectStoreChannelHandler}}): they cannot handle multiple requests from the same http client. I don't know how to fix that yet; it may need another jira. I will wait for review comments before submitting a new patch. Thanks. was (Author: cheersyang): This patch includes the following fixes: # Fixed http connection leaks in {{OzoneVolume}}, {{OzoneBuckets}} and {{OzoneClient}}, ensuring both the http client and the http requests are cleaned up properly. # Added a test case in {{TestVolume#testCreateVolume}} to verify connections are properly closed. Note, this patch doesn't cover all the places that have leaks, but it is easy to replicate this case in other places if necessary. As the code paths are very similar, we might want to add one case for each class. # This patch still creates a new HttpClient each time it talks to the ozone server, because I found some problems on the netty server side ({{RequestDispatchObjectStoreChannelHandler}} and {{RequestContentObjectStoreChannelHandler}}): they cannot handle multiple requests from the same http client. I don't know how to fix that yet; it may need another jira. > Ozone: Potential http connection leaks in ozone clients > --- > > Key: HDFS-11846 > URL: https://issues.apache.org/jira/browse/HDFS-11846 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-11846-HDFS-7240.001.patch > > > There are several problems > # Http clients in {{OzoneVolume}}, {{OzoneBucket}} and {{OzoneClient}} are > created per request; per the [Reuse of HttpClient > instance|http://hc.apache.org/httpclient-3.x/performance.html#Reuse_of_HttpClient_instance] > doc, it is proposed to reuse the http client instance to reduce the overhead. > # Some resources in these classes were not properly cleaned up, e.g. the http > connection and HttpGet/HttpPost requests. > > This jira's purpose is to fix these issues and investigate how we can improve > the client. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
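For reference, a minimal sketch of the cleanup pattern described in the comment above, assuming the Apache HttpClient 4.x API; the helper class is invented for illustration and is not the actual patch:
{code}
import java.io.IOException;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

// Invented helper; the real fixes are in OzoneVolume, OzoneBucket and
// OzoneClient. The point is that both the client and the request get
// released on every path, including exceptions.
public class HttpCleanupSketch {
  static String get(String url) throws IOException {
    HttpGet request = new HttpGet(url);
    // try-with-resources closes the client and the response even when
    // execute() or toString() throws, which prevents connection leaks.
    try (CloseableHttpClient client = HttpClients.createDefault();
         CloseableHttpResponse response = client.execute(request)) {
      return EntityUtils.toString(response.getEntity());
    } finally {
      // release the connection held by the request object itself
      request.releaseConnection();
    }
  }
}
{code}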
[jira] [Updated] (HDFS-11871) balance include Parameter Usage Error
[ https://issues.apache.org/jira/browse/HDFS-11871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] kevy liu updated HDFS-11871: Summary: balance include Parameter Usage Error (was: Parameter Usage Error) > balance include Parameter Usage Error > - > > Key: HDFS-11871 > URL: https://issues.apache.org/jira/browse/HDFS-11871 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover >Affects Versions: 2.7.3 >Reporter: kevy liu >Priority: Trivial > > [hadoop@bigdata-hdp-apache505 hadoop-2.7.2]$ bin/hdfs balancer -h > Usage: hdfs balancer > [-policy <policy>] the balancing policy: datanode or blockpool > [-threshold <threshold>] Percentage of disk capacity > [-exclude [-f <hosts-file> | <comma-separated list of hosts>]] > Excludes the specified datanodes. > [-include [-f <hosts-file> | <comma-separated list of hosts>]] > Includes only the specified datanodes. > [-idleiterations <idleiterations>] Number of consecutive idle > iterations (-1 for Infinite) before exit. > Parameter Description: > -f <hosts-file> | <comma-separated list of hosts> > The parse separator in the code is: > String[] nodes = line.split("[ \t\n\f\r]+"); -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11871) Parameter Usage Error
[ https://issues.apache.org/jira/browse/HDFS-11871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] kevy liu updated HDFS-11871: Status: Open (was: Patch Available) > Parameter Usage Error > -- > > Key: HDFS-11871 > URL: https://issues.apache.org/jira/browse/HDFS-11871 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover >Affects Versions: 2.7.3 >Reporter: kevy liu >Priority: Trivial > > [hadoop@bigdata-hdp-apache505 hadoop-2.7.2]$ bin/hdfs balancer -h > Usage: hdfs balancer > [-policy <policy>] the balancing policy: datanode or blockpool > [-threshold <threshold>] Percentage of disk capacity > [-exclude [-f <hosts-file> | <comma-separated list of hosts>]] > Excludes the specified datanodes. > [-include [-f <hosts-file> | <comma-separated list of hosts>]] > Includes only the specified datanodes. > [-idleiterations <idleiterations>] Number of consecutive idle > iterations (-1 for Infinite) before exit. > Parameter Description: > -f <hosts-file> | <comma-separated list of hosts> > The parse separator in the code is: > String[] nodes = line.split("[ \t\n\f\r]+"); -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11846) Ozone: Potential http connection leaks in ozone clients
[ https://issues.apache.org/jira/browse/HDFS-11846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020961#comment-16020961 ] Weiwei Yang commented on HDFS-11846: This patch includes the following fixes: # Fixed http connection leaks in {{OzoneVolume}}, {{OzoneBuckets}} and {{OzoneClient}}, ensuring both the http client and the http requests are cleaned up properly. # Added a test case in {{TestVolume#testCreateVolume}} to verify connections are properly closed. Note, this patch doesn't cover all the places that have leaks, but it is easy to replicate this case in other places if necessary. As the code paths are very similar, we might want to add one case for each class. # This patch still creates a new HttpClient each time it talks to the ozone server, because I found some problems on the netty server side ({{RequestDispatchObjectStoreChannelHandler}} and {{RequestContentObjectStoreChannelHandler}}): they cannot handle multiple requests from the same http client. I don't know how to fix that yet; it may need another jira. > Ozone: Potential http connection leaks in ozone clients > --- > > Key: HDFS-11846 > URL: https://issues.apache.org/jira/browse/HDFS-11846 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-11846-HDFS-7240.001.patch > > > There are several problems > # Http clients in {{OzoneVolume}}, {{OzoneBucket}} and {{OzoneClient}} are > created per request; per the [Reuse of HttpClient > instance|http://hc.apache.org/httpclient-3.x/performance.html#Reuse_of_HttpClient_instance] > doc, it is proposed to reuse the http client instance to reduce the overhead. > # Some resources in these classes were not properly cleaned up, e.g. the http > connection and HttpGet/HttpPost requests. > > This jira's purpose is to fix these issues and investigate how we can improve > the client. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11871) Parameter Usage Error
[ https://issues.apache.org/jira/browse/HDFS-11871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] kevy liu updated HDFS-11871: Status: Patch Available (was: Open) > Parameter Usage Error > -- > > Key: HDFS-11871 > URL: https://issues.apache.org/jira/browse/HDFS-11871 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover >Affects Versions: 2.7.3 >Reporter: kevy liu >Priority: Trivial > > [hadoop@bigdata-hdp-apache505 hadoop-2.7.2]$ bin/hdfs balancer -h > Usage: hdfs balancer > [-policy <policy>] the balancing policy: datanode or blockpool > [-threshold <threshold>] Percentage of disk capacity > [-exclude [-f <hosts-file> | <comma-separated list of hosts>]] > Excludes the specified datanodes. > [-include [-f <hosts-file> | <comma-separated list of hosts>]] > Includes only the specified datanodes. > [-idleiterations <idleiterations>] Number of consecutive idle > iterations (-1 for Infinite) before exit. > Parameter Description: > -f <hosts-file> | <comma-separated list of hosts> > The parse separator in the code is: > String[] nodes = line.split("[ \t\n\f\r]+"); -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11639) [READ] Encode the BlockAlias in the client protocol
[ https://issues.apache.org/jira/browse/HDFS-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ewan Higgs updated HDFS-11639: -- Attachment: HDFS-11639-HDFS-9806.005.patch Attaching a patch that removes the {{BlockAlias}} from the {{readBlocks}} operation. The {{BlockAlias}} is only required in the {{writeBlocks}} and {{transferBlocks}} calls. > [READ] Encode the BlockAlias in the client protocol > --- > > Key: HDFS-11639 > URL: https://issues.apache.org/jira/browse/HDFS-11639 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Ewan Higgs >Assignee: Ewan Higgs > Attachments: HDFS-11639-HDFS-9806.001.patch, > HDFS-11639-HDFS-9806.002.patch, HDFS-11639-HDFS-9806.003.patch, > HDFS-11639-HDFS-9806.004.patch, HDFS-11639-HDFS-9806.005.patch > > > As part of the {{PROVIDED}} storage type, we have a {{BlockAlias}} type which > encodes information about where the data comes from. i.e. URI, offset, > length, and nonce value. This data should be encoded in the protocol > ({{LocatedBlockProto}} and the {{BlockTokenIdentifier}}) when a block is > available using the PROVIDED storage type. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
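As background for the patches above, a hypothetical sketch of the information a {{BlockAlias}} carries according to the issue description (URI, offset, length, nonce); the class and field names are assumptions, not the actual HDFS-9806 types:
{code}
import java.net.URI;

// Hypothetical value type based only on the issue description; not the
// actual HDFS-9806 implementation.
public final class BlockAliasSketch {
  private final URI uri;      // location of the data in the external store
  private final long offset;  // start of the block within that resource
  private final long length;  // number of bytes belonging to the block
  private final byte[] nonce; // distinguishes versions of the resource

  public BlockAliasSketch(URI uri, long offset, long length, byte[] nonce) {
    this.uri = uri;
    this.offset = offset;
    this.length = length;
    this.nonce = nonce;
  }
}
{code}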
[jira] [Created] (HDFS-11871) Parameter Usage Error
kevy liu created HDFS-11871: --- Summary: Parameter Usage Error Key: HDFS-11871 URL: https://issues.apache.org/jira/browse/HDFS-11871 Project: Hadoop HDFS Issue Type: Bug Components: balancer & mover Affects Versions: 2.7.3 Reporter: kevy liu Priority: Trivial [hadoop@bigdata-hdp-apache505 hadoop-2.7.2]$ bin/hdfs balancer -h Usage: hdfs balancer [-policy <policy>] the balancing policy: datanode or blockpool [-threshold <threshold>] Percentage of disk capacity [-exclude [-f <hosts-file> | <comma-separated list of hosts>]] Excludes the specified datanodes. [-include [-f <hosts-file> | <comma-separated list of hosts>]] Includes only the specified datanodes. [-idleiterations <idleiterations>] Number of consecutive idle iterations (-1 for Infinite) before exit. Parameter Description: -f <hosts-file> | <comma-separated list of hosts> The parse separator in the code is: String[] nodes = line.split("[ \t\n\f\r]+"); -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11859) Ozone : separate blockLocationProtocol out of containerLocationProtocol
[ https://issues.apache.org/jira/browse/HDFS-11859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020951#comment-16020951 ] Hadoop QA commented on HDFS-11859: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 36s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 33s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 23s{color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 29s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 has 2 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 43s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 10 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s{color} | {color:green} HDFS-7240 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 6s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 15s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 44s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}108m 47s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation | | | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11859 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869407/HDFS-11859-HDFS-7240.007.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle cc | | uname | Linux 2d94bf5fe624 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64
[jira] [Commented] (HDFS-11864) Document Metrics to track usage of memory for writes
[ https://issues.apache.org/jira/browse/HDFS-11864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020952#comment-16020952 ] Brahma Reddy Battula commented on HDFS-11864: - bq. Brahma Reddy Battula, the trunk, branch-2 and branch-2.8 are also missing these metrics in documentation and should be updated, right? It would be better not to fix this only in branch-2.7. Yes, we need to update all branches. > Document Metrics to track usage of memory for writes > -- > > Key: HDFS-11864 > URL: https://issues.apache.org/jira/browse/HDFS-11864 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Reporter: Brahma Reddy Battula >Assignee: Yiqun Lin > Attachments: HDFS-11864.001.patch > > > HDFS-7129 introduced the following metrics, which are not documented. > {noformat} > // RamDisk metrics on read/write > @Metric MutableCounterLong ramDiskBlocksWrite; > @Metric MutableCounterLong ramDiskBlocksWriteFallback; > @Metric MutableCounterLong ramDiskBytesWrite; > @Metric MutableCounterLong ramDiskBlocksReadHits; > > // RamDisk metrics on eviction > @Metric MutableCounterLong ramDiskBlocksEvicted; > @Metric MutableCounterLong ramDiskBlocksEvictedWithoutRead; > @Metric MutableRate ramDiskBlocksEvictionWindowMs; > final MutableQuantiles[] ramDiskBlocksEvictionWindowMsQuantiles; > > > // RamDisk metrics on lazy persist > @Metric MutableCounterLong ramDiskBlocksLazyPersisted; > @Metric MutableCounterLong ramDiskBlocksDeletedBeforeLazyPersisted; > @Metric MutableCounterLong ramDiskBytesLazyPersisted; > @Metric MutableRate ramDiskBlocksLazyPersistWindowMs; > final MutableQuantiles[] ramDiskBlocksLazyPersistWindowMsQuantiles; > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11864) Document Metrics to track usage of memory for writes
[ https://issues.apache.org/jira/browse/HDFS-11864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11864: - Status: Patch Available (was: Open) > Document Metrics to track usage of memory for writes > -- > > Key: HDFS-11864 > URL: https://issues.apache.org/jira/browse/HDFS-11864 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Reporter: Brahma Reddy Battula >Assignee: Yiqun Lin > Attachments: HDFS-11864.001.patch > > > HDFS-7129 introduced the following metrics, which are not documented. > {noformat} > // RamDisk metrics on read/write > @Metric MutableCounterLong ramDiskBlocksWrite; > @Metric MutableCounterLong ramDiskBlocksWriteFallback; > @Metric MutableCounterLong ramDiskBytesWrite; > @Metric MutableCounterLong ramDiskBlocksReadHits; > > // RamDisk metrics on eviction > @Metric MutableCounterLong ramDiskBlocksEvicted; > @Metric MutableCounterLong ramDiskBlocksEvictedWithoutRead; > @Metric MutableRate ramDiskBlocksEvictionWindowMs; > final MutableQuantiles[] ramDiskBlocksEvictionWindowMsQuantiles; > > > // RamDisk metrics on lazy persist > @Metric MutableCounterLong ramDiskBlocksLazyPersisted; > @Metric MutableCounterLong ramDiskBlocksDeletedBeforeLazyPersisted; > @Metric MutableCounterLong ramDiskBytesLazyPersisted; > @Metric MutableRate ramDiskBlocksLazyPersistWindowMs; > final MutableQuantiles[] ramDiskBlocksLazyPersistWindowMsQuantiles; > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11864) Document Metrics to track usage of memory for writes
[ https://issues.apache.org/jira/browse/HDFS-11864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11864: - Attachment: HDFS-11864.001.patch Attaching the initial patch. The metrics descriptions are based on the information given in HDFS-7129. Thanks for the review. [~brahmareddy], the trunk, branch-2 and branch-2.8 are also missing these metrics in documentation and should be updated, right? It would be better not to fix this only in branch-2.7. > Document Metrics to track usage of memory for writes > -- > > Key: HDFS-11864 > URL: https://issues.apache.org/jira/browse/HDFS-11864 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Reporter: Brahma Reddy Battula >Assignee: Yiqun Lin > Attachments: HDFS-11864.001.patch > > > HDFS-7129 introduced the following metrics, which are not documented. > {noformat} > // RamDisk metrics on read/write > @Metric MutableCounterLong ramDiskBlocksWrite; > @Metric MutableCounterLong ramDiskBlocksWriteFallback; > @Metric MutableCounterLong ramDiskBytesWrite; > @Metric MutableCounterLong ramDiskBlocksReadHits; > > // RamDisk metrics on eviction > @Metric MutableCounterLong ramDiskBlocksEvicted; > @Metric MutableCounterLong ramDiskBlocksEvictedWithoutRead; > @Metric MutableRate ramDiskBlocksEvictionWindowMs; > final MutableQuantiles[] ramDiskBlocksEvictionWindowMsQuantiles; > > > // RamDisk metrics on lazy persist > @Metric MutableCounterLong ramDiskBlocksLazyPersisted; > @Metric MutableCounterLong ramDiskBlocksDeletedBeforeLazyPersisted; > @Metric MutableCounterLong ramDiskBytesLazyPersisted; > @Metric MutableRate ramDiskBlocksLazyPersistWindowMs; > final MutableQuantiles[] ramDiskBlocksLazyPersistWindowMsQuantiles; > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11846) Ozone: Potential http connection leaks in ozone clients
[ https://issues.apache.org/jira/browse/HDFS-11846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020932#comment-16020932 ] Hadoop QA commented on HDFS-11846: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 4s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s{color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 0s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 10 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 36s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 4 unchanged - 0 fixed = 6 total (was 4) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}112m 17s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}143m 39s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure | | | hadoop.cblock.TestBufferManager | | Timed out junit tests | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | | org.apache.hadoop.cblock.TestLocalBlockCache | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11846 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869403/HDFS-11846-HDFS-7240.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux dba1495ed8a8 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 3ff857f | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/19556/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/19556/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19556/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19556/te
[jira] [Commented] (HDFS-9905) WebHdfsFileSystem#runWithRetry should display original stack trace on error
[ https://issues.apache.org/jira/browse/HDFS-9905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020824#comment-16020824 ] Brahma Reddy Battula commented on HDFS-9905: Note: it's committed to {{2.7.3}} but {{CHANGES.txt}} was not updated. By the way, I updated the fix version to {{2.7.3}}. > WebHdfsFileSystem#runWithRetry should display original stack trace on error > --- > > Key: HDFS-9905 > URL: https://issues.apache.org/jira/browse/HDFS-9905 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Affects Versions: 2.7.3 >Reporter: Kihwal Lee >Assignee: Wei-Chiu Chuang > Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1 > > Attachments: HDFS-9905.001.patch, HDFS-9905.002.patch, > HDFS-9905-branch-2.7.002.patch > > > When checking for a timeout in {{TestWebHdfsTimeouts}}, it does get > {{SocketTimeoutException}}, but the message sometimes does not contain > "connect timed out". Since the original exception is not logged, we do not > know the details. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-9905) WebHdfsFileSystem#runWithRetry should display original stack trace on error
[ https://issues.apache.org/jira/browse/HDFS-9905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-9905: --- Fix Version/s: 2.7.3 > WebHdfsFileSystem#runWithRetry should display original stack trace on error > --- > > Key: HDFS-9905 > URL: https://issues.apache.org/jira/browse/HDFS-9905 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Affects Versions: 2.7.3 >Reporter: Kihwal Lee >Assignee: Wei-Chiu Chuang > Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1 > > Attachments: HDFS-9905.001.patch, HDFS-9905.002.patch, > HDFS-9905-branch-2.7.002.patch > > > When checking for a timeout in {{TestWebHdfsTimeouts}}, it does get > {{SocketTimeoutException}}, but the message sometimes does not contain > "connect timed out". Since the original exception is not logged, we do not > know the details. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11778) Ozone: KSM: add getBucketInfo
[ https://issues.apache.org/jira/browse/HDFS-11778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020818#comment-16020818 ] Mukul Kumar Singh commented on HDFS-11778: -- Hi [~nandakumar131], Thanks for the patch. Please find my comments below: 1) BucketManagerImpl: 85, 129: please correct the comment; rather, I feel we should move this to MetadataManagerImpl.java, where the key is being constructed. 2) KeySpaceManager:366: please add a new metric and update it here. 3) KeySpaceManagerProtocol:114: the variable names in the comments are not correct. 4) BucketManagerImpl:97: the acls during creation should be fetched from the KsmBucketInfo. 5) It would be great if we could add a test to TestKeySpaceManager; this will help with end-to-end testing of all the APIs. > Ozone: KSM: add getBucketInfo > - > > Key: HDFS-11778 > URL: https://issues.apache.org/jira/browse/HDFS-11778 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Nandakumar > Attachments: HDFS-11778-HDFS-7240.000.patch > > > Returns the bucket information if the bucket exists. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-7337) Configurable and pluggable Erasure Codec and schema
[ https://issues.apache.org/jira/browse/HDFS-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020781#comment-16020781 ] SammiChen edited comment on HDFS-7337 at 5/23/17 8:07 AM: -- Thanks [~eddyxu] and [~andrew.wang] for the discussion and feedback! Agree that a CLI command to enable/disable erasure coding policies will be very helpful to end users. HDFS-11870 is created to track this. I will move on with the implementation. was (Author: sammi): Thanks [~eddyxu] and [~andrew.wang] for the discussion and feedback! Agree that a CLI command to enable/disable erasure coding policies will be very helpful to end users. HDFS-11870 is created to track this. I will move one with the implementation. > Configurable and pluggable Erasure Codec and schema > --- > > Key: HDFS-7337 > URL: https://issues.apache.org/jira/browse/HDFS-7337 > Project: Hadoop HDFS > Issue Type: New Feature > Components: erasure-coding >Reporter: Zhe Zhang >Priority: Critical > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-7337-prototype-v1.patch, > HDFS-7337-prototype-v2.zip, HDFS-7337-prototype-v3.zip, > PluggableErasureCodec.pdf, PluggableErasureCodec-v2.pdf, > PluggableErasureCodec-v3.pdf, PluggableErasureCodec v4.pdf > > > According to HDFS-7285 and the design, this considers supporting multiple > Erasure Codecs via a pluggable approach. It allows defining and configuring > multiple codec schemas with different coding algorithms and parameters. The > resultant codec schemas can be utilized and specified via a command tool for > different file folders. While designing and implementing such a pluggable framework, > a concrete codec (Reed Solomon) is also implemented by default to prove > the framework is useful and workable. A separate JIRA could be opened for the > RS codec implementation. > Note HDFS-7353 will focus on the very low level codec API and implementation > to make concrete vendor libraries transparent to the upper layer. This JIRA > focuses on the high level pieces that interact with configuration, schemas, etc. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-7337) Configurable and pluggable Erasure Codec and schema
[ https://issues.apache.org/jira/browse/HDFS-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020781#comment-16020781 ] SammiChen commented on HDFS-7337: - Thanks [~eddyxu] and [~andrew.wang] for the discussion and feedback! Agree that a CLI command to enable/disable erasure coding policies will be very helpful to end users. HDFS-11870 is created to track this. I will move on with the implementation. > Configurable and pluggable Erasure Codec and schema > --- > > Key: HDFS-7337 > URL: https://issues.apache.org/jira/browse/HDFS-7337 > Project: Hadoop HDFS > Issue Type: New Feature > Components: erasure-coding >Reporter: Zhe Zhang >Priority: Critical > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-7337-prototype-v1.patch, > HDFS-7337-prototype-v2.zip, HDFS-7337-prototype-v3.zip, > PluggableErasureCodec.pdf, PluggableErasureCodec-v2.pdf, > PluggableErasureCodec-v3.pdf, PluggableErasureCodec v4.pdf > > > According to HDFS-7285 and the design, this considers supporting multiple > Erasure Codecs via a pluggable approach. It allows defining and configuring > multiple codec schemas with different coding algorithms and parameters. The > resultant codec schemas can be utilized and specified via a command tool for > different file folders. While designing and implementing such a pluggable framework, > a concrete codec (Reed Solomon) is also implemented by default to prove > the framework is useful and workable. A separate JIRA could be opened for the > RS codec implementation. > Note HDFS-7353 will focus on the very low level codec API and implementation > to make concrete vendor libraries transparent to the upper layer. This JIRA > focuses on the high level pieces that interact with configuration, schemas, etc. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org