[jira] [Updated] (HDFS-13080) Ozone: Make finalhash in ContainerInfo of StorageContainerDatanodeProtocol.proto optional
[ https://issues.apache.org/jira/browse/HDFS-13080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Elek, Marton updated HDFS-13080:
Attachment: HDFS-13080-HDFS-7240.001.patch

> Ozone: Make finalhash in ContainerInfo of StorageContainerDatanodeProtocol.proto optional
> Key: HDFS-13080
> URL: https://issues.apache.org/jira/browse/HDFS-13080
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Reporter: Nanda kumar
> Assignee: Nanda kumar
> Priority: Major
> Attachments: HDFS-13080-HDFS-7240.000.patch, HDFS-13080-HDFS-7240.001.patch
>
> ContainerInfo in StorageContainerDatanodeProtocol.proto has a required field, {{finalhash}}, which will be null for an open container; it has to be made an optional field.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13080) Ozone: Make finalhash in ContainerInfo of StorageContainerDatanodeProtocol.proto optional
[ https://issues.apache.org/jira/browse/HDFS-13080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344644#comment-16344644 ]
Elek, Marton commented on HDFS-13080:
Fix me if I am wrong, but I think even with the optional flag an NPE could still occur, since null must not be set on a protobuf field. I propose fixing ContainerManagerImpl.java as well. I have a local patch for that; I am uploading it here to explain what I mean, but feel free to ignore it if you don't agree.
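For readers following along, the proto-level change under discussion amounts to a one-word edit; the sketch below is illustrative only (the field number and the surrounding message shape are assumptions, not copied from StorageContainerDatanodeProtocol.proto). On the Java side, the usual companion fix is to call the builder's setter only when the value is non-null, since protobuf builders throw NullPointerException on a null argument.

```proto
// Illustrative sketch only -- the real ContainerInfo message has more
// fields and the field number here is a placeholder. The point is the
// required -> optional change on finalhash, which is null while the
// container is still open.
message ContainerInfo {
  // Before: required string finalhash = 2;
  optional string finalhash = 2;
}
```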
[jira] [Commented] (HDFS-12564) Add the documents of swebhdfs configurations on the client side
[ https://issues.apache.org/jira/browse/HDFS-12564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344620#comment-16344620 ]
genericqa commented on HDFS-12564:

(/) +1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 10s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
|| trunk Compile Tests ||
| 0 | mvndep | 0m 19s | Maven dependency ordering for branch |
| +1 | mvninstall | 17m 53s | trunk passed |
| +1 | mvnsite | 1m 55s | trunk passed |
| +1 | shadedclient | 30m 11s | branch has no errors when building and testing our client artifacts. |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 16s | Maven dependency ordering for patch |
| +1 | mvnsite | 1m 44s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 11s | patch has no errors when building and testing our client artifacts. |
|| Other Tests ||
| +1 | asflicense | 0m 21s | The patch does not generate ASF License warnings. |
| | | 44m 17s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12564 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12908276/HDFS-12564.3.patch |
| Optional Tests | asflicense mvnsite |
| uname | Linux 0db292d3a703 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / dbb9dde |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 340 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-httpfs hadoop-tools/hadoop-distcp U: . |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/22883/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Add the documents of swebhdfs configurations on the client side
> Key: HDFS-12564
> URL: https://issues.apache.org/jira/browse/HDFS-12564
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: documentation, webhdfs
> Reporter: Takanobu Asanuma
> Assignee: Takanobu Asanuma
> Priority: Major
> Attachments: HDFS-12564.1.patch, HDFS-12564.2.patch, HDFS-12564.3.patch
>
> Documentation does not cover the swebhdfs configurations on the client side. We can reuse the hftp/hsftp documents which were removed from Hadoop 3.0 in HDFS-5570 and HDFS-9640.
[jira] [Commented] (HDFS-12508) Refactor TestFsck to separate EC related unit tests
[ https://issues.apache.org/jira/browse/HDFS-12508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344610#comment-16344610 ]
Takanobu Asanuma commented on HDFS-12508:
Uploaded the 2nd patch, which merges in the latest trunk branch.

> Refactor TestFsck to separate EC related unit tests
> Key: HDFS-12508
> URL: https://issues.apache.org/jira/browse/HDFS-12508
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: test
> Reporter: Takanobu Asanuma
> Assignee: Takanobu Asanuma
> Priority: Minor
> Attachments: HDFS-12508.1.patch, HDFS-12508.2.patch
>
> Since {{TestFsck}} is large, separating the EC related unit tests would make it easier to maintain.
[jira] [Updated] (HDFS-12508) Refactor TestFsck to separate EC related unit tests
[ https://issues.apache.org/jira/browse/HDFS-12508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Takanobu Asanuma updated HDFS-12508:
Attachment: HDFS-12508.2.patch
[jira] [Commented] (HDFS-12564) Add the documents of swebhdfs configurations on the client side
[ https://issues.apache.org/jira/browse/HDFS-12564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344583#comment-16344583 ]
Takanobu Asanuma commented on HDFS-12564:
Hi, [~xyao]. I'm sorry for the very late response. Thanks for your review and comments. I uploaded a new patch based on your advice.

> Line 423: suggest adding a separate section and put the content (links) under it.

Done in the latest patch.

> This page is for HTTPFS. To avoid confusion, I would suggest we add a detailed ssl-client.xml example instead of linking it to the Swebhdfs document.

Since HttpFS provides the WebHDFS interface, the clients of HttpFS follow the WebHDFS REST API. From the clients' perspective there is no difference between "HttpFS over SSL" and "WebHDFS over SSL", so it would be a little redundant to add the same {{ssl-client.xml}} example. The latest patch still uses the link but also adds more sentences to avoid confusion. How does that look?

> Line 161: /etc/hadoop/hdfs-site.xml has a configuration key to enable secure http, i.e., dfs.http.policy=HTTPS_ONLY. Also note that dfs.http.policy is not for swebhdfs only. This will also affect all the HTTP endpoints of HDFS such as the NN, DN WebUI, JMX, QJM. We also need to document the server side settings, e.g., ssl-server.xml.

{{dfs.http.policy=HTTPS_ONLY}} and {{ssl-server.xml}} are server-side settings and clients don't use them. Since {{Webhdfs.md}} (the WebHDFS REST API document) focuses on the client side, I think we should not cover them in {{Webhdfs.md}}. I agree with adding documentation for the server-side settings; that seems to be missing from the current community documents. How about doing it in HDFS-12736?

> Line 198: suggest giving a full path: ssl-client.xml -> /etc/hadoop/ssl-client.xml

I tried some experiments and it seems that {{hadoop.ssl.client.conf}} requires a relative path, not an absolute one, so {{hadoop.ssl.client.conf=/etc/hadoop/ssl-client.xml}} doesn't work. The latest patch includes the explanation.
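For context, a minimal client-side {{ssl-client.xml}} for swebhdfs might look like the sketch below. The property names are the standard Hadoop {{ssl.client.*}} truststore keys; the paths and password are placeholders, and the exact set of properties shown is an assumption on our part, not copied from the patch under review.

```xml
<!-- Sketch of a client-side ssl-client.xml; values are placeholders. -->
<configuration>
  <property>
    <name>ssl.client.truststore.location</name>
    <value>/path/to/truststore.jks</value>
    <description>Truststore holding the CA certificate that signed the
    NameNode/DataNode HTTPS certificates.</description>
  </property>
  <property>
    <name>ssl.client.truststore.password</name>
    <value>changeit</value>
  </property>
  <property>
    <name>ssl.client.truststore.type</name>
    <value>jks</value>
  </property>
</configuration>
```

The file is picked up via {{hadoop.ssl.client.conf}}, which (per the comment above) expects a relative name resolved on the classpath rather than an absolute path.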
[jira] [Updated] (HDFS-12564) Add the documents of swebhdfs configurations on the client side
[ https://issues.apache.org/jira/browse/HDFS-12564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Takanobu Asanuma updated HDFS-12564:
Attachment: HDFS-12564.3.patch
[jira] [Updated] (HDFS-13068) RBF: Add router admin option to manage safe mode
[ https://issues.apache.org/jira/browse/HDFS-13068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Yiqun Lin updated HDFS-13068:
Status: Patch Available (was: Open)

> RBF: Add router admin option to manage safe mode
> Key: HDFS-13068
> URL: https://issues.apache.org/jira/browse/HDFS-13068
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Íñigo Goiri
> Assignee: Yiqun Lin
> Priority: Major
> Attachments: HDFS-13068.001.patch
>
> HDFS-13044 adds a safe mode to reject requests. We should have an option to manually set the Router into safe mode.
[jira] [Updated] (HDFS-13068) RBF: Add router admin option to manage safe mode
[ https://issues.apache.org/jira/browse/HDFS-13068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Yiqun Lin updated HDFS-13068:
Attachment: HDFS-13068.001.patch
[jira] [Updated] (HDFS-13068) RBF: Add router admin option to manage safe mode
[ https://issues.apache.org/jira/browse/HDFS-13068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Yiqun Lin updated HDFS-13068:
Priority: Major (was: Minor)
Attaching the patch. Since the patch changes quite a lot, I'm raising the priority to Major.
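As a sketch of how such a manual safe-mode switch is typically surfaced on the router admin CLI (the subcommand names below are assumptions based on the issue description, not confirmed syntax from the patch):

```
# Hypothetical usage sketch -- subcommand names are assumptions:
hdfs dfsrouteradmin -safemode enter   # manually put the Router into safe mode
hdfs dfsrouteradmin -safemode leave   # return the Router to normal operation
hdfs dfsrouteradmin -safemode get     # query the current safe mode state
```

While in safe mode, the Router would reject client requests as introduced by HDFS-13044.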
[jira] [Commented] (HDFS-12528) Add an option to not disable short-circuit reads on failures
[ https://issues.apache.org/jira/browse/HDFS-12528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344557#comment-16344557 ]
John Zhuge commented on HDFS-12528:
+1 LGTM

> Add an option to not disable short-circuit reads on failures
> Key: HDFS-12528
> URL: https://issues.apache.org/jira/browse/HDFS-12528
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs-client, performance
> Affects Versions: 2.6.0
> Reporter: Andre Araujo
> Assignee: Xiao Chen
> Priority: Major
> Attachments: HDFS-12528.000.patch, HDFS-12528.01.patch, HDFS-12528.02.patch, HDFS-12528.03.patch, HDFS-12528.04.patch, HDFS-12528.05.patch
>
> We have scenarios where data ingestion makes use of the -appendToFile operation to add new data to existing HDFS files. In these situations, we're frequently running into the problem described below.
> We're using Impala to query the HDFS data with short-circuit reads (SCR) enabled. After each file read, Impala "unbuffer"s the HDFS file to reduce the memory footprint. In some cases, though, Impala still keeps the HDFS file handle open for reuse.
> The "unbuffer" call, however, causes the file's current block reader to be closed, which makes the associated ShortCircuitReplica evictable from the ShortCircuitCache. When the cluster is under load, this means that the ShortCircuitReplica can be purged off the cache pretty fast, which closes the file descriptor to the underlying storage file.
> That means that when Impala re-reads the file it has to re-open the storage files associated with the ShortCircuitReplicas that were evicted from the cache. If there were no appends to those blocks, the re-open will succeed without problems. If a block was appended to since the ShortCircuitReplica was created, the re-open will fail with the following error:
> {code}
> Meta file for BP-810388474-172.31.113.69-1499543341726:blk_1074012183_273087 not found
> {code}
> This error is handled as an "unknown response" by the BlockReaderFactory [1], which disables short-circuit reads for 10 minutes [2] for the client.
> These 10 minutes without SCR can have a big performance impact on client operations. In this particular case ("Meta file not found") it would suffice to return null without disabling SCR. This particular block read would fall back to the normal, non-short-circuited path, and other SCR requests would continue to work as expected.
> It might also be interesting to be able to control how long SCR is disabled for in the "unknown response" case. 10 minutes seems a bit too long, and not being able to change that is a problem.
> [1] https://github.com/apache/hadoop/blob/f67237cbe7bc48a1b9088e990800b37529f1db2a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java#L646
> [2] https://github.com/apache/hadoop/blob/f67237cbe7bc48a1b9088e990800b37529f1db2a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/DomainSocketFactory.java#L97
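The behavior the issue asks for can be summarized in a small decision sketch. This is not HDFS's code: the class, enum, and method names below are ours, and the actual patch works inside BlockReaderFactory/DomainSocketFactory. It only illustrates the proposal that a benign "Meta file not found" failure should fall back to a remote read for that one block instead of disabling short-circuit reads client-wide, and that the disable interval (hard-coded to 10 minutes today) should be tunable.

```java
// Hypothetical sketch of the proposed policy; names are illustrative.
public class ScrFallbackSketch {
    enum FailureKind { META_FILE_NOT_FOUND, UNKNOWN_RESPONSE }

    /**
     * How many seconds short-circuit reads should be disabled after a
     * failure. 0 means "do not disable; just fall back to a remote read
     * for this one block". configuredIntervalSecs models the tunable
     * interval the issue proposes (fixed at 600s before this change).
     */
    static long disableIntervalFor(FailureKind kind, long configuredIntervalSecs) {
        if (kind == FailureKind.META_FILE_NOT_FOUND) {
            // Benign: the block was appended to after the replica was
            // cached; other SCR requests keep working.
            return 0L;
        }
        return configuredIntervalSecs;
    }
}
```

With this policy, only genuinely unknown failures pay the client-wide penalty, and operators can shorten (or zero out) the interval instead of living with the fixed 10 minutes.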
[jira] [Commented] (HDFS-13043) RBF: Expose the state of the Routers in the federation
[ https://issues.apache.org/jira/browse/HDFS-13043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344538#comment-16344538 ]
genericqa commented on HDFS-13043:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 15s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 1m 58s | Maven dependency ordering for branch |
| +1 | mvninstall | 17m 8s | trunk passed |
| +1 | compile | 13m 41s | trunk passed |
| +1 | checkstyle | 2m 5s | trunk passed |
| +1 | mvnsite | 2m 12s | trunk passed |
| +1 | shadedclient | 14m 54s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 29s | trunk passed |
| +1 | javadoc | 1m 53s | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 15s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 41s | the patch passed |
| +1 | compile | 12m 14s | the patch passed |
| +1 | cc | 12m 14s | the patch passed |
| +1 | javac | 12m 14s | the patch passed |
| +1 | checkstyle | 2m 5s | the patch passed |
| +1 | mvnsite | 2m 8s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 10m 4s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 44s | the patch passed |
| +1 | javadoc | 1m 52s | the patch passed |
|| Other Tests ||
| -1 | unit | 8m 44s | hadoop-common in the patch failed. |
| -1 | unit | 116m 11s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 38s | The patch does not generate ASF License warnings. |
| | | 215m 24s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
| | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
| | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13043 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12908248/HDFS-13043.005.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle cc |
| uname | Linux ac595c97
[jira] [Commented] (HDFS-12528) Add an option to not disable short-circuit reads on failures
[ https://issues.apache.org/jira/browse/HDFS-12528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344536#comment-16344536 ]
Weiwei Yang commented on HDFS-12528:
Revised the JIRA title a bit to describe the fix better.
[jira] [Updated] (HDFS-12528) Add an option to not disable short-circuit reads on failures
[ https://issues.apache.org/jira/browse/HDFS-12528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Weiwei Yang updated HDFS-12528:
Summary: Add an option to not disable short-circuit reads on failures (was: Short-circuit reads unnecessarily disabled for a long time)
[jira] [Commented] (HDFS-12528) Short-circuit reads unnecessarily disabled for a long time
[ https://issues.apache.org/jira/browse/HDFS-12528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344535#comment-16344535 ] Weiwei Yang commented on HDFS-12528: Hi [~xiaochen], +1 to the latest patch, thanks for getting it done. I will commit this tomorrow if no body objects. Thanks! > Short-circuit reads unnecessarily disabled for a long time > -- > > Key: HDFS-12528 > URL: https://issues.apache.org/jira/browse/HDFS-12528 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client, performance >Affects Versions: 2.6.0 >Reporter: Andre Araujo >Assignee: Xiao Chen >Priority: Major > Attachments: HDFS-12528.000.patch, HDFS-12528.01.patch, > HDFS-12528.02.patch, HDFS-12528.03.patch, HDFS-12528.04.patch, > HDFS-12528.05.patch > > > We have scenarios where data ingestion makes use of the -appendToFile > operation to add new data to existing HDFS files. In these situations, we're > frequently running into the problem described below. > We're using Impala to query the HDFS data with short-circuit reads (SCR) > enabled. After each file read, Impala "unbuffer"'s the HDFS file to reduce > the memory footprint. In some cases, though, Impala still keeps the HDFS file > handle open for reuse. > The "unbuffer" call, however, causes the file's current block reader to be > closed, which makes the associated ShortCircuitReplica evictable from the > ShortCircuitCache. When the cluster is under load, this means that the > ShortCircuitReplica can be purged off the cache pretty fast, which closes the > file descriptor to the underlying storage file. > That means that when Impala re-reads the file it has to re-open the storage > files associated with the ShortCircuitReplica's that were evicted from the > cache. If there were no appends to those blocks, the re-open will succeed > without problems. 
If one block was appended since the ShortCircuitReplica was > created, the re-open will fail with the following error: > {code} > Meta file for BP-810388474-172.31.113.69-1499543341726:blk_1074012183_273087 > not found > {code} > This error is handled as an "unknown response" by the BlockReaderFactory [1], > which disables short-circuit reads for 10 minutes [2] for the client. > These 10 minutes without SCR can have a big performance impact for the client > operations. In this particular case ("Meta file not found") it would suffice > to return null without disabling SCR. This particular block read would fall > back to the normal, non-short-circuited, path and other SCR requests would > continue to work as expected. > It might also be interesting to be able to control how long SCR is disabled > for in the "unknown response" case. 10 minutes seems a bit too long and not > being able to change that is a problem. > [1] > https://github.com/apache/hadoop/blob/f67237cbe7bc48a1b9088e990800b37529f1db2a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java#L646 > [2] > https://github.com/apache/hadoop/blob/f67237cbe7bc48a1b9088e990800b37529f1db2a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/DomainSocketFactory.java#L97 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
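The fix proposed in the description amounts to a classification rule: a "Meta file not found" failure should trigger a per-read fallback to the non-short-circuit path, while only genuinely unknown responses disable SCR client-wide. A minimal sketch of that rule (the class and method names here are illustrative, not the actual Hadoop client API):

```java
// Illustrative sketch only: ScrDecision and classifyFailure are hypothetical
// names, not part of BlockReaderFactory.
public class ScrDecision {
    public enum Action { FALLBACK_THIS_READ, DISABLE_SCR_TEMPORARILY }

    // A missing meta file (e.g. after an append invalidated the cached
    // ShortCircuitReplica) only needs a per-read fallback; a genuinely
    // unknown response keeps the existing temporary SCR disable.
    public static Action classifyFailure(String errorMessage) {
        if (errorMessage != null && errorMessage.startsWith("Meta file for")) {
            return Action.FALLBACK_THIS_READ;   // other SCR reads keep working
        }
        return Action.DISABLE_SCR_TEMPORARILY;  // the current 10-minute penalty
    }
}
```

With this split, only the one affected block read pays the cost of the remote path, which is exactly the behavior the reporter asks for.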
[jira] [Commented] (HDFS-12942) Synchronization issue in FSDataSetImpl#moveBlock
[ https://issues.apache.org/jira/browse/HDFS-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344520#comment-16344520 ] genericqa commented on HDFS-12942: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 23s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 2s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 33s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 24s{color} | {color:red} The patch generated 1 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}154m 54s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestINodeFile | | | hadoop.hdfs.server.namenode.TestNameNodeXAttr | | | hadoop.hdfs.TestDatanodeRegistration | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.namenode.snapshot.TestSnapshot | | | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots | | | hadoop.hdfs.server.namenode.TestAuditLogs | | | hadoop.hdfs.server.namenode.TestAddStripedBlocks | | | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | | hadoop.hdfs.server.namenode.TestLargeDirectoryDelete | | | hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport | | | hadoop.hdfs.server.namenode.TestDefaultBlockPlacementPolicy | | | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport | | | hadoop.hdfs.server.namenode.TestFSImage | | | hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-12
[jira] [Commented] (HDFS-13062) Provide support for JN to use separate journal disk per namespace
[ https://issues.apache.org/jira/browse/HDFS-13062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344519#comment-16344519 ] Bharat Viswanadham commented on HDFS-13062: --- Hi [~hanishakoneru] and [~arpitagarwal] Thanks for review. Addressed review comments in v04 patch. > Provide support for JN to use separate journal disk per namespace > - > > Key: HDFS-13062 > URL: https://issues.apache.org/jira/browse/HDFS-13062 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13062.00.patch, HDFS-13062.01.patch, > HDFS-13062.02.patch, HDFS-13062.03.patch, HDFS-13062.04.patch > > > In Federated HA setup, provide support for separate journal disk for each > namespace. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13062) Provide support for JN to use separate journal disk per namespace
[ https://issues.apache.org/jira/browse/HDFS-13062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13062: -- Attachment: HDFS-13062.04.patch > Provide support for JN to use separate journal disk per namespace > - > > Key: HDFS-13062 > URL: https://issues.apache.org/jira/browse/HDFS-13062 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13062.00.patch, HDFS-13062.01.patch, > HDFS-13062.02.patch, HDFS-13062.03.patch, HDFS-13062.04.patch > > > In Federated HA setup, provide support for separate journal disk for each > namespace. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
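The improvement above boils down to letting each nameservice point at its own journal directory. A hypothetical hdfs-site.xml fragment illustrating the idea (the per-nameservice suffix on {{dfs.journalnode.edits.dir}} is an assumption about the patch, not confirmed here):

```xml
<!-- Illustrative only: the per-nameservice key suffix is an assumed form of
     the configuration this JIRA introduces. -->
<property>
  <name>dfs.journalnode.edits.dir.ns1</name>
  <value>/mnt/disk1/journal</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir.ns2</name>
  <value>/mnt/disk2/journal</value>
</property>
```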
[jira] [Commented] (HDFS-12528) Short-circuit reads unnecessarily disabled for a long time
[ https://issues.apache.org/jira/browse/HDFS-12528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344511#comment-16344511 ] Xiao Chen commented on HDFS-12528: -- Precommit failures are unrelated to the change. [~cheersyang] / [~jzhuge], would you mind giving a final pass? Thanks a lot > Short-circuit reads unnecessarily disabled for a long time > -- > > Key: HDFS-12528 > URL: https://issues.apache.org/jira/browse/HDFS-12528 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client, performance >Affects Versions: 2.6.0 >Reporter: Andre Araujo >Assignee: Xiao Chen >Priority: Major > Attachments: HDFS-12528.000.patch, HDFS-12528.01.patch, > HDFS-12528.02.patch, HDFS-12528.03.patch, HDFS-12528.04.patch, > HDFS-12528.05.patch > > > We have scenarios where data ingestion makes use of the -appendToFile > operation to add new data to existing HDFS files. In these situations, we're > frequently running into the problem described below. > We're using Impala to query the HDFS data with short-circuit reads (SCR) > enabled. After each file read, Impala "unbuffer"'s the HDFS file to reduce > the memory footprint. In some cases, though, Impala still keeps the HDFS file > handle open for reuse. > The "unbuffer" call, however, causes the file's current block reader to be > closed, which makes the associated ShortCircuitReplica evictable from the > ShortCircuitCache. When the cluster is under load, this means that the > ShortCircuitReplica can be purged off the cache pretty fast, which closes the > file descriptor to the underlying storage file. > That means that when Impala re-reads the file it has to re-open the storage > files associated with the ShortCircuitReplica's that were evicted from the > cache. If there were no appends to those blocks, the re-open will succeed > without problems. 
If one block was appended since the ShortCircuitReplica was > created, the re-open will fail with the following error: > {code} > Meta file for BP-810388474-172.31.113.69-1499543341726:blk_1074012183_273087 > not found > {code} > This error is handled as an "unknown response" by the BlockReaderFactory [1], > which disables short-circuit reads for 10 minutes [2] for the client. > These 10 minutes without SCR can have a big performance impact for the client > operations. In this particular case ("Meta file not found") it would suffice > to return null without disabling SCR. This particular block read would fall > back to the normal, non-short-circuited, path and other SCR requests would > continue to work as expected. > It might also be interesting to be able to control how long SCR is disabled > for in the "unknown response" case. 10 minutes seems a bit too long and not > being able to change that is a problem. > [1] > https://github.com/apache/hadoop/blob/f67237cbe7bc48a1b9088e990800b37529f1db2a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java#L646 > [2] > https://github.com/apache/hadoop/blob/f67237cbe7bc48a1b9088e990800b37529f1db2a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/DomainSocketFactory.java#L97 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12522) Ozone: Remove the Priority Queues used in the Container State Manager
[ https://issues.apache.org/jira/browse/HDFS-12522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344509#comment-16344509 ] genericqa commented on HDFS-12522: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 8 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 30s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 10s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 3s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 4s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 9s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 15s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 5s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 59s{color} | {color:red} hadoop-hdfs-project generated 12 new + 434 unchanged - 0 fixed = 446 total (was 434) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 44s{color} | {color:orange} hadoop-hdfs-project: The patch generated 7 new + 9 unchanged - 0 fixed = 16 total (was 9) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 51s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 14s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 50s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 11s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}185m 17s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 | | | hadoop.hdfs.TestErasureCodingPolicies | | | hadoop.hdfs.TestFileChecksum | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.TestHFlush | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestAppendSnapshotTruncate | | | hadoop.hdfs.TestInjectionForSimulat
[jira] [Commented] (HDFS-13044) RBF: Add a safe mode for the Router
[ https://issues.apache.org/jira/browse/HDFS-13044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344502#comment-16344502 ] Hudson commented on HDFS-13044: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13579 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13579/]) HDFS-13044. RBF: Add a safe mode for the Router. Contributed by Inigo (yqlin: rev dbb9dded33b3cff3b630e98300d30515a9d1eec4) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java * (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterSafemodeService.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java * (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterSafeModeException.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/RouterConfigBuilder.java * (add) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterSafemode.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreCacheUpdateService.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSRouterFederation.md * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java > RBF: Add a safe mode for the Router > --- > > Key: HDFS-13044 > URL: https://issues.apache.org/jira/browse/HDFS-13044 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-13004.000.patch, HDFS-13044.001.patch, > HDFS-13044.002.patch, HDFS-13044.003.patch, HDFS-13044.004.patch > > > When a Router cannot communicate with the 
State Store, it should enter into a > safe mode that disallows certain operations. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
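The safe-mode behavior this issue adds can be sketched as a simple gate: while the State Store is unreachable, mutating operations are rejected (the class and method names below are illustrative, not the committed HDFS-13044 implementation):

```java
// Hypothetical sketch of the router safe-mode gate described above.
public class RouterSafemodeGate {
    private volatile boolean safeMode = false;

    public void enterSafeMode() { safeMode = true;  }  // State Store unreachable
    public void leaveSafeMode() { safeMode = false; }  // connectivity restored

    // Reject mutating operations while in safe mode; reads may still be served.
    public void checkOperation(boolean isWrite) {
        if (safeMode && isWrite) {
            throw new IllegalStateException(
                "Router is in safe mode and cannot handle write requests");
        }
    }
}
```

In the real patch the equivalent check lives in the Router RPC server and throws a dedicated RouterSafeModeException.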
[jira] [Commented] (HDFS-12897) Path not found when we get the ec policy for a .snapshot dir
[ https://issues.apache.org/jira/browse/HDFS-12897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344488#comment-16344488 ] Xiao Chen commented on HDFS-12897: -- Thanks for revving [~GeLiXin], patch 5 LGTM. Will wait for a few days in case Rakesh wants to take a further look. Do you want to fix this for encryption zones as well, per Rakesh's comment? > Path not found when we get the ec policy for a .snapshot dir > > > Key: HDFS-12897 > URL: https://issues.apache.org/jira/browse/HDFS-12897 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding, hdfs, snapshots >Affects Versions: 3.0.0-alpha1, 3.1.0 >Reporter: Harshakiran Reddy >Assignee: LiXin Ge >Priority: Major > Attachments: HDFS-12897.001.patch, HDFS-12897.002.patch, > HDFS-12897.003.patch, HDFS-12897.004.patch, HDFS-12897.005.patch > > > Scenario:- > --- > Operation on snapshot dir. > *EC policy* > bin> ./hdfs ec -getPolicy -path /dir/ > RS-3-2-1024k > bin> ./hdfs ec -getPolicy -path /dir/.snapshot/ > {{FileNotFoundException: Path not found: /dir/.snapshot}} > bin> ./hdfs dfs -ls /dir/.snapshot/ > Found 2 items > drwxr-xr-x - user group 0 2017-12-05 12:27 /dir/.snapshot/s1 > drwxr-xr-x - user group 0 2017-12-05 12:28 /dir/.snapshot/s2 > *Storagepolicies* > bin> ./hdfs storagepolicies -getStoragePolicy -path /dir/.snapshot/ > {{The storage policy of /dir/.snapshot/ is unspecified}} > bin> ./hdfs storagepolicies -getStoragePolicy -path /dir/ > The storage policy of /dir/: > BlockStoragePolicy{COLD:2, storageTypes=[ARCHIVE], creationFallbacks=[], > replicationFallbacks=[]} > *Which is the correct behavior?* -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13044) RBF: Add a safe mode for the Router
[ https://issues.apache.org/jira/browse/HDFS-13044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344479#comment-16344479 ] Yiqun Lin commented on HDFS-13044: -- I have only committed this to trunk for now; there are some conflicts when cherry-picking to branch-3.0. [~elgoiri], would you take care of committing this to the remaining branches? > RBF: Add a safe mode for the Router > --- > > Key: HDFS-13044 > URL: https://issues.apache.org/jira/browse/HDFS-13044 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-13004.000.patch, HDFS-13044.001.patch, > HDFS-13044.002.patch, HDFS-13044.003.patch, HDFS-13044.004.patch > > > When a Router cannot communicate with the State Store, it should enter into a > safe mode that disallows certain operations. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13044) RBF: Add a safe mode for the Router
[ https://issues.apache.org/jira/browse/HDFS-13044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344473#comment-16344473 ] Yiqun Lin commented on HDFS-13044: -- Since HDFS-13068 depends on this JIRA, I'd like to commit this. > RBF: Add a safe mode for the Router > --- > > Key: HDFS-13044 > URL: https://issues.apache.org/jira/browse/HDFS-13044 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-13004.000.patch, HDFS-13044.001.patch, > HDFS-13044.002.patch, HDFS-13044.003.patch, HDFS-13044.004.patch > > > When a Router cannot communicate with the State Store, it should enter into a > safe mode that disallows certain operations. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12942) Synchronization issue in FSDataSetImpl#moveBlock
[ https://issues.apache.org/jira/browse/HDFS-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344424#comment-16344424 ] Ajay Kumar edited comment on HDFS-12942 at 1/30/18 2:50 AM: [~virajith], updated the patch to address your comments, with one minor change: instead of incrementing both dfsUsed and the number of blocks for the volume, the new patch increments only the number of blocks (as is the case right now), i.e. {{volume.incrNumBlocks(bpid)}} instead of {{volume.incDfsUsedAndNumBlocks(bpid, newReplicaInfo.getBytesOnDisk())}}. I am investigating whether the current code fails to increment dfsUsed correctly, but if so that is a different bug altogether (will file another JIRA for it). was (Author: ajayydv): [~virajith], Updated patch to address your comments with one minor change. Instead of incrementing both dfs and number of blocks for volume new patch increments only no of blocks(as is the case right now). I am investigating if current code doesn't increment dfs used correctly but if that is the case it will be different bug altogether. (will file another jira for it) {{volume.incrNumBlocks(bpid)}} {{volume.incDfsUsedAndNumBlocks(bpid, newReplicaInfo.getBytesOnDisk())}} > Synchronization issue in FSDataSetImpl#moveBlock > > > Key: HDFS-12942 > URL: https://issues.apache.org/jira/browse/HDFS-12942 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Attachments: HDFS-12942.001.patch, HDFS-12942.002.patch, > HDFS-12942.003.patch, HDFS-12942.004.patch, HDFS-12942.005.patch, > HDFS-12942.006.patch > > > FSDataSetImpl#moveBlock works in the following 3 steps: > # first creates a new replicaInfo object > # calls finalizeReplica to finalize it. > # Calls removeOldReplica to remove oldReplica. > A client can potentially append to the old replica between step 1 and 2. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12942) Synchronization issue in FSDataSetImpl#moveBlock
[ https://issues.apache.org/jira/browse/HDFS-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12942: -- Attachment: HDFS-12942.006.patch > Synchronization issue in FSDataSetImpl#moveBlock > > > Key: HDFS-12942 > URL: https://issues.apache.org/jira/browse/HDFS-12942 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Attachments: HDFS-12942.001.patch, HDFS-12942.002.patch, > HDFS-12942.003.patch, HDFS-12942.004.patch, HDFS-12942.005.patch, > HDFS-12942.006.patch > > > FSDataSetImpl#moveBlock works in the following 3 steps: > # first creates a new replicaInfo object > # calls finalizeReplica to finalize it. > # Calls removeOldReplica to remove oldReplica. > A client can potentially append to the old replica between step 1 and 2. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12942) Synchronization issue in FSDataSetImpl#moveBlock
[ https://issues.apache.org/jira/browse/HDFS-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344424#comment-16344424 ] Ajay Kumar commented on HDFS-12942: --- [~virajith], updated the patch to address your comments, with one minor change: instead of incrementing both dfsUsed and the number of blocks for the volume, the new patch increments only the number of blocks (as is the case right now), i.e. {{volume.incrNumBlocks(bpid)}} instead of {{volume.incDfsUsedAndNumBlocks(bpid, newReplicaInfo.getBytesOnDisk())}}. I am investigating whether the current code fails to increment dfsUsed correctly, but if so that is a different bug altogether (will file another JIRA for it). > Synchronization issue in FSDataSetImpl#moveBlock > > > Key: HDFS-12942 > URL: https://issues.apache.org/jira/browse/HDFS-12942 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Attachments: HDFS-12942.001.patch, HDFS-12942.002.patch, > HDFS-12942.003.patch, HDFS-12942.004.patch, HDFS-12942.005.patch > > > FSDataSetImpl#moveBlock works in the following 3 steps: > # first creates a new replicaInfo object > # calls finalizeReplica to finalize it. > # Calls removeOldReplica to remove oldReplica. > A client can potentially append to the old replica between step 1 and 2. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
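The race described in the issue (an append sneaking in between creating the new replica and finalizing it) is avoided by performing all three steps of moveBlock under one dataset lock. A minimal sketch of the locking idea only; the names are hypothetical stand-ins for FsDatasetImpl internals, not the actual patch:

```java
// Illustrative sketch: BlockMover models the synchronization fix, not the
// real FsDatasetImpl code.
public class BlockMover {
    private final Object datasetLock = new Object();
    private boolean oldReplicaPresent = true;
    private String finalizedReplica = null;

    // The three steps of moveBlock under a single lock, so an append cannot
    // interleave between creating the new replica (1) and finalizing it (2).
    public void moveBlock(String blockId) {
        synchronized (datasetLock) {
            String newReplicaInfo = blockId + "-moved"; // 1. create new replicaInfo
            finalizedReplica = newReplicaInfo;          // 2. finalize the replica
            oldReplicaPresent = false;                  // 3. remove the old replica
        }
    }

    // Appends take the same lock; once the move completes, the old replica is
    // gone and the caller must target the finalized copy instead.
    public boolean appendToOldReplica(String blockId) {
        synchronized (datasetLock) {
            return oldReplicaPresent;
        }
    }
}
```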
[jira] [Commented] (HDFS-13044) RBF: Add a safe mode for the Router
[ https://issues.apache.org/jira/browse/HDFS-13044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344423#comment-16344423 ] Yiqun Lin commented on HDFS-13044: -- Thanks for updating the patch, [~elgoiri]. Only one nit:
{noformat}
+if (delta < startupInterval) {
+  LOG.info("Delaying safemode exit for {} seconds...",
+      this.startupInterval - delta);
+  return;
+}
{noformat}
{{seconds}} should be {{milliseconds}}. You can fix this while committing. +1. > RBF: Add a safe mode for the Router > --- > > Key: HDFS-13044 > URL: https://issues.apache.org/jira/browse/HDFS-13044 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-13004.000.patch, HDFS-13044.001.patch, > HDFS-13044.002.patch, HDFS-13044.003.patch, HDFS-13044.004.patch > > > When a Router cannot communicate with the State Store, it should enter into a > safe mode that disallows certain operations. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
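The nit above is a unit mismatch: delta and startupInterval are both in milliseconds, so the message should say so (or convert before logging). A minimal sketch of the corrected message; the helper name is illustrative, not the committed code:

```java
// Hypothetical helper mirroring the quoted snippet; both arguments are in
// milliseconds, matching the variables in the patch.
public class SafemodeDelay {
    public static String delayMessage(long startupIntervalMs, long deltaMs) {
        return "Delaying safemode exit for " + (startupIntervalMs - deltaMs)
            + " milliseconds...";
    }
}
```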
[jira] [Commented] (HDFS-12654) APPEND API call is different in HTTPFS and NameNode REST
[ https://issues.apache.org/jira/browse/HDFS-12654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344419#comment-16344419 ] Masatake Iwasaki commented on HDFS-12654: - Thanks for the info. WebHDFS does not create a new file when append is requested; it returns 404. NamenodeWebHdfsMethods#chooseDataNode::
{noformat}
} else if (op == GetOpParam.Op.OPEN
    || op == GetOpParam.Op.GETFILECHECKSUM
    || op == PostOpParam.Op.APPEND) {
  //choose a datanode containing a replica
  final NamenodeProtocols np = getRPCServer(namenode);
  final HdfsFileStatus status = np.getFileInfo(path);
  if (status == null) {
    throw new FileNotFoundException("File " + path + " not found.");
  }
{noformat}
The non-existent file seems to be created by fluent-plugin-webhdfs. out_webhdfs.rb::
{noformat}
def send_data(path, data)
  if @append
    begin
      @client.append(path, data)
    rescue WebHDFS::FileNotFoundError
      @client.create(path, data)
    end
{noformat}
The issue stated in the ticket is that WebHDFS returns 404 but HttpFs returns 500. I could not reproduce this. {quote} WebHDFS::ServerError means that the client (fluentd) receives HTTP response code 500 from the HttpFs server. The WebHDFS server returns 404 for such cases. {quote} > APPEND API call is different in HTTPFS and NameNode REST > > > Key: HDFS-12654 > URL: https://issues.apache.org/jira/browse/HDFS-12654 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, httpfs, namenode >Affects Versions: 2.6.0, 2.7.0, 2.8.0, 3.0.0-beta1 >Reporter: Andras Czesznak >Priority: Major > > The APPEND REST API call behaves differently in the NameNode REST and the > HTTPFS codes. The NameNode version creates the target file that the new data is being > appended to if it does not exist at the time the call is issued. The HTTPFS > version assumes the target file exists when APPEND is called and can append > only the new data, but does not create the target file if it doesn't exist. 
> The two implementations should be standardized, preferably the HTTPFS version > should be modified to execute an implicit CREATE if the target file does not > exist. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
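The client-side pattern in the comment (fluent-plugin-webhdfs appends and only creates on a 404) can be sketched as a status-code decision. It also shows why the reported 500 from HttpFS is a problem: the client gets no safe signal to create the file. The names httpAppend/httpCreate are hypothetical transport hooks, not a real WebHDFS client API:

```java
import java.util.function.Supplier;

// Illustrative sketch of append-then-create-on-404, as fluent-plugin-webhdfs
// does in Ruby. httpAppend returns the HTTP status of the append attempt.
public class AppendOrCreate {
    public static String appendOrCreate(Supplier<Integer> httpAppend, Runnable httpCreate) {
        int status = httpAppend.get();
        if (status == 404) {      // WebHDFS: file does not exist, so create it
            httpCreate.run();
            return "created";
        }
        if (status >= 500) {      // HttpFS today: ambiguous server error,
            return "error";       // no safe fallback to create
        }
        return "appended";
    }
}
```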
[jira] [Commented] (HDFS-13062) Provide support for JN to use separate journal disk per namespace
[ https://issues.apache.org/jira/browse/HDFS-13062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344417#comment-16344417 ] genericqa commented on HDFS-13062: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 44s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 35s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 17 unchanged - 0 fixed = 20 total (was 17) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 10s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 50s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}163m 3s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-13062 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12908228/HDFS-13062.03.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ebb7826651d4 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / fde95d4 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/22876/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/22876/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/22876/testReport/ | | Max. process+thread count | 3159 (vs. ulimit of 5000) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output |
[jira] [Updated] (HDFS-13043) RBF: Expose the state of the Routers in the federation
[ https://issues.apache.org/jira/browse/HDFS-13043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13043: --- Attachment: HDFS-13043.005.patch > RBF: Expose the state of the Routers in the federation > -- > > Key: HDFS-13043 > URL: https://issues.apache.org/jira/browse/HDFS-13043 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-13043.000.patch, HDFS-13043.001.patch, > HDFS-13043.002.patch, HDFS-13043.003.patch, HDFS-13043.004.patch, > HDFS-13043.005.patch, router-info.png > > > The Router should expose the state of the other Routers in the federation > through a user UI.
[jira] [Commented] (HDFS-13043) RBF: Expose the state of the Routers in the federation
[ https://issues.apache.org/jira/browse/HDFS-13043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344392#comment-16344392 ] genericqa commented on HDFS-13043: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 43s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 6s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 32s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 50s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 2m 5s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} cc {color} | {color:red} 2m 5s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 2m 5s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 51s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 2m 37s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 17s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 13s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 48s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 19s{color} | {color:red} The patch generated 5 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 91m 44s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ipc.TestProtoBufRpc | | | hadoop.ipc.TestCallQueueManager | | | hadoop.ipc.TestIPC | | | hadoop.ipc.TestReuseRpcConnections | | | hadoop.ha.TestZKFailoverControllerStress | | | hadoop.ipc.TestRPCCallBenchmark | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-13043 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12908237/HDFS-13043.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml f
[jira] [Commented] (HDFS-12522) Ozone: Remove the Priority Queues used in the Container State Manager
[ https://issues.apache.org/jira/browse/HDFS-12522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344386#comment-16344386 ] Anu Engineer commented on HDFS-12522: - Updated the patch for test runs. > Ozone: Remove the Priority Queues used in the Container State Manager > - > > Key: HDFS-12522 > URL: https://issues.apache.org/jira/browse/HDFS-12522 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Anu Engineer >Assignee: Anu Engineer >Priority: Major > Attachments: HDFS-12522-HDFS-7240.001.patch, > HDFS-12522-HDFS-7240.002.patch, HDFS-12522-HDFS-7240.003.patch, > HDFS-12522-HDFS-7240.004.patch > > > During code review of HDFS-12387, it was suggested that we remove the > priority queues that were used in ContainerStateManager. This JIRA tracks that > issue.
[jira] [Updated] (HDFS-12522) Ozone: Remove the Priority Queues used in the Container State Manager
[ https://issues.apache.org/jira/browse/HDFS-12522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12522: Attachment: HDFS-12522-HDFS-7240.004.patch > Ozone: Remove the Priority Queues used in the Container State Manager > - > > Key: HDFS-12522 > URL: https://issues.apache.org/jira/browse/HDFS-12522 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Anu Engineer >Assignee: Anu Engineer >Priority: Major > Attachments: HDFS-12522-HDFS-7240.001.patch, > HDFS-12522-HDFS-7240.002.patch, HDFS-12522-HDFS-7240.003.patch, > HDFS-12522-HDFS-7240.004.patch > > > During code review of HDFS-12387, it was suggested that we remove the > priority queues that were used in ContainerStateManager. This JIRA tracks that > issue.
[jira] [Commented] (HDFS-13061) SaslDataTransferClient#checkTrustAndSend should not trust a partially trusted channel
[ https://issues.apache.org/jira/browse/HDFS-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344371#comment-16344371 ] genericqa commented on HDFS-13061: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 10s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 46s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 1m 0s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 24s{color} | {color:green} hadoop-hdfs-project generated 0 new + 433 unchanged - 1 fixed = 433 total (was 434) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 28s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 24s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 16s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}138m 3s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 | | | hadoop.hdfs.TestFileChecksum | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 | | | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFai
[jira] [Commented] (HDFS-13079) Provide a config to start namenode in safemode state upto a certain transaction id
[ https://issues.apache.org/jira/browse/HDFS-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344356#comment-16344356 ] Chen Liang commented on HDFS-13079:
Thanks for working on this [~msingh]! I've only quickly looked at the patch and have one question. {{loadTillTxid}} gets initialized to {{INVALID_TXID}}, so does this mean {{getLoadTillTxid}} could potentially return {{INVALID_TXID}}? I think it might be better to return a null value from {{getLoadTillTxid}} if it turns out {{loadTillTxid}} is still {{INVALID_TXID}}. For
{code}
lastAppliedTxId >= target.getLoadTillTxid()
{code}
we probably need some additional checking, such as
{code}
target.getLoadTillTxid() != INVALID_TXID && lastAppliedTxId >= target.getLoadTillTxid()
{code}
because {{INVALID_TXID}} is -12345, so any practical value of {{lastAppliedTxId}} will be >= {{target.getLoadTillTxid()}} if {{loadTillTxid}} is still its initial value of {{INVALID_TXID}}.
> Provide a config to start namenode in safemode state upto a certain transaction id
>
> Key: HDFS-13079
> URL: https://issues.apache.org/jira/browse/HDFS-13079
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: namenode
> Reporter: Mukul Kumar Singh
> Assignee: Mukul Kumar Singh
> Priority: Major
> Attachments: HDFS-13079.001.patch
>
> In some cases it is necessary to roll back the Namenode to a certain transaction id. This is especially needed when the user issues an {{rm -Rf -skipTrash}} by mistake.
> Rolling back to a transaction id helps in taking a peek at the filesystem at a particular instant. This jira proposes to provide a configuration variable using which the namenode can be started up to a certain transaction id. The filesystem will be in a readonly safemode which cannot be overridden manually. It can only be overridden by removing the config value from the config file.
> Please also note that this will not cause any changes in the filesystem state; the filesystem will be in safemode state and no changes to the filesystem state will be allowed.
> Please note that in case a checkpoint has already happened and the requested transaction id has been subsumed in an FSImage, then the namenode will be started with the next nearest transaction id. Further FSImage files and edits will be ignored.
> If the checkpoint hasn't happened, then the namenode will be started with the exact transaction id.
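The sentinel guard proposed in the comment above can be sketched as a small helper. {{SafeModeTarget}} and {{reached()}} are illustrative names invented for this sketch, not the HDFS classes; only the {{INVALID_TXID}} value of -12345 and the two-clause check come from the comment.

```java
// Sketch of the sentinel-aware check suggested above. Without the first
// clause, any practical lastAppliedTxId would compare greater than the
// -12345 sentinel and the target would look "reached" even when no target
// transaction id was ever configured.
class SafeModeTarget {
    static final long INVALID_TXID = -12345;

    private final long loadTillTxid;

    SafeModeTarget(long loadTillTxid) {
        this.loadTillTxid = loadTillTxid;
    }

    long getLoadTillTxid() {
        return loadTillTxid;
    }

    // True only when a real target is configured AND we have applied
    // enough transactions to reach it.
    boolean reached(long lastAppliedTxId) {
        return getLoadTillTxid() != INVALID_TXID
            && lastAppliedTxId >= getLoadTillTxid();
    }
}
```

The first clause is the whole point of the fix: with the unconfigured sentinel in place, the unguarded comparison is trivially true for every non-negative transaction id.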
[jira] [Commented] (HDFS-13061) SaslDataTransferClient#checkTrustAndSend should not trust a partially trusted channel
[ https://issues.apache.org/jira/browse/HDFS-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344351#comment-16344351 ] genericqa commented on HDFS-13061: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 11s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 28s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 39s{color} | {color:green} hadoop-hdfs-project generated 0 new + 433 unchanged - 1 fixed = 433 total (was 434) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 31s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 35s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}116m 28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}184m 48s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy | | | hadoop.hdfs.TestMaintenanceState | | | hadoop.hdfs.TestSafeMode | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.datanode.TestDataNodeUUID | | | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy | | | hadoop.hdfs.server.datanode.TestDataNodeMetrics | | | had
[jira] [Commented] (HDFS-13062) Provide support for JN to use separate journal disk per namespace
[ https://issues.apache.org/jira/browse/HDFS-13062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344352#comment-16344352 ] Hanisha Koneru commented on HDFS-13062: --- In {{getLogDir()}}, before adding a new dir to {{localDir}}, we should check that it's not a duplicate entry. This can happen if the journal dirs for two namespaces are set separately but to the same location. > Provide support for JN to use separate journal disk per namespace > - > > Key: HDFS-13062 > URL: https://issues.apache.org/jira/browse/HDFS-13062 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13062.00.patch, HDFS-13062.01.patch, > HDFS-13062.02.patch, HDFS-13062.03.patch > > > In Federated HA setup, provide support for separate journal disk for each > namespace.
[jira] [Commented] (HDFS-13062) Provide support for JN to use separate journal disk per namespace
[ https://issues.apache.org/jira/browse/HDFS-13062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344315#comment-16344315 ] Hanisha Koneru commented on HDFS-13062:
# In patch v03, if different journal dirs are configured, then we are not doing a {{checkDir()}} operation on those directories. We should call {{validateAndCreateJournalDir(dir)}} when adding a dir to {{localDir}}.
# In {{validateAndCreateJournalDir()}}, the two for loops can be combined into one.
{code:java}
for (File journalDir : localDir) {
  if (!journalDir.isAbsolute()) {
    throw new IllegalArgumentException(
        "Journal dir '" + journalDir + "' should be an absolute path");
  }
}
for (File jDir : localDir) {
  DiskChecker.checkDir(jDir);
}{code}
> Provide support for JN to use separate journal disk per namespace
>
> Key: HDFS-13062
> URL: https://issues.apache.org/jira/browse/HDFS-13062
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Bharat Viswanadham
> Assignee: Bharat Viswanadham
> Priority: Major
> Attachments: HDFS-13062.00.patch, HDFS-13062.01.patch, HDFS-13062.02.patch, HDFS-13062.03.patch
>
> In Federated HA setup, provide support for separate journal disk for each namespace.
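The two review suggestions above (merge the loops, reject duplicate dirs) can be sketched together. {{JournalDirs}}, {{addLogDir()}}, and the no-op {{checkDir()}} hook are illustrative names for this sketch, not the patch itself; the real code would call Hadoop's {{DiskChecker.checkDir(jDir)}} where the hook is.

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Sketch of the combined validation loop plus the duplicate check from the
// review comments. Class and method names are hypothetical.
class JournalDirs {
    private final List<File> localDir = new ArrayList<>();

    // Skip duplicates: two namespaces may be configured separately but
    // point at the same journal dir location.
    void addLogDir(File journalDir) {
        if (!localDir.contains(journalDir)) {
            localDir.add(journalDir);
        }
    }

    // One loop does both the absolute-path check and the disk check,
    // instead of two passes over localDir.
    void validateAndCreateJournalDir() {
        for (File journalDir : localDir) {
            if (!journalDir.isAbsolute()) {
                throw new IllegalArgumentException(
                    "Journal dir '" + journalDir + "' should be an absolute path");
            }
            checkDir(journalDir);  // stand-in for DiskChecker.checkDir(jDir)
        }
    }

    void checkDir(File dir) {
        // DiskChecker.checkDir would create the dir and verify it is
        // readable/writable; omitted in this sketch.
    }

    int size() {
        return localDir.size();
    }
}
```

One caveat with fail-fast validation in a single loop: the first bad dir aborts the disk checks for the rest, which matches the original two-loop behavior for the absolute-path check but runs the disk check earlier for dirs that precede a bad entry.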
[jira] [Commented] (HDFS-13061) SaslDataTransferClient#checkTrustAndSend should not trust a partially trusted channel
[ https://issues.apache.org/jira/browse/HDFS-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344316#comment-16344316 ] genericqa commented on HDFS-13061: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 32s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 44s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 22s{color} | {color:green} hadoop-hdfs-project generated 0 new + 433 unchanged - 1 fixed = 433 total (was 434) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 33s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 24s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}121m 57s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}176m 59s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-13061 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12908210/HDFS-13061.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 97b114b0cc96 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_
[jira] [Commented] (HDFS-11187) Optimize disk access for last partial chunk checksum of Finalized replica
[ https://issues.apache.org/jira/browse/HDFS-11187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344280#comment-16344280 ] genericqa commented on HDFS-11187: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 9m 37s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 15s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 40s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 6 new + 388 unchanged - 0 fixed = 394 total (was 388) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 49s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 10s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}124m 39s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.mover.TestStorageMover | | | hadoop.hdfs.server.mover.TestMover | | | hadoop.hdfs.TestReadStripedFileWithDecodingDeletedData | | | hadoop.hdfs.server.datanode.TestLargeBlockReport | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-11187 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12908216/HDFS-11187.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux d4a7166b8e80 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 15:49:21 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7fd287b | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/22873/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/22873/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-h
[jira] [Commented] (HDFS-13062) Provide support for JN to use separate journal disk per namespace
[ https://issues.apache.org/jira/browse/HDFS-13062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344271#comment-16344271 ] Arpit Agarwal commented on HDFS-13062: -- Added a couple of comments to the review board. Still reviewing the unit tests. > Provide support for JN to use separate journal disk per namespace > - > > Key: HDFS-13062 > URL: https://issues.apache.org/jira/browse/HDFS-13062 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13062.00.patch, HDFS-13062.01.patch, > HDFS-13062.02.patch, HDFS-13062.03.patch > > > In Federated HA setup, provide support for separate journal disk for each > namespace. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
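Per-namespace journal disks would presumably surface as per-nameservice configuration keys on the JournalNode. A hypothetical hdfs-site.xml fragment, assuming the patch keys the existing {{dfs.journalnode.edits.dir}} property by nameservice ID (the suffixed key names and paths below are an illustration, not confirmed by this thread):

```xml
<!-- Hypothetical fragment: one journal disk per nameservice.
     The ".ns1"/".ns2" suffixed keys are assumed, not confirmed here. -->
<property>
  <name>dfs.journalnode.edits.dir.ns1</name>
  <value>/disk1/journal</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir.ns2</name>
  <value>/disk2/journal</value>
</property>
```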
[jira] [Commented] (HDFS-13060) Adding a BlacklistBasedTrustedChannelResolver for TrustedChannelResolver
[ https://issues.apache.org/jira/browse/HDFS-13060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344269#comment-16344269 ] genericqa commented on HDFS-13060: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 14s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 52s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 3s{color} | {color:orange} root: The patch generated 15 new + 0 unchanged - 0 fixed = 15 total (was 0) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 39s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 16s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 18s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 0s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1) {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 34s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 41s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 89m 13s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.fs.shell.TestCopyFromLocal | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-13060 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12907951/HDFS-13060.000.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b5fed36a5b0f 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7fd287b | | maven | version: Apache Maven 3.3.9 | | Default Java |
[jira] [Updated] (HDFS-12879) Ozone : add scm init command to document.
[ https://issues.apache.org/jira/browse/HDFS-12879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-12879: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-7240 Status: Resolved (was: Patch Available) Thanks [~rahulp] for the contribution and everyone for the reviews. I've committed the patch to the feature branch. > Ozone : add scm init command to document. > - > > Key: HDFS-12879 > URL: https://issues.apache.org/jira/browse/HDFS-12879 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Chen Liang >Assignee: Rahul Pathak >Priority: Minor > Labels: newbie > Fix For: HDFS-7240 > > Attachments: HDFS-12879-HDFS-7240.001.patch > > > When an Ozone cluster is initialized, before starting SCM through {{hdfs > --daemon start scm}}, the command {{hdfs scm -init}} needs to be called > first. But it seems this command is not documented. We should add a note > about it to the documentation. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13043) RBF: Expose the state of the Routers in the federation
[ https://issues.apache.org/jira/browse/HDFS-13043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13043: --- Attachment: HDFS-13043.004.patch > RBF: Expose the state of the Routers in the federation > -- > > Key: HDFS-13043 > URL: https://issues.apache.org/jira/browse/HDFS-13043 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-13043.000.patch, HDFS-13043.001.patch, > HDFS-13043.002.patch, HDFS-13043.003.patch, HDFS-13043.004.patch, > router-info.png > > > The Router should expose the state of the other Routers in the federation > through a user UI. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kihwal Lee updated HDFS-12574: -- Fix Version/s: 2.10.0 > Add CryptoInputStream to WebHdfsFileSystem read call. > - > > Key: HDFS-12574 > URL: https://issues.apache.org/jira/browse/HDFS-12574 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Major > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4 > > Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, > HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, > HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, > HDFS-12574.009.patch, HDFS-12574.010.branch-2.8.patch, > HDFS-12574.010.branch-2.patch, HDFS-12574.010.patch, > HDFS-12574.011.branch-2.8.patch, HDFS-12574.011.branch-2.patch, > HDFS-12574.012.branch-2.8.patch, HDFS-12574.012.branch-2.patch, > HDFS-12574.012.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12879) Ozone : add scm init command to document.
[ https://issues.apache.org/jira/browse/HDFS-12879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344248#comment-16344248 ] Xiaoyu Yao commented on HDFS-12879: --- LGTM, +1. I will commit it shortly. > Ozone : add scm init command to document. > - > > Key: HDFS-12879 > URL: https://issues.apache.org/jira/browse/HDFS-12879 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Chen Liang >Assignee: Rahul Pathak >Priority: Minor > Labels: newbie > Attachments: HDFS-12879-HDFS-7240.001.patch > > > When an Ozone cluster is initialized, before starting SCM through {{hdfs > --daemon start scm}}, the command {{hdfs scm -init}} needs to be called > first. But it seems this command is not documented. We should add a note > about it to the documentation. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
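The quoted description boils down to a two-step startup order, which is what the added documentation should state. A plain command sketch (commands taken verbatim from the description above):

```shell
# One-time initialization of SCM state; must run before the daemon starts.
hdfs scm -init

# Then start the Storage Container Manager daemon.
hdfs --daemon start scm
```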
[jira] [Updated] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kihwal Lee updated HDFS-12574: -- Attachment: HDFS-12574.012.branch-2.8.patch > Add CryptoInputStream to WebHdfsFileSystem read call. > - > > Key: HDFS-12574 > URL: https://issues.apache.org/jira/browse/HDFS-12574 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Major > Fix For: 3.1.0, 2.9.1, 3.0.1, 2.8.4 > > Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, > HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, > HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, > HDFS-12574.009.patch, HDFS-12574.010.branch-2.8.patch, > HDFS-12574.010.branch-2.patch, HDFS-12574.010.patch, > HDFS-12574.011.branch-2.8.patch, HDFS-12574.011.branch-2.patch, > HDFS-12574.012.branch-2.8.patch, HDFS-12574.012.branch-2.patch, > HDFS-12574.012.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kihwal Lee updated HDFS-12574: -- Fix Version/s: 2.8.4 > Add CryptoInputStream to WebHdfsFileSystem read call. > - > > Key: HDFS-12574 > URL: https://issues.apache.org/jira/browse/HDFS-12574 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Major > Fix For: 3.1.0, 2.9.1, 3.0.1, 2.8.4 > > Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, > HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, > HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, > HDFS-12574.009.patch, HDFS-12574.010.branch-2.8.patch, > HDFS-12574.010.branch-2.patch, HDFS-12574.010.patch, > HDFS-12574.011.branch-2.8.patch, HDFS-12574.011.branch-2.patch, > HDFS-12574.012.branch-2.8.patch, HDFS-12574.012.branch-2.patch, > HDFS-12574.012.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344239#comment-16344239 ] Kihwal Lee commented on HDFS-12574: --- Committed to branch-2.8. > Add CryptoInputStream to WebHdfsFileSystem read call. > - > > Key: HDFS-12574 > URL: https://issues.apache.org/jira/browse/HDFS-12574 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Major > Fix For: 3.1.0, 2.9.1, 3.0.1, 2.8.4 > > Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, > HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, > HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, > HDFS-12574.009.patch, HDFS-12574.010.branch-2.8.patch, > HDFS-12574.010.branch-2.patch, HDFS-12574.010.patch, > HDFS-12574.011.branch-2.8.patch, HDFS-12574.011.branch-2.patch, > HDFS-12574.012.branch-2.8.patch, HDFS-12574.012.branch-2.patch, > HDFS-12574.012.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13060) Adding a BlacklistBasedTrustedChannelResolver for TrustedChannelResolver
[ https://issues.apache.org/jira/browse/HDFS-13060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344226#comment-16344226 ] Xiaoyu Yao commented on HDFS-13060: --- Thanks [~ajayydv] for working on this. Patch looks good to me overall. Here are a few minor issues: CombinedIPBlacklist.java Can this be a common util class, like CombinedIPList, that can be used for both the whitelist and the blacklist? Line 27: NIT: unused LOCALHOST_IP BlackListBasedTrustedChannelResolver.java Line 38/44/50: the comment should note that these keys are for the server Line 60/65/70/76: the comment should note that these keys are for the client Can you open a separate ticket to support a composite trusted channel resolver that handles both a whitelist and a blacklist? > Adding a BlacklistBasedTrustedChannelResolver for TrustedChannelResolver > > > Key: HDFS-13060 > URL: https://issues.apache.org/jira/browse/HDFS-13060 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Ajay Kumar >Priority: Major > Attachments: HDFS-13060.000.patch > > > HDFS-5910 introduces encryption negotiation between client and server based > on a customizable TrustedChannelResolver class. The TrustedChannelResolver is > invoked on both the client and server side. If the resolver indicates that the > channel is trusted, then the data transfer will not be encrypted even if > dfs.encrypt.data.transfer is set to true. > The default trust channel resolver implementation returns false, indicating > that the channel is not trusted, which always enables encryption. HDFS-5910 > also added a built-in whitelist-based trust channel resolver. It allows you > to put the IP address/network mask of trusted clients/servers in whitelist files to > skip encryption for certain traffic. > This ticket is opened to add a blacklist-based trust channel resolver for > cases where only certain machines (IPs) are untrusted, without having to add each > trusted IP individually. 
> -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
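The blacklist semantics reviewed above invert the whitelist resolver: a channel is trusted (and so skips encryption) unless the peer is listed. A minimal, self-contained sketch of that logic, deliberately independent of Hadoop's actual TrustedChannelResolver class and its IP/netmask matching (the class name and exact-IP matching here are illustrative assumptions):

```java
import java.net.InetAddress;
import java.util.HashSet;
import java.util.Set;

// Hedged sketch of blacklist-based trust resolution: only peers on the
// blacklist are untrusted, so encryption is forced only for those peers.
public class BlacklistResolverSketch {
    private final Set<String> blacklist = new HashSet<>();

    public BlacklistResolverSketch(Set<String> untrustedIps) {
        blacklist.addAll(untrustedIps);
    }

    // Mirrors the shape of TrustedChannelResolver#isTrusted(InetAddress):
    // trusted unless the peer's address appears on the blacklist.
    public boolean isTrusted(InetAddress peer) {
        return !blacklist.contains(peer.getHostAddress());
    }

    public static void main(String[] args) throws Exception {
        Set<String> bad = new HashSet<>();
        bad.add("10.0.0.5");
        BlacklistResolverSketch r = new BlacklistResolverSketch(bad);
        // Numeric literals below avoid any DNS lookup.
        System.out.println(r.isTrusted(InetAddress.getByName("10.0.0.5"))); // false
        System.out.println(r.isTrusted(InetAddress.getByName("10.0.0.6"))); // true
    }
}
```

A composite resolver (as suggested in the review) could consult a whitelist first and fall back to this blacklist check.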
[jira] [Updated] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kihwal Lee updated HDFS-12574: -- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Also committed the updated patch to 2.9. Will do the same for 2.8. > Add CryptoInputStream to WebHdfsFileSystem read call. > - > > Key: HDFS-12574 > URL: https://issues.apache.org/jira/browse/HDFS-12574 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Major > Fix For: 3.1.0, 2.9.1, 3.0.1 > > Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, > HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, > HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, > HDFS-12574.009.patch, HDFS-12574.010.branch-2.8.patch, > HDFS-12574.010.branch-2.patch, HDFS-12574.010.patch, > HDFS-12574.011.branch-2.8.patch, HDFS-12574.011.branch-2.patch, > HDFS-12574.012.branch-2.patch, HDFS-12574.012.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kihwal Lee updated HDFS-12574: -- Attachment: HDFS-12574.012.branch-2.patch > Add CryptoInputStream to WebHdfsFileSystem read call. > - > > Key: HDFS-12574 > URL: https://issues.apache.org/jira/browse/HDFS-12574 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Major > Fix For: 3.1.0, 2.9.1, 3.0.1 > > Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, > HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, > HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, > HDFS-12574.009.patch, HDFS-12574.010.branch-2.8.patch, > HDFS-12574.010.branch-2.patch, HDFS-12574.010.patch, > HDFS-12574.011.branch-2.8.patch, HDFS-12574.011.branch-2.patch, > HDFS-12574.012.branch-2.patch, HDFS-12574.012.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kihwal Lee updated HDFS-12574: -- Fix Version/s: 2.9.1 > Add CryptoInputStream to WebHdfsFileSystem read call. > - > > Key: HDFS-12574 > URL: https://issues.apache.org/jira/browse/HDFS-12574 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Major > Fix For: 3.1.0, 2.9.1, 3.0.1 > > Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, > HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, > HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, > HDFS-12574.009.patch, HDFS-12574.010.branch-2.8.patch, > HDFS-12574.010.branch-2.patch, HDFS-12574.010.patch, > HDFS-12574.011.branch-2.8.patch, HDFS-12574.011.branch-2.patch, > HDFS-12574.012.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344221#comment-16344221 ] Hudson commented on HDFS-12574: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13578 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13578/]) HDFS-12574. Add CryptoInputStream to WebHdfsFileSystem read call. (kihwal: rev fde95d463c3123b315b3d07cb5b7b7dc19f7cb73) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/web/resources/TestWebHdfsDataLocality.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsContentLength.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/HdfsKMSUtil.java > Add CryptoInputStream to WebHdfsFileSystem read call. 
> - > > Key: HDFS-12574 > URL: https://issues.apache.org/jira/browse/HDFS-12574 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Major > Fix For: 3.1.0, 3.0.1 > > Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, > HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, > HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, > HDFS-12574.009.patch, HDFS-12574.010.branch-2.8.patch, > HDFS-12574.010.branch-2.patch, HDFS-12574.010.patch, > HDFS-12574.011.branch-2.8.patch, HDFS-12574.011.branch-2.patch, > HDFS-12574.012.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
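The change integrated above wraps the WebHDFS read stream in a CryptoInputStream for files inside encryption zones. As a rough illustration of that stream-wrapping pattern only, the sketch below uses plain JDK javax.crypto classes rather than Hadoop's CryptoInputStream, and the zeroed key/IV are purely illustrative, not how HdfsKMSUtil obtains key material:

```java
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.io.ByteArrayInputStream;
import java.io.InputStream;

public class ReadDecryptSketch {
    // Wrap a raw stream with a decrypting stream -- the same pattern the
    // WebHDFS read path follows when it wraps the response body for
    // encryption-zone files (here with JDK AES/CTR, not Hadoop's codec).
    static InputStream decrypting(InputStream raw, byte[] key, byte[] iv) throws Exception {
        Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
        c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        return new CipherInputStream(raw, c);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = new byte[16]; // illustrative all-zero key/IV
        byte[] iv = new byte[16];
        Cipher enc = Cipher.getInstance("AES/CTR/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        byte[] cipherText = enc.doFinal("hello".getBytes("UTF-8"));

        // Reading through the wrapper yields the original plaintext.
        InputStream in = decrypting(new ByteArrayInputStream(cipherText), key, iv);
        byte[] buf = new byte[5];
        int off = 0, n;
        while (off < buf.length && (n = in.read(buf, off, buf.length - off)) != -1) {
            off += n;
        }
        System.out.println(new String(buf, 0, off, "UTF-8"));
    }
}
```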
[jira] [Comment Edited] (HDFS-13062) Provide support for JN to use separate journal disk per namespace
[ https://issues.apache.org/jira/browse/HDFS-13062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344209#comment-16344209 ] Bharat Viswanadham edited comment on HDFS-13062 at 1/29/18 11:49 PM: - Hi [~hanishakoneru] and [~arpitagarwal] Thanks for the review and the offline discussion. Updated the patch to address the review comments. > Provide support for JN to use separate journal disk per namespace > - > > Key: HDFS-13062 > URL: https://issues.apache.org/jira/browse/HDFS-13062 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13062.00.patch, HDFS-13062.01.patch, > HDFS-13062.02.patch, HDFS-13062.03.patch > > > In Federated HA setup, provide support for separate journal disk for each > namespace. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13062) Provide support for JN to use separate journal disk per namespace
[ https://issues.apache.org/jira/browse/HDFS-13062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344209#comment-16344209 ] Bharat Viswanadham commented on HDFS-13062: --- Hi [~hanishakoneru] and [~arpitagarwal] Thanks for the review and the offline discussion. Updated the patch to address the review comments. > Provide support for JN to use separate journal disk per namespace > - > > Key: HDFS-13062 > URL: https://issues.apache.org/jira/browse/HDFS-13062 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13062.00.patch, HDFS-13062.01.patch, > HDFS-13062.02.patch, HDFS-13062.03.patch > > > In Federated HA setup, provide support for separate journal disk for each > namespace. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13062) Provide support for JN to use separate journal disk per namespace
[ https://issues.apache.org/jira/browse/HDFS-13062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13062: -- Attachment: HDFS-13062.03.patch > Provide support for JN to use separate journal disk per namespace > - > > Key: HDFS-13062 > URL: https://issues.apache.org/jira/browse/HDFS-13062 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13062.00.patch, HDFS-13062.01.patch, > HDFS-13062.02.patch, HDFS-13062.03.patch > > > In Federated HA setup, provide support for separate journal disk for each > namespace. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344208#comment-16344208 ]

genericqa commented on HDFS-12574:

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 26s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 5 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 37s | Maven dependency ordering for branch |
| +1 | mvninstall | 20m 5s | trunk passed |
| +1 | compile | 2m 1s | trunk passed |
| +1 | checkstyle | 1m 3s | trunk passed |
| +1 | mvnsite | 2m 9s | trunk passed |
| +1 | shadedclient | 14m 15s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 2s | trunk passed |
| +1 | javadoc | 1m 35s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 9s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 10s | the patch passed |
| +1 | compile | 2m 7s | the patch passed |
| +1 | javac | 2m 7s | the patch passed |
| -0 | checkstyle | 0m 55s | hadoop-hdfs-project: The patch generated 2 new + 354 unchanged - 2 fixed = 356 total (was 356) |
| +1 | mvnsite | 2m 7s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 | shadedclient | 11m 37s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 3s | the patch passed |
| +1 | javadoc | 1m 16s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 1m 22s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 89m 10s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 23s | The patch does not generate ASF License warnings. |
| | | 160m 5s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestHDFSFileSystemContract |
| | hadoop.hdfs.tools.TestDFSAdmin |
| | hadoop.hdfs.TestRollingUpgradeRollback |
| | hadoop.hdfs.TestDFSOutputStream |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12574 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12908202/HDFS-12574.012.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 6e81ff9cd2bf 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool |
[jira] [Commented] (HDFS-12528) Short-circuit reads unnecessarily disabled for a long time
[ https://issues.apache.org/jira/browse/HDFS-12528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344202#comment-16344202 ]

genericqa commented on HDFS-12528:

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 9s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 9s | Maven dependency ordering for branch |
| +1 | mvninstall | 17m 32s | trunk passed |
| +1 | compile | 1m 39s | trunk passed |
| +1 | checkstyle | 0m 46s | trunk passed |
| +1 | mvnsite | 1m 45s | trunk passed |
| +1 | shadedclient | 12m 32s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 28s | trunk passed |
| +1 | javadoc | 1m 20s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 8s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 38s | the patch passed |
| +1 | compile | 1m 36s | the patch passed |
| +1 | javac | 1m 36s | the patch passed |
| +1 | checkstyle | 0m 44s | hadoop-hdfs-project: The patch generated 0 new + 124 unchanged - 1 fixed = 124 total (was 125) |
| +1 | mvnsite | 1m 40s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 11m 6s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 44s | the patch passed |
| +1 | javadoc | 1m 17s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 1m 25s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 95m 58s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 26s | The patch does not generate ASF License warnings. |
| | | 158m 11s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestWriteReadStripedFile |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
| | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| | hadoop.hdfs.TestMiniDFSCluster |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
| | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDF
[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344189#comment-16344189 ]

Kihwal Lee commented on HDFS-12574:
Thanks, [~xiaochen]. I just committed it to trunk and branch-3.0. I will make the same minor tweaks for the other branch patches.

> Add CryptoInputStream to WebHdfsFileSystem read call.
> Key: HDFS-12574
> URL: https://issues.apache.org/jira/browse/HDFS-12574
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: encryption, kms, webhdfs
> Reporter: Rushabh S Shah
> Assignee: Rushabh S Shah
> Priority: Major
> Fix For: 3.1.0, 3.0.1
> Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, HDFS-12574.009.patch, HDFS-12574.010.branch-2.8.patch, HDFS-12574.010.branch-2.patch, HDFS-12574.010.patch, HDFS-12574.011.branch-2.8.patch, HDFS-12574.011.branch-2.patch, HDFS-12574.012.patch
[jira] [Updated] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kihwal Lee updated HDFS-12574:
Fix Version/s: 3.1.0, 3.0.1
[jira] [Commented] (HDFS-13062) Provide support for JN to use separate journal disk per namespace
[ https://issues.apache.org/jira/browse/HDFS-13062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344182#comment-16344182 ]

genericqa commented on HDFS-13062:

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 16m 16s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 18m 11s | trunk passed |
| +1 | compile | 0m 57s | trunk passed |
| +1 | checkstyle | 0m 40s | trunk passed |
| +1 | mvnsite | 1m 9s | trunk passed |
| +1 | shadedclient | 11m 48s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 59s | trunk passed |
| +1 | javadoc | 1m 0s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 1m 7s | the patch passed |
| +1 | compile | 0m 52s | the patch passed |
| +1 | javac | 0m 52s | the patch passed |
| -0 | checkstyle | 0m 37s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 4 new + 17 unchanged - 0 fixed = 21 total (was 17) |
| +1 | mvnsite | 0m 59s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 10m 52s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 16s | the patch passed |
| +1 | javadoc | 0m 57s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 149m 20s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 27s | The patch does not generate ASF License warnings. |
| | | 219m 17s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| | hadoop.hdfs.server.datanode.TestDataNodeMXBean |
| | hadoop.hdfs.TestDecommission |
| | hadoop.hdfs.TestReconstructStripedFile |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
| | hadoop.hdfs.TestReadStripedFileWithDNFailure |
| | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
| | hadoop.hdfs.TestDFSShell |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation |
| | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
| | hadoop.hdfs.TestMaintenanceState |
| | hadoop.hdfs.server.datanode.TestDataNodeUUID |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13062 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12908192/HDFS-13062.02.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient
[jira] [Commented] (HDFS-10285) Storage Policy Satisfier in Namenode
[ https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344176#comment-16344176 ]

Daryn Sharp commented on HDFS-10285:
Still working my way through it. Here are comments/questions so far.

*BlockManager*
Shouldn’t spsMode be volatile? Although I question why it’s here at all.

*BPServiceActor*
Is it actually sending back the moved blocks? Aren’t IBRs sufficient?

*DataNode*
Why isn’t this just a block transfer? How is transferring between DNs any different than across storages?

*DatanodeDescriptor*
Why use a synchronized linked list to offer/poll instead of a BlockingQueue?

*DatanodeManager*
I know it’s configurable, but realistically, when would you ever want to give storage movement tasks equal footing with under-replication? Is there really a use case for not valuing durability?
Adding {{getDatanodeStorageReport}} is concerning. {{getDatanodeListForReport}} is already a very bad method that should be avoided for anything but jmx, and even then it’s a concern. I eliminated calls to it years ago. All it takes is a nscd/dns hiccup and you’re left holding the fsn lock for an excessive length of time. Beyond that, the response is going to be pretty large, and tagging all the storage reports is not going to be cheap.
{{verifyTargetDatanodeHasSpaceForScheduling}}: does it really need the namesystem lock? Can’t {{DatanodeDescriptor#chooseStorage4Block}} synchronize on its {{storageMap}}?

*DFSUtil*
{{DFSUtil.removeOverlapBetweenStorageTypes}} and {{DFSUtil.getSPSWorkMultiplier}}: these aren’t generally useful methods, so why are they in {{DFSUtil}}? Why aren’t they in the only calling class, {{StoragePolicySatisfier}}?

*BlockManager*
Adding SPS methods to this class implies an unexpected coupling of the SPS service to the block manager. Please move them out to prove it’s not tightly coupled.

*FSDirXAttrOp*
Not fond of the SPS queue updates being edge triggered by xattr changes. Food for thought: why add more rpc methods if the client can just twiddle the xattr?

*FSNamesystem / NamenodeProtocolTranslatorPB*
Most of the api changes appear unnecessary. No need for the new {{getFileInfo}} when the existing {{getLocatedFileInfo}} appears to do exactly what the new method does. No need for the new {{getFilePath}}; it’s only used by {{IntraSPSNameNodeContext#getFileInfo}} to get the path for an inode number, followed by calling the new {{getFileInfo}} cited above. {{IntraSPSNameNodeContext#getFileInfo}} swallows all IOEs, based on the assumption that any and all IOEs mean FNF, which probably isn’t the intention during rpc exceptions. Should be able to replace it all with {{getLocatedFileInfo("/.reserved/.inodes/XXX", false)}}, which avoids changing the public apis.

*HdfsServerConstants*
The xattr is called {{user.hdfs.sps.xattr}}. Why does the xattr name actually contain the word “xattr”?

*NameNode*
Super trivial, but using the plural pronoun “we” in this exception message is odd. Changing the value isn’t a joint activity. :)
bq. For enabling or disabling storage policy satisfier, we must pass either none/internal/external string value only

*StoragePolicySatisfier*
It appears to make back-to-back calls to {{hasLowRedundancyBlocks}} and {{getFileInfo}} for every file. Haven’t fully grokked the code, but if low redundancy is not the common case, then it shouldn’t be called unless/until needed. It looks like files that are under replicated are re-queued again?
Appears to be calling {{getStoragePolicy}} for every file. It’s not like the byte values change all the time; why not call and cache all of them via {{FSN#getStoragePolicies}}?
Appears to be calling {{getLiveDatanodeStorageReport}} for every file. As mentioned earlier, this is NOT cheap. The SPS should be able to operate on a fuzzy/cached state of the world. Then it gets another datanode report to determine the number of live nodes to decide if it should sleep before processing the next path. The number of nodes from the prior cached view of the world should suffice.

> Storage Policy Satisfier in Namenode
> Key: HDFS-10285
> URL: https://issues.apache.org/jira/browse/HDFS-10285
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: datanode, namenode
> Affects Versions: HDFS-10285
> Reporter: Uma Maheswara Rao G
> Assignee: Uma Maheswara Rao G
> Priority: Major
> Attachments: HDFS-10285-consolidated-merge-patch-00.patch, HDFS-10285-consolidated-merge-patch-01.patch, HDFS-10285-consolidated-merge-patch-02.patch, HDFS-10285-consolidated-merge-patch-03.patch, HDFS-10285-consolidated-merge-patch-04.patch, HDFS-10285-consolidated-merge-patch-05.patch, HDFS-SPS-TestReport-20170708.pdf, SPS Modularization.pdf, Storage-Polic
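On the DatanodeDescriptor question in the review above (a synchronized linked list used for offer/poll), a minimal sketch of the BlockingQueue alternative follows. This is illustrative only; `BlockMovementTaskQueue` and `BlockMovementTask` are hypothetical names, not the actual HDFS classes.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch: a thread-safe work queue for block-movement tasks,
// standing in for the hand-synchronized linked list questioned in the review.
class BlockMovementTaskQueue {
    static final class BlockMovementTask {
        final long blockId;
        BlockMovementTask(long blockId) { this.blockId = blockId; }
    }

    // LinkedBlockingQueue gives the same offer/poll semantics without
    // external synchronization, plus blocking take/put variants if needed.
    private final BlockingQueue<BlockMovementTask> queue =
        new LinkedBlockingQueue<>();

    boolean add(BlockMovementTask task) {
        return queue.offer(task);   // non-blocking, thread-safe insert
    }

    BlockMovementTask next() {
        return queue.poll();        // non-blocking; returns null when empty
    }

    public static void main(String[] args) {
        BlockMovementTaskQueue q = new BlockMovementTaskQueue();
        q.add(new BlockMovementTask(42L));
        System.out.println(q.next().blockId); // prints 42
    }
}
```

The point of the suggestion is that the concurrency contract lives in the queue type itself rather than in callers remembering to synchronize.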
[jira] [Commented] (HDFS-13061) SaslDataTransferClient#checkTrustAndSend should not trust a partially trusted channel
[ https://issues.apache.org/jira/browse/HDFS-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344174#comment-16344174 ]

Ajay Kumar commented on HDFS-13061:
Updated the debug message in patch v2.

> SaslDataTransferClient#checkTrustAndSend should not trust a partially trusted channel
> Key: HDFS-13061
> URL: https://issues.apache.org/jira/browse/HDFS-13061
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Xiaoyu Yao
> Assignee: Ajay Kumar
> Priority: Major
> Attachments: HDFS-13061.000.patch, HDFS-13061.001.patch, HDFS-13061.002.patch
>
> HDFS-5910 introduces encryption negotiation between client and server based on a customizable TrustedChannelResolver class. The TrustedChannelResolver is invoked on both the client and server side. If the resolver indicates that the channel is trusted, then the data transfer will not be encrypted even if dfs.encrypt.data.transfer is set to true.
> SaslDataTransferClient#checkTrustAndSend asks the channel resolver whether the client and server addresses are trusted, respectively. It decides the channel is untrusted only if both the client and the server are not trusted before enforcing encryption. *This ticket is opened to change it to not trust (and to encrypt) if either the client or the server address is not trusted.*
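The change this ticket describes boils down to a boolean rule. A minimal sketch follows; the class and method names are hypothetical stand-ins for the TrustedChannelResolver results, not the actual SaslDataTransferClient code.

```java
// Illustrative sketch of the trust decision described in HDFS-13061.
class ChannelTrust {
    // Old behavior: the channel counts as trusted unless BOTH peers
    // are untrusted (untrusted = !client && !server).
    static boolean trustedOld(boolean clientTrusted, boolean serverTrusted) {
        return clientTrusted || serverTrusted;
    }

    // Proposed behavior: distrust (and therefore encrypt) if EITHER
    // peer is untrusted.
    static boolean trustedNew(boolean clientTrusted, boolean serverTrusted) {
        return clientTrusted && serverTrusted;
    }

    public static void main(String[] args) {
        // A half-trusted channel: trusted under the old rule,
        // encrypted under the proposed rule.
        System.out.println(trustedOld(true, false)); // prints true
        System.out.println(trustedNew(true, false)); // prints false
    }
}
```

The practical effect is that a channel where only one side resolves as trusted now falls back to encrypted data transfer instead of plaintext.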
[jira] [Updated] (HDFS-13061) SaslDataTransferClient#checkTrustAndSend should not trust a partially trusted channel
[ https://issues.apache.org/jira/browse/HDFS-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ajay Kumar updated HDFS-13061:
Attachment: HDFS-13061.002.patch
[jira] [Commented] (HDFS-10285) Storage Policy Satisfier in Namenode
[ https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344153#comment-16344153 ]

genericqa commented on HDFS-10285:

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 19s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 27 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 9s | Maven dependency ordering for branch |
| +1 | mvninstall | 15m 16s | trunk passed |
| +1 | compile | 1m 30s | trunk passed |
| +1 | checkstyle | 1m 11s | trunk passed |
| +1 | mvnsite | 1m 56s | trunk passed |
| +1 | shadedclient | 8m 48s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 4s | trunk passed |
| +1 | javadoc | 1m 15s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 8s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 33s | the patch passed |
| +1 | compile | 1m 31s | the patch passed |
| +1 | cc | 1m 31s | the patch passed |
| +1 | javac | 1m 31s | the patch passed |
| -0 | checkstyle | 0m 57s | hadoop-hdfs-project: The patch generated 19 new + 2093 unchanged - 3 fixed = 2112 total (was 2096) |
| +1 | mvnsite | 1m 32s | the patch passed |
| +1 | shellcheck | 0m 23s | There were no new shellcheck issues. |
| +1 | shelldocs | 0m 15s | There were no new shelldocs issues. |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 10m 19s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 20s | the patch passed |
| +1 | javadoc | 1m 11s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 1m 21s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 132m 37s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 22s | The patch does not generate ASF License warnings. |
| | | 190m 33s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
| | hadoop.hdfs.web.TestWebHdfsTimeouts |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
| | hadoop
[jira] [Commented] (HDFS-13043) RBF: Expose the state of the Routers in the federation
[ https://issues.apache.org/jira/browse/HDFS-13043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344142#comment-16344142 ]

genericqa commented on HDFS-13043:

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 10m 15s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 15m 15s | trunk passed |
| +1 | compile | 0m 47s | trunk passed |
| +1 | checkstyle | 0m 32s | trunk passed |
| +1 | mvnsite | 0m 54s | trunk passed |
| +1 | shadedclient | 9m 54s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 47s | trunk passed |
| +1 | javadoc | 0m 52s | trunk passed |
|| || || || Patch Compile Tests ||
| -1 | mvninstall | 0m 46s | hadoop-hdfs in the patch failed. |
| -1 | compile | 0m 49s | hadoop-hdfs in the patch failed. |
| -1 | cc | 0m 49s | hadoop-hdfs in the patch failed. |
| -1 | javac | 0m 49s | hadoop-hdfs in the patch failed. |
| +1 | checkstyle | 0m 31s | the patch passed |
| -1 | mvnsite | 0m 48s | hadoop-hdfs in the patch failed. |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| -1 | shadedclient | 3m 26s | patch has errors when building and testing our client artifacts. |
| -1 | findbugs | 0m 19s | hadoop-hdfs in the patch failed. |
| +1 | javadoc | 0m 48s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 0m 47s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 19s | The patch does not generate ASF License warnings. |
| | | 48m 42s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13043 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12908206/HDFS-13043.003.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
| uname | Linux 6b5342ed24f4 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7fd287b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| mvninstall | https://builds.apache.org/job/PreCommit-HDFS-Build/22870/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt |
| compile | https://builds.apache.org/job/PreCommit-HDFS-Build/22870/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt |
| cc | https://builds.apache.org/job/PreCommit-HDFS-Build/22870/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt |
| javac | https://builds.apache.org/jo
[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344114#comment-16344114 ] Xiao Chen commented on HDFS-12574: -- Thank you [~kihwal]! +1 on patch 12 > Add CryptoInputStream to WebHdfsFileSystem read call. > - > > Key: HDFS-12574 > URL: https://issues.apache.org/jira/browse/HDFS-12574 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Major > Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, > HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, > HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, > HDFS-12574.009.patch, HDFS-12574.010.branch-2.8.patch, > HDFS-12574.010.branch-2.patch, HDFS-12574.010.patch, > HDFS-12574.011.branch-2.8.patch, HDFS-12574.011.branch-2.patch, > HDFS-12574.012.patch > >
[jira] [Updated] (HDFS-11187) Optimize disk access for last partial chunk checksum of Finalized replica
[ https://issues.apache.org/jira/browse/HDFS-11187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-11187: --- Attachment: HDFS-11187.004.patch > Optimize disk access for last partial chunk checksum of Finalized replica > - > > Key: HDFS-11187 > URL: https://issues.apache.org/jira/browse/HDFS-11187 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HDFS-11187.001.patch, HDFS-11187.002.patch, > HDFS-11187.003.patch, HDFS-11187.004.patch > > > The patch at HDFS-11160 ensures BlockSender reads the correct version of > the metafile when there are concurrent writers. > However, the implementation is not optimal, because it must always read the > last partial chunk checksum from disk while holding the FsDatasetImpl lock for > every reader. It is possible to optimize this by keeping an up-to-date > version of the last partial chunk checksum in memory and reducing disk access. > I am separating the optimization into a new jira, because maintaining the > state of the in-memory checksum requires a lot more work.
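The caching idea in the issue description can be sketched roughly as below. This is an illustrative sketch only, not the actual HDFS-11187 patch: the class name and the blockId key are hypothetical, and the real change lives inside FsDatasetImpl/ReplicaInfo. The point is that the writer records the last partial chunk checksum once at finalize time, so readers can serve it from memory instead of re-reading the meta file while holding the dataset lock.

```java
// Hypothetical sketch of keeping the last partial chunk checksum in memory
// (HDFS-11187 idea); NOT the actual FsDatasetImpl implementation.
import java.util.Arrays;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LastChunkChecksumCache {
    // blockId -> checksum bytes of the last partial chunk (illustrative key)
    private final Map<Long, byte[]> cache = new ConcurrentHashMap<>();

    // Writer path: record the checksum once, when the replica is finalized.
    public void onFinalize(long blockId, byte[] lastChunkChecksum) {
        cache.put(blockId, lastChunkChecksum.clone());
    }

    // Reader path: a null result means the caller must fall back to reading
    // the meta file from disk, as BlockSender does today.
    public byte[] get(long blockId) {
        byte[] c = cache.get(blockId);
        return c == null ? null : c.clone();
    }

    public static void main(String[] args) {
        LastChunkChecksumCache cache = new LastChunkChecksumCache();
        cache.onFinalize(42L, new byte[] {1, 2, 3});
        System.out.println(Arrays.toString(cache.get(42L)));
    }
}
```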
[jira] [Commented] (HDFS-11187) Optimize disk access for last partial chunk checksum of Finalized replica
[ https://issues.apache.org/jira/browse/HDFS-11187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344113#comment-16344113 ] Wei-Chiu Chuang commented on HDFS-11187: Posted v004 patch to address the compilation problem. > Optimize disk access for last partial chunk checksum of Finalized replica > - > > Key: HDFS-11187 > URL: https://issues.apache.org/jira/browse/HDFS-11187 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HDFS-11187.001.patch, HDFS-11187.002.patch, > HDFS-11187.003.patch, HDFS-11187.004.patch > > > The patch at HDFS-11160 ensures BlockSender reads the correct version of > the metafile when there are concurrent writers. > However, the implementation is not optimal, because it must always read the > last partial chunk checksum from disk while holding the FsDatasetImpl lock for > every reader. It is possible to optimize this by keeping an up-to-date > version of the last partial chunk checksum in memory and reducing disk access. > I am separating the optimization into a new jira, because maintaining the > state of the in-memory checksum requires a lot more work.
[jira] [Commented] (HDFS-13080) Ozone: Make finalhash in ContainerInfo of StorageContainerDatanodeProtocol.proto optional
[ https://issues.apache.org/jira/browse/HDFS-13080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344100#comment-16344100 ] genericqa commented on HDFS-13080: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 44s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 20s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 33m 10s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 43s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}151m 36s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 28s{color} | {color:red} The patch generated 1 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}220m 11s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync | | | hadoop.ozone.web.client.TestKeysRatis | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b | | JIRA Issue | HDFS-13080 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12908182/HDFS-13080-HDFS-7240.000.patch | | Optional Tests | asflicense compile cc mvnsite javac unit | | uname | Linux 099f8e1b391f 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / d069734 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/22861/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/22861/testReport/ | | asflicense | https://builds.apache.org/job/PreCommit-HDFS-Build/22861/artifact/out/patch-asflicense-problems.txt | | Max. process+thread count | 2879 (vs. ulimit of 5000) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/22861/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Ozone: Make finalhash in ContainerInfo of > StorageContainerDatanodeProtocol.proto optional > ---
[jira] [Commented] (HDFS-11187) Optimize disk access for last partial chunk checksum of Finalized replica
[ https://issues.apache.org/jira/browse/HDFS-11187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344070#comment-16344070 ] Wei-Chiu Chuang commented on HDFS-11187: Sorry, the code does not compile against the latest trunk. Fixing that. > Optimize disk access for last partial chunk checksum of Finalized replica > - > > Key: HDFS-11187 > URL: https://issues.apache.org/jira/browse/HDFS-11187 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HDFS-11187.001.patch, HDFS-11187.002.patch, > HDFS-11187.003.patch > > > The patch at HDFS-11160 ensures BlockSender reads the correct version of > the metafile when there are concurrent writers. > However, the implementation is not optimal, because it must always read the > last partial chunk checksum from disk while holding the FsDatasetImpl lock for > every reader. It is possible to optimize this by keeping an up-to-date > version of the last partial chunk checksum in memory and reducing disk access. > I am separating the optimization into a new jira, because maintaining the > state of the in-memory checksum requires a lot more work.
[jira] [Updated] (HDFS-13061) SaslDataTransferClient#checkTrustAndSend should not trust a partially trusted channel
[ https://issues.apache.org/jira/browse/HDFS-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-13061: -- Attachment: HDFS-13061.001.patch > SaslDataTransferClient#checkTrustAndSend should not trust a partially trusted > channel > - > > Key: HDFS-13061 > URL: https://issues.apache.org/jira/browse/HDFS-13061 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Ajay Kumar >Priority: Major > Attachments: HDFS-13061.000.patch, HDFS-13061.001.patch > > > HDFS-5910 introduces encryption negotiation between client and server based > on a customizable TrustedChannelResolver class. The TrustedChannelResolver is > invoked on both the client and server side. If the resolver indicates that the > channel is trusted, then the data transfer will not be encrypted even if > dfs.encrypt.data.transfer is set to true. > SaslDataTransferClient#checkTrustAndSend asks the channel resolver whether the > client and server addresses are trusted, respectively. It decides the channel > is untrusted (and enforces encryption) only if both the client and the server > are untrusted. *This ticket is opened to change it to not trust (and encrypt) > if either the client or server address is not trusted.*
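The boolean change the ticket describes can be sketched in a few lines. This is a hypothetical illustration of the decision logic only, not the real SaslDataTransferClient code: before, a channel counted as trusted unless both endpoints were untrusted; after, it counts as trusted only when both endpoints are trusted, so a partially trusted channel gets encrypted.

```java
// Hypothetical sketch of the HDFS-13061 trust decision; NOT the actual
// SaslDataTransferClient#checkTrustAndSend implementation.
public class ChannelTrust {
    // Behaviour described as the bug: untrusted only if BOTH sides untrusted.
    static boolean trustedBefore(boolean clientTrusted, boolean serverTrusted) {
        return clientTrusted || serverTrusted;
    }

    // Proposed behaviour: trusted (skip encryption) only if BOTH sides trusted.
    static boolean trustedAfter(boolean clientTrusted, boolean serverTrusted) {
        return clientTrusted && serverTrusted;
    }

    public static void main(String[] args) {
        // Partially trusted channel: old logic trusts it, new logic encrypts.
        System.out.println(trustedBefore(true, false)); // true
        System.out.println(trustedAfter(true, false));  // false
    }
}
```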
[jira] [Updated] (HDFS-13061) SaslDataTransferClient#checkTrustAndSend should not trust a partially trusted channel
[ https://issues.apache.org/jira/browse/HDFS-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-13061: -- Attachment: (was: HDFS-13061.001.patch) > SaslDataTransferClient#checkTrustAndSend should not trust a partially trusted > channel > - > > Key: HDFS-13061 > URL: https://issues.apache.org/jira/browse/HDFS-13061 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Ajay Kumar >Priority: Major > Attachments: HDFS-13061.000.patch > > > HDFS-5910 introduces encryption negotiation between client and server based > on a customizable TrustedChannelResolver class. The TrustedChannelResolver is > invoked on both the client and server side. If the resolver indicates that the > channel is trusted, then the data transfer will not be encrypted even if > dfs.encrypt.data.transfer is set to true. > SaslDataTransferClient#checkTrustAndSend asks the channel resolver whether the > client and server addresses are trusted, respectively. It decides the channel > is untrusted (and enforces encryption) only if both the client and the server > are untrusted. *This ticket is opened to change it to not trust (and encrypt) > if either the client or server address is not trusted.*
[jira] [Commented] (HDFS-13061) SaslDataTransferClient#checkTrustAndSend should not trust a partially trusted channel
[ https://issues.apache.org/jira/browse/HDFS-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344058#comment-16344058 ] Ajay Kumar commented on HDFS-13061: --- [~xyao], Updated patch with suggestions. Also addressed the checkstyle issue from the Jenkins build. > SaslDataTransferClient#checkTrustAndSend should not trust a partially trusted > channel > - > > Key: HDFS-13061 > URL: https://issues.apache.org/jira/browse/HDFS-13061 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Ajay Kumar >Priority: Major > Attachments: HDFS-13061.000.patch, HDFS-13061.001.patch > > > HDFS-5910 introduces encryption negotiation between client and server based > on a customizable TrustedChannelResolver class. The TrustedChannelResolver is > invoked on both the client and server side. If the resolver indicates that the > channel is trusted, then the data transfer will not be encrypted even if > dfs.encrypt.data.transfer is set to true. > SaslDataTransferClient#checkTrustAndSend asks the channel resolver whether the > client and server addresses are trusted, respectively. It decides the channel > is untrusted (and enforces encryption) only if both the client and the server > are untrusted. *This ticket is opened to change it to not trust (and encrypt) > if either the client or server address is not trusted.*
[jira] [Updated] (HDFS-13061) SaslDataTransferClient#checkTrustAndSend should not trust a partially trusted channel
[ https://issues.apache.org/jira/browse/HDFS-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-13061: -- Attachment: HDFS-13061.001.patch > SaslDataTransferClient#checkTrustAndSend should not trust a partially trusted > channel > - > > Key: HDFS-13061 > URL: https://issues.apache.org/jira/browse/HDFS-13061 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Ajay Kumar >Priority: Major > Attachments: HDFS-13061.000.patch, HDFS-13061.001.patch > > > HDFS-5910 introduces encryption negotiation between client and server based > on a customizable TrustedChannelResolver class. The TrustedChannelResolver is > invoked on both the client and server side. If the resolver indicates that the > channel is trusted, then the data transfer will not be encrypted even if > dfs.encrypt.data.transfer is set to true. > SaslDataTransferClient#checkTrustAndSend asks the channel resolver whether the > client and server addresses are trusted, respectively. It decides the channel > is untrusted (and enforces encryption) only if both the client and the server > are untrusted. *This ticket is opened to change it to not trust (and encrypt) > if either the client or server address is not trusted.*
[jira] [Commented] (HDFS-12997) Move logging to slf4j in BlockPoolSliceStorage and Storage
[ https://issues.apache.org/jira/browse/HDFS-12997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344057#comment-16344057 ] genericqa commented on HDFS-12997: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 25s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 7s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs generated 0 new + 392 unchanged - 2 fixed = 392 total (was 394) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 45s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 152 unchanged - 7 fixed = 153 total (was 159) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 59s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 16s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}165m 51s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.TestDFSClientRetries | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-12997 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12908188/HDFS-12997.005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux cae7d8d7dea5 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7fd287b | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/22863/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://b
[jira] [Commented] (HDFS-13076) [SPS]: Merge work for HDFS-10285
[ https://issues.apache.org/jira/browse/HDFS-13076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344047#comment-16344047 ] genericqa commented on HDFS-13076: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 46s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 27 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 33s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 16s{color} | {color:orange} hadoop-hdfs-project: The patch generated 16 new + 2093 unchanged - 3 fixed = 2109 total (was 2096) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 22s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 14s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 19s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 25s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 43s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}158m 13s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 | | | hadoop.tools.TestHdfsConfigFields | | | hadoop.
[jira] [Updated] (HDFS-13043) RBF: Expose the state of the Routers in the federation
[ https://issues.apache.org/jira/browse/HDFS-13043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13043: --- Attachment: HDFS-13043.003.patch > RBF: Expose the state of the Routers in the federation > -- > > Key: HDFS-13043 > URL: https://issues.apache.org/jira/browse/HDFS-13043 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-13043.000.patch, HDFS-13043.001.patch, > HDFS-13043.002.patch, HDFS-13043.003.patch, router-info.png > > > The Router should expose the state of the other Routers in the federation > through a user UI.
[jira] [Commented] (HDFS-13053) Track time to process packet in Datanode
[ https://issues.apache.org/jira/browse/HDFS-13053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344044#comment-16344044 ] Íñigo Goiri commented on HDFS-13053: [~hanishakoneru] we can make this backward compatible. The repeated field can be optional; the approach in [^HDFS-13053.000.patch] should be backwards compatible: * If an old client gets a message with the times, that part is ignored; protobuf supports this (I've tested this and I believe [~asuresh] mentioned this some time ago). * If a new client gets a message without the times, the code already checks that. > Track time to process packet in Datanode > > > Key: HDFS-13053 > URL: https://issues.apache.org/jira/browse/HDFS-13053 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Íñigo Goiri >Assignee: Pulkit Misra >Priority: Minor > Attachments: HDFS-13053.000.patch > > > We should track the time that each datanode takes to process a packet. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
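The compatibility argument above follows from proto2 wire semantics: an old parser skips fields it does not know about, and a new reader must cope with a message in which the field was never populated. A hypothetical fragment (the message name, field name, and tag number are illustrative, not the actual HDFS-13053 change):

```proto
// proto2 sketch -- illustrative only, not the real HDFS ack message.
message PacketAckProto {
  // Existing fields keep their tag numbers so old and new clients agree.
  // New per-datanode processing-time field: old clients skip it as an
  // unknown field; new clients must handle it being absent (count == 0).
  repeated uint64 processingNanos = 5;
}
```

Repeated fields are optional by construction in proto2: zero occurrences on the wire is a valid encoding, which is why no `required` compatibility problem arises here.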
[jira] [Created] (HDFS-13082) cookieverf mismatch error over NFS gateway on Linux
Dan Moraru created HDFS-13082: - Summary: cookieverf mismatch error over NFS gateway on Linux Key: HDFS-13082 URL: https://issues.apache.org/jira/browse/HDFS-13082 Project: Hadoop HDFS Issue Type: Bug Components: hdfs Affects Versions: 2.7.3 Reporter: Dan Moraru

Running 'ls' on some directories over an HDFS-NFS gateway sometimes fails to list the contents of those directories. Running 'ls' on those same directories mounted via FUSE works. The NFS gateway logs errors like the following:

2018-01-29 11:53:01,130 ERROR org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3: cookieverf mismatch. request cookieverf: 1513390944415 dir cookieverf: 1516920857335

Reviewing hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java suggested that these errors can be avoided by setting nfs.aix.compatibility.mode.enabled=true, and that is indeed the case. The documentation lists https://issues.apache.org/jira/browse/HDFS-6549 as a known issue, but also goes on to say that "regular, non-AIX clients should NOT enable AIX compatibility mode. The work-arounds implemented by AIX compatibility mode effectively disable safeguards to ensure that listing of directory contents via NFS returns consistent results, and that all data sent to the NFS server can be assured to have been committed." Server and client in this case are one and the same, running Scientific Linux 7.4.
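For reference, the workaround described in the report is a gateway-side setting. A sketch of how it might appear in the gateway's configuration (shown as an hdfs-site.xml fragment; apply only with the caveats the documentation states for non-AIX clients):

```xml
<!-- hdfs-site.xml on the NFS gateway host. Enabling this avoids the
     cookieverf mismatch errors, but per the docs it relaxes the
     consistency safeguards for directory listings on non-AIX clients. -->
<property>
  <name>nfs.aix.compatibility.mode.enabled</name>
  <value>true</value>
</property>
```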
[jira] [Commented] (HDFS-13044) RBF: Add a safe mode for the Router
[ https://issues.apache.org/jira/browse/HDFS-13044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344013#comment-16344013 ] genericqa commented on HDFS-13044: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 29s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 18s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 39s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}165m 55s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 | | | hadoop.hdfs.TestErasureCodingPolicies | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.TestRollingUpgrade | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 | | | hadoop.hdfs.TestDistributedFileSystem | | | hadoop.hdfs.TestLease | | | hadoop.hdfs.TestHdfsAdmin | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.TestReadStripedFileWithDecodingCorruptData | | | hadoop.hdfs.web.TestWebHDFS | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 | | | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.TestEncryptionZonesWithKMS | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 | \\ \\ || Sub
[jira] [Commented] (HDFS-13077) [SPS]: Fix review comments of external storage policy satisfier
[ https://issues.apache.org/jira/browse/HDFS-13077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344002#comment-16344002 ] genericqa commented on HDFS-13077: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} HDFS-10285 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 29s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 4s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} HDFS-10285 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 776 unchanged - 4 fixed = 776 total (was 780) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 58s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 34s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}161m 35s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-13077 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12908177/HDFS-13077-HDFS-10285-01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux bf23dcc2275d 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-10285 / 3b0deb6 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/22859/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/22859/testReport/ | | Max. process+thread count | 3864 (vs. ulimit of 5000) | | m
[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343985#comment-16343985 ] Kihwal Lee commented on HDFS-12574: --- Since Rushabh is on vacation and the review comments are minor, I've updated the patch. > Add CryptoInputStream to WebHdfsFileSystem read call. > - > > Key: HDFS-12574 > URL: https://issues.apache.org/jira/browse/HDFS-12574 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Major > Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, > HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, > HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, > HDFS-12574.009.patch, HDFS-12574.010.branch-2.8.patch, > HDFS-12574.010.branch-2.patch, HDFS-12574.010.patch, > HDFS-12574.011.branch-2.8.patch, HDFS-12574.011.branch-2.patch, > HDFS-12574.012.patch > >
[jira] [Commented] (HDFS-12528) Short-circuit reads unnecessarily disabled for a long time
[ https://issues.apache.org/jira/browse/HDFS-12528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343987#comment-16343987 ] Xiao Chen commented on HDFS-12528: -- [~GeLiXin], thanks for the detailed review. Sure, patch 5 uses a local var for better readability. bq. #3 Sure, feel free to open a new jira and work on it. Thanks! > Short-circuit reads unnecessarily disabled for a long time > -- > > Key: HDFS-12528 > URL: https://issues.apache.org/jira/browse/HDFS-12528 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client, performance >Affects Versions: 2.6.0 >Reporter: Andre Araujo >Assignee: Xiao Chen >Priority: Major > Attachments: HDFS-12528.000.patch, HDFS-12528.01.patch, > HDFS-12528.02.patch, HDFS-12528.03.patch, HDFS-12528.04.patch, > HDFS-12528.05.patch > > > We have scenarios where data ingestion makes use of the -appendToFile > operation to add new data to existing HDFS files. In these situations, we're > frequently running into the problem described below. > We're using Impala to query the HDFS data with short-circuit reads (SCR) > enabled. After each file read, Impala "unbuffer"'s the HDFS file to reduce > the memory footprint. In some cases, though, Impala still keeps the HDFS file > handle open for reuse. > The "unbuffer" call, however, causes the file's current block reader to be > closed, which makes the associated ShortCircuitReplica evictable from the > ShortCircuitCache. When the cluster is under load, this means that the > ShortCircuitReplica can be purged off the cache pretty fast, which closes the > file descriptor to the underlying storage file. > That means that when Impala re-reads the file it has to re-open the storage > files associated with the ShortCircuitReplica's that were evicted from the > cache. If there were no appends to those blocks, the re-open will succeed > without problems. 
If one block was appended since the ShortCircuitReplica was > created, the re-open will fail with the following error: > {code} > Meta file for BP-810388474-172.31.113.69-1499543341726:blk_1074012183_273087 > not found > {code} > This error is handled as an "unknown response" by the BlockReaderFactory [1], > which disables short-circuit reads for 10 minutes [2] for the client. > These 10 minutes without SCR can have a big performance impact for the client > operations. In this particular case ("Meta file not found") it would suffice > to return null without disabling SCR. This particular block read would fall > back to the normal, non-short-circuited, path and other SCR requests would > continue to work as expected. > It might also be interesting to be able to control how long SCR is disabled > for in the "unknown response" case. 10 minutes seems a bit too long and not > being able to change that is a problem. > [1] > https://github.com/apache/hadoop/blob/f67237cbe7bc48a1b9088e990800b37529f1db2a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java#L646 > [2] > https://github.com/apache/hadoop/blob/f67237cbe7bc48a1b9088e990800b37529f1db2a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/DomainSocketFactory.java#L97
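The eviction sequence described in the report (unbuffer closes the block reader, the replica becomes evictable, a purge under load closes its file descriptor, and the next read must re-open the storage file) can be sketched in miniature. This is an illustrative Python model, not the actual ShortCircuitCache code; all names are invented:

```python
# Illustrative sketch of the eviction behavior described above -- NOT the
# real ShortCircuitCache. An entry holds an open "file descriptor"; once
# its last reader releases it, the entry is evictable, and a purge under
# load closes the descriptor, so the next read must re-open the file.

class Replica:
    def __init__(self, block_id):
        self.block_id = block_id
        self.fd_open = True      # stands in for the open block/meta files

    def close(self):
        self.fd_open = False

class EvictableCache:
    def __init__(self):
        self._entries = {}       # block_id -> [Replica, ref_count]

    def acquire(self, block_id):
        entry = self._entries.get(block_id)
        if entry is None:
            # Re-open of the storage files; in the real code this is the
            # step that fails with "Meta file ... not found" if the block
            # was appended to after the original replica was created.
            entry = [Replica(block_id), 0]
            self._entries[block_id] = entry
        entry[1] += 1
        return entry[0]

    def release(self, block_id):
        self._entries[block_id][1] -= 1

    def purge_evictable(self):
        for bid in list(self._entries):
            replica, refs = self._entries[bid]
            if refs == 0:        # no active reader -> evictable
                replica.close()  # closes the underlying file descriptor
                del self._entries[bid]

cache = EvictableCache()
first = cache.acquire("blk_1074012183")
cache.release("blk_1074012183")            # "unbuffer": block reader closed
cache.purge_evictable()                    # cluster under load: replica purged
second = cache.acquire("blk_1074012183")   # re-read must re-open the file
assert second is not first and not first.fd_open and second.fd_open
```

The point of the sketch is that nothing in the client went wrong: eviction is normal cache behavior, and only the combination with an intervening append turns the re-open into an error.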
[jira] [Commented] (HDFS-12879) Ozone : add scm init command to document.
[ https://issues.apache.org/jira/browse/HDFS-12879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343986#comment-16343986 ] genericqa commented on HDFS-12879: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 39s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 37m 49s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 50s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 68m 56s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b | | JIRA Issue | HDFS-12879 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12908191/HDFS-12879-HDFS-7240.001.patch | | Optional Tests | asflicense mvnsite | | uname | Linux 258512a3e166 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / d069734 | | maven | version: Apache Maven 3.3.9 | | Max. process+thread count | 302 (vs. ulimit of 5000) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/22864/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Ozone : add scm init command to document. > - > > Key: HDFS-12879 > URL: https://issues.apache.org/jira/browse/HDFS-12879 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Chen Liang >Assignee: Rahul Pathak >Priority: Minor > Labels: newbie > Attachments: HDFS-12879-HDFS-7240.001.patch > > > When an Ozone cluster is initialized, before starting SCM through {{hdfs > --daemon start scm}}, the command {{hdfs scm -init}} needs to be called > first. But it seems this command is not documented. We should add a > note about this to the documentation.
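The startup order described in the issue, as a command sketch. The two commands are taken verbatim from the issue text; this is an ops fragment that assumes an HDFS-7240 (Ozone) build where the scm subcommand exists, and it can only be run against a deployed cluster:

```shell
# Initialize the Storage Container Manager metadata once, on first setup:
hdfs scm -init

# Then start the SCM daemon:
hdfs --daemon start scm
```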
[jira] [Updated] (HDFS-12528) Short-circuit reads unnecessarily disabled for a long time
[ https://issues.apache.org/jira/browse/HDFS-12528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-12528: - Attachment: HDFS-12528.05.patch
[jira] [Updated] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kihwal Lee updated HDFS-12574: -- Attachment: HDFS-12574.012.patch
[jira] [Commented] (HDFS-13079) Provide a config to start namenode in safemode state upto a certain transaction id
[ https://issues.apache.org/jira/browse/HDFS-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343967#comment-16343967 ] genericqa commented on HDFS-13079: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 48s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 45s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 6 new + 722 unchanged - 0 fixed = 728 total (was 722) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 19s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}119m 46s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}175m 54s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 | | | hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens | | | hadoop.hdfs.TestBlockStoragePolicy | | | hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM | | | hadoop.hdfs.TestEncryptedTransfer | | | hadoop.hdfs.tools.TestDFSZKFailoverController | | | hadoop.hdfs.TestFileLengthOnClusterRestart | | | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 | | | hadoop.hdfs.TestDFSUpgradeFromImage | | | hadoop.hdfs.server.namenode.TestStartup | | | hadoop.hdfs.server.namenode.TestNestedEncryptionZones | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy | | | hadoop.hdfs.server.namenode.TestFSImageWithXAttr | | | hadoop.hdfs.TestDFSClientFailover | | | hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits | | | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy | | | hadoop.hdfs.server.namenode.TestNameNodeRecovery | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.TestLeaseRecovery2 | | | hadoop.hdfs.server.namenode.TestFSImageWithAcl | | | hadoop.hdfs.serv
[jira] [Commented] (HDFS-13060) Adding a BlacklistBasedTrustedChannelResolver for TrustedChannelResolver
[ https://issues.apache.org/jira/browse/HDFS-13060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343952#comment-16343952 ] genericqa commented on HDFS-13060: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 30s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 22s{color} | {color:red} root in trunk failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 23s{color} | {color:red} root in trunk failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 23s{color} | {color:red} hadoop-common in trunk failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 22s{color} | {color:red} hadoop-hdfs in trunk failed. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 1m 27s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 22s{color} | {color:red} hadoop-common in trunk failed. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 22s{color} | {color:red} hadoop-hdfs in trunk failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 22s{color} | {color:red} hadoop-common in trunk failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 22s{color} | {color:red} hadoop-hdfs in trunk failed. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 23s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 22s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 23s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 23s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 22s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 22s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 0m 20s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 22s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 22s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 22s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 22s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 22s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 22s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:blue}0{color} | {color:blue} asflicense {color} | {color:blue} 0m 21s{color} | {color:blue} ASF License check generated
[jira] [Created] (HDFS-13081) Datanode#checkSecureConfig should check HTTPS and SASL encryption
Xiaoyu Yao created HDFS-13081: - Summary: Datanode#checkSecureConfig should check HTTPS and SASL encryption Key: HDFS-13081 URL: https://issues.apache.org/jira/browse/HDFS-13081 Project: Hadoop HDFS Issue Type: Bug Components: datanode, security Affects Versions: 3.0.0 Reporter: Xiaoyu Yao Assignee: Ajay Kumar Datanode#checkSecureConfig currently checks the following to determine if a secure datanode is enabled: # The server has bound to privileged ports for RPC and HTTP via SecureDataNodeStarter. # The configuration enables SASL on DataTransferProtocol and HTTPS (no plain HTTP) for the HTTP server. The SASL handshake guarantees authentication of the RPC server before a client transmits a secret, such as a block access token. Similarly, SSL guarantees authentication of the HTTP server before a client transmits a secret, such as a delegation token. For the 2nd case, HTTPS_ONLY means all the traffic between REST client/server will be encrypted. However, checking only whether a SASL property resolver is configured does not guarantee that the server requires encrypted RPC. This ticket is opened to further check and ensure that the datanode SASL property resolver has a QoP that includes auth-conf (PRIVACY). Note that the SASL QoP (Quality of Protection) negotiation may drop the RPC protection level from auth-conf (PRIVACY) to auth-int (integrity) or auth (authentication) only, which should be fine by design. cc: [~cnauroth], [~jnpandey] for additional feedback. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
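The QoP check proposed in HDFS-13081 can be sketched as follows. This is a minimal illustration, not Hadoop's actual Datanode#checkSecureConfig code; the class and method names are assumptions, though the {{dfs.data.transfer.protection}} value format (a comma-separated list of "authentication", "integrity", "privacy") follows the standard Hadoop convention.

```java
import java.util.Arrays;

// Hypothetical sketch: verify that a SASL protection setting actually
// includes auth-conf (PRIVACY) before treating data transfer as encrypted,
// rather than only checking that a SASL property resolver is configured.
public class QopCheck {

  // qopList mimics dfs.data.transfer.protection, e.g. "authentication,privacy".
  static boolean includesPrivacy(String qopList) {
    return Arrays.stream(qopList.split(","))
        .map(String::trim)
        .anyMatch(q -> q.equalsIgnoreCase("privacy"));
  }

  public static void main(String[] args) {
    System.out.println(includesPrivacy("authentication,privacy")); // true
    System.out.println(includesPrivacy("integrity"));              // false
  }
}
```

As the comment in the issue notes, a passing static check still allows the SASL negotiation to settle on a weaker QoP at runtime; this sketch only covers the configuration-time validation.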
[jira] [Comment Edited] (HDFS-13053) Track time to process packet in Datanode
[ https://issues.apache.org/jira/browse/HDFS-13053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343908#comment-16343908 ] Hanisha Koneru edited comment on HDFS-13053 at 1/29/18 8:01 PM: -Also, adding a new required key in {{datatransfer.proto}} will not be backwards compatible.- Sorry, I misread {{repeated}} as {{required}}. But still not sure about backwards compatibility. was (Author: hanishakoneru): -Also, adding a new required key in {{datatransfer.proto}} will not be backwards compatible.- Sorry, I misread {{repeated}} as {{required}}. > Track time to process packet in Datanode > > > Key: HDFS-13053 > URL: https://issues.apache.org/jira/browse/HDFS-13053 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Íñigo Goiri >Assignee: Pulkit Misra >Priority: Minor > Attachments: HDFS-13053.000.patch > > > We should track the time that each datanode takes to process a packet. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-13053) Track time to process packet in Datanode
[ https://issues.apache.org/jira/browse/HDFS-13053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343908#comment-16343908 ] Hanisha Koneru edited comment on HDFS-13053 at 1/29/18 7:57 PM: -Also, adding a new required key in {{datatransfer.proto}} will not be backwards compatible.- Sorry, I misread {{repeated}} as {{required}}. was (Author: hanishakoneru): Also, adding a new required key in {{datatransfer.proto}} will not be backwards compatible. > Track time to process packet in Datanode > > > Key: HDFS-13053 > URL: https://issues.apache.org/jira/browse/HDFS-13053 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Íñigo Goiri >Assignee: Pulkit Misra >Priority: Minor > Attachments: HDFS-13053.000.patch > > > We should track the time that each datanode takes to process a packet. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13053) Track time to process packet in Datanode
[ https://issues.apache.org/jira/browse/HDFS-13053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343908#comment-16343908 ] Hanisha Koneru commented on HDFS-13053: --- Also, adding a new required key in {{datatransfer.proto}} will not be backwards compatible. > Track time to process packet in Datanode > > > Key: HDFS-13053 > URL: https://issues.apache.org/jira/browse/HDFS-13053 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Íñigo Goiri >Assignee: Pulkit Misra >Priority: Minor > Attachments: HDFS-13053.000.patch > > > We should track the time that each datanode takes to process a packet. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
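The backwards-compatibility concern above can be illustrated with a proto2 fragment. The field names below are illustrative only, not the actual contents of {{datatransfer.proto}}:

```proto
// proto2: a "required" field makes messages that omit it fail to parse,
// so adding one breaks wire compatibility with older senders.
// "optional" and "repeated" fields may safely be added: old senders
// simply leave them unset/empty and old readers ignore them.
message ExamplePacketHeaderProto {
  required sfixed64 offsetInBlock = 1;       // pre-existing field

  // Hypothetical new field: must be optional (or repeated), not required,
  // to stay compatible with peers that do not yet set it.
  optional uint64 processingTimeNanos = 100;
}
```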
[jira] [Commented] (HDFS-13061) SaslDataTransferClient#checkTrustAndSend should not trust a partially trusted channel
[ https://issues.apache.org/jira/browse/HDFS-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343903#comment-16343903 ] Xiaoyu Yao commented on HDFS-13061: --- Thanks [~ajayydv] for working on this. The patch looks good to me overall. Here are a few minor issues: *SaslDataTransferClient.java* Line 209: Can we move the LOG.debug before line 206 and define two variables like below to minimize the logging overhead. {code:java} boolean localTrusted = … boolean remoteTrusted = … LOG.debug(...) if (...) {code} *TestSaslDataTransfer.java* Can we add two more test cases with test resolvers that return 1. False for both localTrusted/remoteTrusted 2. True for both localTrusted/remoteTrusted > SaslDataTransferClient#checkTrustAndSend should not trust a partially trusted > channel > - > > Key: HDFS-13061 > URL: https://issues.apache.org/jira/browse/HDFS-13061 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Ajay Kumar >Priority: Major > Attachments: HDFS-13061.000.patch > > > HDFS-5910 introduces encryption negotiation between client and server based > on a customizable TrustedChannelResolver class. The TrustedChannelResolver is > invoked on both client and server side. If the resolver indicates that the > channel is trusted, then the data transfer will not be encrypted even if > dfs.encrypt.data.transfer is set to true. > SaslDataTransferClient#checkTrustAndSend asks the channel resolver whether the > client and server address are trusted, respectively. It decides the channel > is untrusted only if both client and server are not trusted to enforce > encryption. *This ticket is opened to change it to not trust (and encrypt) if > either the client or server address is not trusted.* -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
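The change HDFS-13061 proposes boils down to how the two trust results are combined. A minimal sketch of the corrected logic, not the actual SaslDataTransferClient code (the method name here is an illustrative assumption):

```java
public class TrustSketch {

  // A channel should be treated as trusted only when BOTH endpoints are
  // trusted. A partially trusted channel (either side untrusted) must fall
  // back to encryption, hence AND rather than the buggy OR-style behavior.
  static boolean channelTrusted(boolean localTrusted, boolean remoteTrusted) {
    return localTrusted && remoteTrusted;
  }

  public static void main(String[] args) {
    // Partially trusted channels must not skip encryption.
    System.out.println(channelTrusted(true, false)); // false
    System.out.println(channelTrusted(false, true)); // false
    System.out.println(channelTrusted(true, true));  // true
  }
}
```

This also matches the two extra test cases requested above: resolvers returning false for both sides and true for both sides exercise the two unambiguous corners, while the mixed cases cover the actual bug.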
[jira] [Commented] (HDFS-13062) Provide support for JN to use separate journal disk per namespace
[ https://issues.apache.org/jira/browse/HDFS-13062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343896#comment-16343896 ] Hanisha Koneru commented on HDFS-13062: --- {quote}All directories are maintained in localDir because, when a JournalNode is started, it does not yet know which nameservice it belongs to until it gets some request from a namenode (e.g. format). That is the reason for maintaining all directories in localDir in a Federated setup. {quote} Yes, but once the journals are set up and we have the information about which dirs belong to the current JN, we should update the localDirs. I would suggest maintaining two lists - one with all the possible journal dirs from the config file and one with the actual dirs corresponding to the journal. {quote}Regarding JournalStatus: since we do not create the directory for a journal that does not correspond to this journalnode, even if we iterate through all localDirs, it will not display the journalStatus of other JNs. {quote} Even though the JN will not create the dir (in the example above, JN is {{jn1}} and the dir is {{/disk2/jn}}), the dir might have been created manually or by some other process. And in that case, its status will be wrongly displayed in the jmx. > Provide support for JN to use separate journal disk per namespace > - > > Key: HDFS-13062 > URL: https://issues.apache.org/jira/browse/HDFS-13062 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13062.00.patch, HDFS-13062.01.patch, > HDFS-13062.02.patch > > > In Federated HA setup, provide support for separate journal disk for each > namespace. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
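The two-list suggestion above can be sketched roughly as follows. This is a hypothetical illustration, not JournalNode code; the class, method, and directory-layout assumptions (a subdirectory named after the journal id under each configured dir) are illustrative only.

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: keep the full list of configured journal dirs, and
// derive a second, narrowed list of dirs that actually hold this journal,
// so status reporting (e.g. jmx) only covers dirs in real use by this JN.
public class JournalDirs {

  static List<File> dirsInUse(List<File> configuredDirs, String journalId) {
    List<File> inUse = new ArrayList<>();
    for (File dir : configuredDirs) {
      // Assumed layout: <configuredDir>/<journalId> exists once formatted.
      if (new File(dir, journalId).isDirectory()) {
        inUse.add(dir);
      }
    }
    return inUse;
  }
}
```

The point of the narrowed list is exactly the jmx concern raised above: a dir created manually or by another process is not blindly reported just because it appears in the config.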
[jira] [Commented] (HDFS-13043) RBF: Expose the state of the Routers in the federation
[ https://issues.apache.org/jira/browse/HDFS-13043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343848#comment-16343848 ] genericqa commented on HDFS-13043: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 24s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 42s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 43s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 43s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 45s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 3m 46s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 21s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 40s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 47m 38s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-13043 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12908184/HDFS-13043.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 80b547bc1bb0 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7fd287b | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | mvninstall | https://builds.apache.org/job/PreCommit-HDFS-Build/22860/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt | | compile | https://builds.apache.org/job/PreCommit-HDFS-Build/22860/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt | | javac | https://builds.apache.org/job/PreCommit-HDFS-Build/22860/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt | | mvnsite | https://builds.apache.org/job/PreCommit-HDFS-Build/22860/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt | | findbugs | https://builds.apache.
[jira] [Updated] (HDFS-13060) Adding a BlacklistBasedTrustedChannelResolver for TrustedChannelResolver
[ https://issues.apache.org/jira/browse/HDFS-13060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-13060: -- Status: Patch Available (was: In Progress) > Adding a BlacklistBasedTrustedChannelResolver for TrustedChannelResolver > > > Key: HDFS-13060 > URL: https://issues.apache.org/jira/browse/HDFS-13060 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Ajay Kumar >Priority: Major > Attachments: HDFS-13060.000.patch > > > HDFS-5910 introduces encryption negotiation between client and server based > on a customizable TrustedChannelResolver class. The TrustedChannelResolver is > invoked on both client and server side. If the resolver indicates that the > channel is trusted, then the data transfer will not be encrypted even if > dfs.encrypt.data.transfer is set to true. > The default trust channel resolver implementation returns false, indicating > that the channel is not trusted, which always enables encryption. HDFS-5910 > also added a built-in whitelist-based trust channel resolver. It allows you > to put the IP address/network mask of trusted clients/servers in whitelist files to > skip encryption for certain traffic. > This ticket is opened to add a blacklist-based trust channel resolver for > cases where only certain machines (IPs) are untrusted, without adding each trusted > IP individually. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
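The blacklist resolver proposed in HDFS-13060 inverts the whitelist model: every peer is trusted unless it appears in the blacklist, so only blacklisted peers force encryption. A minimal sketch of that idea, not the HDFS-13060 patch itself (the class shape and exact-IP matching are illustrative assumptions; the real resolver would presumably also support network masks like its whitelist counterpart):

```java
import java.util.Set;

// Hypothetical sketch of a blacklist-based trust resolver: trusted by
// default, untrusted (and therefore encrypted) only for blacklisted IPs.
public class BlacklistTrustSketch {

  private final Set<String> blacklistedIps;

  public BlacklistTrustSketch(Set<String> blacklistedIps) {
    this.blacklistedIps = blacklistedIps;
  }

  // Exact-match check only; a real implementation would also handle
  // CIDR/network-mask entries.
  public boolean isTrusted(String peerIp) {
    return !blacklistedIps.contains(peerIp);
  }
}
```

Contrast with the whitelist resolver, where the default answer is untrusted and only listed peers skip encryption; the blacklist variant avoids enumerating every trusted IP when almost all peers are trusted.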
[jira] [Updated] (HDFS-13062) Provide support for JN to use separate journal disk per namespace
[ https://issues.apache.org/jira/browse/HDFS-13062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13062: -- Attachment: HDFS-13062.02.patch > Provide support for JN to use separate journal disk per namespace > - > > Key: HDFS-13062 > URL: https://issues.apache.org/jira/browse/HDFS-13062 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13062.00.patch, HDFS-13062.01.patch, > HDFS-13062.02.patch > > > In Federated HA setup, provide support for separate journal disk for each > namespace. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org