[jira] [Commented] (HADOOP-16026) Replace incorrect use of system property user.name
[ https://issues.apache.org/jira/browse/HADOOP-16026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773698#comment-16773698 ] Dinesh Chitlangia commented on HADOOP-16026: [~jojochuang] Sure, I will follow the approach suggested in HDFS-14176. Once you review/commit that patch, I will make the changes for this one. Thanks! > Replace incorrect use of system property user.name > -- > > Key: HADOOP-16026 > URL: https://issues.apache.org/jira/browse/HADOOP-16026 > Project: Hadoop Common > Issue Type: Improvement > Environment: Kerberized >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Major > Attachments: HADOOP-16026.01.patch > > > This jira has been created to track the suggested changes for Hadoop Common > as identified in HDFS-14176. > The following occurrences need to be corrected: > Common/FileSystem L2233 > Common/AbstractFileSystem L451 > Common/KMSWebApp L91 > Common/SFTPConnectionPool L146 > Common/SshFenceByTcpPort L239 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
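For context, the direction identified in HDFS-14176 is to derive the user from Hadoop's security layer rather than from the JVM's user.name property, which in a Kerberized environment reflects the OS login rather than the authenticated principal. A minimal sketch of that idea (illustrative only, not the attached patch):

{code:java}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

public class EffectiveUser {
  // In a Kerberized deployment the JVM property "user.name" holds the OS
  // login, not the authenticated principal, so callers should ask the UGI
  // layer for the effective user instead.
  static String effectiveUser() throws IOException {
    // Resolves the Kerberos/proxy user when security is enabled
    return UserGroupInformation.getCurrentUser().getShortUserName();
  }

  public static void main(String[] args) throws IOException {
    System.out.println("user.name property:    " + System.getProperty("user.name"));
    System.out.println("effective Hadoop user: " + effectiveUser());
  }
}
{code}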
[jira] [Updated] (HADOOP-16026) Replace incorrect use of system property user.name
[ https://issues.apache.org/jira/browse/HADOOP-16026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HADOOP-16026: --- Status: Open (was: Patch Available) > Replace incorrect use of system property user.name > -- > > Key: HADOOP-16026 > URL: https://issues.apache.org/jira/browse/HADOOP-16026 > Project: Hadoop Common > Issue Type: Improvement > Environment: Kerberized >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Major > Attachments: HADOOP-16026.01.patch > > > This jira has been created to track the suggested changes for Hadoop Common > as identified in HDFS-14176. > The following occurrences need to be corrected: > Common/FileSystem L2233 > Common/AbstractFileSystem L451 > Common/KMSWebApp L91 > Common/SFTPConnectionPool L146 > Common/SshFenceByTcpPort L239 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus
[ https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773671#comment-16773671 ] Hadoop QA commented on HADOOP-15920: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-3.2 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 22s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 7s{color} | {color:green} branch-3.2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 15s{color} | {color:green} branch-3.2 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 45s{color} | {color:green} branch-3.2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 21s{color} | {color:green} branch-3.2 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 57s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 0s{color} | {color:green} branch-3.2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s{color} | {color:green} branch-3.2 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 22m 3s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 53s{color} | {color:orange} root: The patch generated 3 new + 10 unchanged - 0 fixed = 13 total (was 10) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 40s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 33s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 42s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 45s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}141m 12s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:63396be | | JIRA Issue | HADOOP-15920 | | GITHUB PR | https://github.com/apache/hadoop/pull/433 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 30fa865f3df9 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | branch-3.2 / ae8839e | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/15956/artif
[jira] [Commented] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus
[ https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773665#comment-16773665 ] Hadoop QA commented on HADOOP-15920: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-3.2 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 5m 38s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 56s{color} | {color:green} branch-3.2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 51s{color} | {color:green} branch-3.2 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 56s{color} | {color:green} branch-3.2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 51s{color} | {color:green} branch-3.2 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 21s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 21s{color} | {color:green} branch-3.2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s{color} | {color:green} branch-3.2 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 45s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 53s{color} | {color:orange} root: The patch generated 3 new + 10 unchanged - 0 fixed = 13 total (was 10) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 31s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 24s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 34s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 38s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}113m 43s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:63396be | | JIRA Issue | HADOOP-15920 | | GITHUB PR | https://github.com/apache/hadoop/pull/433 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux bfb5e424c863 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | branch-3.2 / ae8839e | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/15957/artifact/out/
[jira] [Commented] (HADOOP-16122) Re-login from keytab for multiple UGI will use the same and incorrect keytabPrincipal
[ https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773647#comment-16773647 ] Eric Yang commented on HADOOP-16122: {quote}`loginUserFromKeytabAndReturnUGI()` which allows users to create multiple UGI and we can login with multiple keytab at the same process{quote} loginUserFromKeytabAndReturnUGI is designed for long-running processes to log in a new Kerberos session when the maximum ticket lifetime has been reached. It is not intended for logging in with multiple keytabs to obtain the UGI of other users. {quote}I hope we can fix this if it is identified as bug even though it is not the best and secured solution for all users.{quote} This is not a bug; it is working as designed. > Re-login from keytab for multiple UGI will use the same and incorrect > keytabPrincipal > - > > Key: HADOOP-16122 > URL: https://issues.apache.org/jira/browse/HADOOP-16122 > Project: Hadoop Common > Issue Type: Bug > Components: auth >Reporter: chendihao >Priority: Major > > In our scenario, we have a service that allows multiple users to access HDFS > with their keytabs. The users use different Hadoop users and permissions to > access the HDFS files. This service runs multi-threaded, creates an independent > UGI object for each user, and uses that user's own UGI to create a Hadoop > FileSystem object to read/write HDFS. > > Since we have multiple Hadoop users in the same process, we have to use > `loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. > `loginUserFromKeytabAndReturnUGI` will not do the re-login automatically, so we > have to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` > before the Kerberos ticket expires. > > The issue is that `reloginFromKeytab` will always re-login with the same and > incorrect keytab instead of the one from the expected UGI object. Because of > this issue, we can only support multiple Hadoop users logging in with their own > keytabs the first time, but not re-logging in when the tickets expire. The logic > of login and re-login is slightly different, especially in updating the > global static properties, and that may be where the bug lies. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
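For reference, the intended long-running-service pattern described above looks roughly like this (a minimal sketch; the principal and keytab path are hypothetical placeholders):

{code:java}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabReloginLoop {
  public static void main(String[] args) throws IOException, InterruptedException {
    // Hypothetical principal and keytab, for illustration only
    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
        "service/host@EXAMPLE.COM", "/etc/security/keytabs/service.keytab");

    while (true) {
      // No-op until the TGT approaches the end of its lifetime, then re-logs in
      ugi.checkTGTAndReloginFromKeytab();
      // ... perform HDFS work as this UGI ...
      Thread.sleep(60_000L);
    }
  }
}
{code}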
[jira] [Updated] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus
[ https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lqjacklee updated HADOOP-15920: --- Attachment: HADOOP-15920-07.patch > get patch for S3a nextReadPos(), through Yetus > -- > > Key: HADOOP-15920 > URL: https://issues.apache.org/jira/browse/HADOOP-15920 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.1.1 >Reporter: Steve Loughran >Assignee: lqjacklee >Priority: Major > Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch, > HADOOP-15870-003.patch, HADOOP-15870-004.patch, HADOOP-15870-005.patch, > HADOOP-15920-06.patch, HADOOP-15920-07.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-16122) Re-login from keytab for multiple UGI will use the same and incorrect keytabPrincipal
[ https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773617#comment-16773617 ] chendihao edited comment on HADOOP-16122 at 2/21/19 2:58 AM: - Thanks [~eyang] for the suggestion. I have to clarify that the client-side JVM will not get the keytab in our scenario; all the HDFS operation logic is implemented on the server side, which is different from Oozie. I agree that using multiple keytabs is not the best solution since it requires clients to submit the auth files. But in the real world, we are not allowed to edit core-site.xml to add the ProxyUser even though it is a one-off operation. Back to this issue: Hadoop now provides `loginUserFromKeytabAndReturnUGI()`, which allows users to create multiple UGIs so we can log in with multiple keytabs in the same process. The problem is in `reloginFromKeytab()`, which uses the incorrect keytabPrincipal. I think we can fix that by using the correct keytabPrincipal without the static properties, and I will submit a patch for this. Hadoop is great because it suits all kinds of customer scenarios. I hope we can fix this if it is identified as a bug, even though it is not the best or most secure solution for all users. was (Author: tobe): Thanks [~eyang] for the suggestion. I have to declare that the client side JVM will not get the keytab in our scenario and all the logic of HDFS operation are implemented in server side which is different form Oozie. I agree that using multi-keytab is not to best solution which requires clients to submit the auth files. But in the real world, we are not allowed to edit the core-site.xml to add the ProxyUser even though it is the one-off operation. Back to this issue, now Hadoop provide `loginUserFromKeytabAndReturnUGI()` and allow to create multiple UGI and we can login with multiple keytab at the same process. The problem is in `reloginFromKeytab()` which uses the incorrect keytabPrincipal. I think we can fix that by using the correct keytabPrincipal without the static properties and submit the path for this. Hadoop is great because it suits all kinds of customer's scenarios. I hope we can fix this if it is identified as bug even though it is not the best and secured solution for all users. > Re-login from keytab for multiple UGI will use the same and incorrect > keytabPrincipal > - > > Key: HADOOP-16122 > URL: https://issues.apache.org/jira/browse/HADOOP-16122 > Project: Hadoop Common > Issue Type: Bug > Components: auth >Reporter: chendihao >Priority: Major > > In our scenario, we have a service that allows multiple users to access HDFS > with their keytabs. The users use different Hadoop users and permissions to > access the HDFS files. This service runs multi-threaded, creates an independent > UGI object for each user, and uses that user's own UGI to create a Hadoop > FileSystem object to read/write HDFS. > > Since we have multiple Hadoop users in the same process, we have to use > `loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. > `loginUserFromKeytabAndReturnUGI` will not do the re-login automatically, so we > have to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` > before the Kerberos ticket expires. > > The issue is that `reloginFromKeytab` will always re-login with the same and > incorrect keytab instead of the one from the expected UGI object. Because of > this issue, we can only support multiple Hadoop users logging in with their own > keytabs the first time, but not re-logging in when the tickets expire. The logic > of login and re-login is slightly different, especially in updating the > global static properties, and that may be where the bug lies. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16122) Re-login from keytab for multiple UGI will use the same and incorrect keytabPrincipal
[ https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773617#comment-16773617 ] chendihao commented on HADOOP-16122: Thanks [~eyang] for the suggestion. I have declared that the client-side JVM will not get the keytab in our scenario; all the HDFS operation logic is implemented on the server side, which is different from Oozie. I agree that using multiple keytabs is not the best solution since it requires clients to submit the auth files. But in the real world, we are not allowed to edit core-site.xml to add the ProxyUser even though it is a one-off operation. Back to this issue: Hadoop now provides `loginUserFromKeytabAndReturnUGI()`, which allows creating multiple UGIs so we can log in with multiple keytabs in the same process. The problem is in `reloginFromKeytab()`, which uses the incorrect keytabPrincipal. I think we can fix that by using the correct keytabPrincipal without the static properties, and I will submit a patch for this. Hadoop is great because it suits all kinds of customer scenarios. I hope we can fix this if it is identified as a bug, even though it is not the best or most secure solution for all users. > Re-login from keytab for multiple UGI will use the same and incorrect > keytabPrincipal > - > > Key: HADOOP-16122 > URL: https://issues.apache.org/jira/browse/HADOOP-16122 > Project: Hadoop Common > Issue Type: Bug > Components: auth >Reporter: chendihao >Priority: Major > > In our scenario, we have a service that allows multiple users to access HDFS > with their keytabs. The users use different Hadoop users and permissions to > access the HDFS files. This service runs multi-threaded, creates an independent > UGI object for each user, and uses that user's own UGI to create a Hadoop > FileSystem object to read/write HDFS. > > Since we have multiple Hadoop users in the same process, we have to use > `loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. > `loginUserFromKeytabAndReturnUGI` will not do the re-login automatically, so we > have to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` > before the Kerberos ticket expires. > > The issue is that `reloginFromKeytab` will always re-login with the same and > incorrect keytab instead of the one from the expected UGI object. Because of > this issue, we can only support multiple Hadoop users logging in with their own > keytabs the first time, but not re-logging in when the tickets expire. The logic > of login and re-login is slightly different, especially in updating the > global static properties, and that may be where the bug lies. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
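To make the multi-tenant pattern in this report concrete, here is a sketch of per-user UGI creation and use; the principal and keytab arguments are hypothetical placeholders, and this shows the usage being discussed, not a fix:

{code:java}
import java.io.IOException;
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

public class PerUserFileSystems {
  // Log in one UGI per tenant and build a FileSystem under that identity,
  // so HDFS permissions are evaluated per user.
  static FileSystem fileSystemFor(String principal, String keytab, Configuration conf)
      throws IOException, InterruptedException {
    UserGroupInformation ugi =
        UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab);
    return ugi.doAs((PrivilegedExceptionAction<FileSystem>) () -> FileSystem.get(conf));
  }
}
{code}

The report above is that a later `reloginFromKeytab()` on such UGIs picks up keytab/principal state from global static fields rather than from the UGI instance.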
[jira] [Comment Edited] (HADOOP-16122) Re-login from keytab for multiple UGI will use the same and incorrect keytabPrincipal
[ https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773617#comment-16773617 ] chendihao edited comment on HADOOP-16122 at 2/21/19 2:56 AM: - Thanks [~eyang] for the suggestion. I have to declare that the client-side JVM will not get the keytab in our scenario; all the HDFS operation logic is implemented on the server side, which is different from Oozie. I agree that using multiple keytabs is not the best solution since it requires clients to submit the auth files. But in the real world, we are not allowed to edit core-site.xml to add the ProxyUser even though it is a one-off operation. Back to this issue: Hadoop now provides `loginUserFromKeytabAndReturnUGI()`, which allows creating multiple UGIs so we can log in with multiple keytabs in the same process. The problem is in `reloginFromKeytab()`, which uses the incorrect keytabPrincipal. I think we can fix that by using the correct keytabPrincipal without the static properties, and I will submit a patch for this. Hadoop is great because it suits all kinds of customer scenarios. I hope we can fix this if it is identified as a bug, even though it is not the best or most secure solution for all users. was (Author: tobe): Thanks [~eyang] for the suggestion. I have declared that the client side JVM will not get the keytab in our scenario and all the logic of HDFS operation are implemented in server side which is different form Oozie. I agree that using multi-keytab is not to best solution which requires clients to submit the auth files. But in the real world, we are not allowed to edit the core-site.xml to add the ProxyUser even though it is the one-off operation. Back to this issue, now Hadoop provide `loginUserFromKeytabAndReturnUGI()` and allow to create multiple UGI and we can login with multiple keytab at the same process. The problem is in `reloginFromKeytab()` which uses the incorrect keytabPrincipal. I think we can fix that by using the correct keytabPrincipal without the static properties and submit the path for this. Hadoop is great because it suits all kinds of customer's scenarios. I hope we can fix this if it is identified as bug even though it is not the best and secured solution for all users. > Re-login from keytab for multiple UGI will use the same and incorrect > keytabPrincipal > - > > Key: HADOOP-16122 > URL: https://issues.apache.org/jira/browse/HADOOP-16122 > Project: Hadoop Common > Issue Type: Bug > Components: auth >Reporter: chendihao >Priority: Major > > In our scenario, we have a service that allows multiple users to access HDFS > with their keytabs. The users use different Hadoop users and permissions to > access the HDFS files. This service runs multi-threaded, creates an independent > UGI object for each user, and uses that user's own UGI to create a Hadoop > FileSystem object to read/write HDFS. > > Since we have multiple Hadoop users in the same process, we have to use > `loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. > `loginUserFromKeytabAndReturnUGI` will not do the re-login automatically, so we > have to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` > before the Kerberos ticket expires. > > The issue is that `reloginFromKeytab` will always re-login with the same and > incorrect keytab instead of the one from the expected UGI object. Because of > this issue, we can only support multiple Hadoop users logging in with their own > keytabs the first time, but not re-logging in when the tickets expire. The logic > of login and re-login is slightly different, especially in updating the > global static properties, and that may be where the bug lies. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus
[ https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773615#comment-16773615 ] lqjacklee commented on HADOOP-15920: HADOOP-15920-06.patch fixes the checkstyle issues. > get patch for S3a nextReadPos(), through Yetus > -- > > Key: HADOOP-15920 > URL: https://issues.apache.org/jira/browse/HADOOP-15920 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.1.1 >Reporter: Steve Loughran >Assignee: lqjacklee >Priority: Major > Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch, > HADOOP-15870-003.patch, HADOOP-15870-004.patch, HADOOP-15870-005.patch, > HADOOP-15920-06.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus
[ https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lqjacklee updated HADOOP-15920: --- Attachment: HADOOP-15920-06.patch > get patch for S3a nextReadPos(), through Yetus > -- > > Key: HADOOP-15920 > URL: https://issues.apache.org/jira/browse/HADOOP-15920 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.1.1 >Reporter: Steve Loughran >Assignee: lqjacklee >Priority: Major > Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch, > HADOOP-15870-003.patch, HADOOP-15870-004.patch, HADOOP-15870-005.patch, > HADOOP-15920-06.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15813) Enable more reliable SSL connection reuse
[ https://issues.apache.org/jira/browse/HADOOP-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773612#comment-16773612 ] Hudson commented on HADOOP-15813: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16014 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/16014/]) HADOOP-15813. Enable more reliable SSL connection reuse. Contributed by (weichiu: rev a87e458432609b7a35a2abd6410b02e8a2ffc974) * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java > Enable more reliable SSL connection reuse > - > > Key: HADOOP-15813 > URL: https://issues.apache.org/jira/browse/HADOOP-15813 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.6.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Major > Fix For: 2.10.0, 3.0.4, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3 > > Attachments: HADOOP-15813.patch, HADOOP-15813.patch, KMS > throughput.png, profiler after HADOOP-15813.png, profiler prior to > HADOOP-15813.png > > > The Java keep-alive cache relies on instance equivalence of the SSL socket > factory. In many Java versions, SSLContext#getSocketFactory always returns a > new instance, which completely breaks the cache. Clients flooding a service > with lingering per-request connections can lead to port exhaustion. The > Hadoop SSLFactory should cache the socket factory associated with the context. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
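The caching the description calls for amounts to memoizing the factory, so the JDK keep-alive cache, which is keyed on socket-factory identity, can actually reuse connections. A minimal sketch of the idea (not the committed patch itself):

{code:java}
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;

public class CachingSslFactory {
  private final SSLContext context;
  private SSLSocketFactory socketFactory; // created once, then reused

  public CachingSslFactory(SSLContext context) {
    this.context = context;
  }

  // SSLContext#getSocketFactory may return a new instance per call on some
  // JDKs; handing out one cached instance keeps keep-alive connections reusable.
  public synchronized SSLSocketFactory getSocketFactory() {
    if (socketFactory == null) {
      socketFactory = context.getSocketFactory();
    }
    return socketFactory;
  }
}
{code}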
[jira] [Updated] (HADOOP-15813) Enable more reliable SSL connection reuse
[ https://issues.apache.org/jira/browse/HADOOP-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-15813: - Resolution: Fixed Status: Resolved (was: Patch Available) Pushed the patch to trunk ~ all the way to branch-2.8. Thanks [~daryn] for identifying such a big perf boost. Really appreciate the work! > Enable more reliable SSL connection reuse > - > > Key: HADOOP-15813 > URL: https://issues.apache.org/jira/browse/HADOOP-15813 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.6.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Major > Fix For: 2.10.0, 3.0.4, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3 > > Attachments: HADOOP-15813.patch, HADOOP-15813.patch, KMS > throughput.png, profiler after HADOOP-15813.png, profiler prior to > HADOOP-15813.png > > > The Java keep-alive cache relies on instance equivalence of the SSL socket > factory. In many Java versions, SSLContext#getSocketFactory always returns a > new instance, which completely breaks the cache. Clients flooding a service > with lingering per-request connections can lead to port exhaustion. The > Hadoop SSLFactory should cache the socket factory associated with the context. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15813) Enable more reliable SSL connection reuse
[ https://issues.apache.org/jira/browse/HADOOP-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-15813: - Fix Version/s: 3.1.3 2.9.3 3.2.1 2.8.6 3.3.0 3.0.4 2.10.0 > Enable more reliable SSL connection reuse > - > > Key: HADOOP-15813 > URL: https://issues.apache.org/jira/browse/HADOOP-15813 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.6.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Major > Fix For: 2.10.0, 3.0.4, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3 > > Attachments: HADOOP-15813.patch, HADOOP-15813.patch, KMS > throughput.png, profiler after HADOOP-15813.png, profiler prior to > HADOOP-15813.png > > > The Java keep-alive cache relies on instance equivalence of the SSL socket > factory. In many Java versions, SSLContext#getSocketFactory always returns a > new instance, which completely breaks the cache. Clients flooding a service > with lingering per-request connections can lead to port exhaustion. The > Hadoop SSLFactory should cache the socket factory associated with the context. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16126) ipc.Client.stop() may sleep too long to wait for all connections
[ https://issues.apache.org/jira/browse/HADOOP-16126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773599#comment-16773599 ] Hadoop QA commented on HADOOP-16126: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 37s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 41s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 42s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 102 unchanged - 0 fixed = 103 total (was 102) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 8s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 32s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 94m 5s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HADOOP-16126 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12959521/c16126_20190220.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ad1beb4a9d3c 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 371a6db | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/15953/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15953/testReport/ | | Max. process+thread count | 1537 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hado
[jira] [Commented] (HADOOP-16127) In ipc.Client, put a new connection could happen after stop
[ https://issues.apache.org/jira/browse/HADOOP-16127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773597#comment-16773597 ] Hadoop QA commented on HADOOP-16127: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 25s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 53s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 46s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 103 unchanged - 3 fixed = 104 total (was 106) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 19s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 18s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 39s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 92m 58s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HADOOP-16127 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12959520/c16127_20190220.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 16c896228764 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 371a6db | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/15954/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15954/testReport/ | | Max. process+thread count | 1388 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hadoo
[jira] [Commented] (HADOOP-15813) Enable more reliable SSL connection reuse
[ https://issues.apache.org/jira/browse/HADOOP-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773590#comment-16773590 ] Hadoop QA commented on HADOOP-15813: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s{color} | {color:red} HADOOP-15813 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-15813 | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15955/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Enable more reliable SSL connection reuse > - > > Key: HADOOP-15813 > URL: https://issues.apache.org/jira/browse/HADOOP-15813 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.6.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Major > Attachments: HADOOP-15813.patch, HADOOP-15813.patch, KMS > throughput.png, profiler after HADOOP-15813.png, profiler prior to > HADOOP-15813.png > > > The Java keep-alive cache relies on instance equivalence of the SSL socket > factory. In many Java versions, SSLContext#getSocketFactory always returns a > new instance, which completely breaks the cache. Clients flooding a service > with lingering per-request connections can lead to port exhaustion. The > Hadoop SSLFactory should cache the socket factory associated with the context. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15743) Jetty and SSL tunings to stabilize KMS performance
[ https://issues.apache.org/jira/browse/HADOOP-15743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773587#comment-16773587 ] Wei-Chiu Chuang commented on HADOOP-15743: -- Note: https://bugs.openjdk.java.net/browse/JDK-8210985 In JDK 12, the default javax.net.ssl.sessionCacheSize is reduced to 20480. Reducing the SSL session cache size doesn't seem to improve throughput, even though it's probably a good idea to do it. I reduced max threads to 16 or 32, which seems to give better performance than the default. We set idle timeout = 1s. I saw as many as 7k open file descriptors on the KMS server when it handled ~2.9k decrypt_eek per second. It seems the LowResourceMonitor thread is not created by default; at least in Hadoop 3, the KMS server doesn't have this thread. > Jetty and SSL tunings to stabilize KMS performance > --- > > Key: HADOOP-15743 > URL: https://issues.apache.org/jira/browse/HADOOP-15743 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.8.0 >Reporter: Daryn Sharp >Priority: Major > > The KMS has very low throughput with high client failure rates. The > following config options will "stabilize" the KMS under load: > # Disable ECDH algos because java's SSL engine is inexplicably HORRIBLE. > # Reduce SSL session cache size (unlimited) and ttl (24h). The memory cache > has very poor performance and causes extreme GC collection pressure. Load > balancing diminishes the effectiveness of the cache to 1/N-hosts anyway. > ** -Djavax.net.ssl.sessionCacheSize=1000 > ** -Djavax.net.ssl.sessionCacheTimeout=6 > # Completely disable the LowResourceMonitor thread to stop jetty from > immediately closing incoming connections during connection bursts. Client > retries cause jetty to remain in a low resource state until many clients fail > and cause thousands of sockets to linger in various close-related states. > # Set min/max threads to 4x processors. Jetty recommends only 50 to 500 > threads. Java's SSL engine has excessive synchronization that limits > performance anyway. > # Set https idle timeout to 6s. > # Significantly increase max fds to at least 128k. Recommend using a VIP > load balancer with a lower limit. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
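The session-cache settings above are normally passed as JVM flags; roughly the same knobs are also reachable through the JSSE API. A sketch under that assumption (the values are illustrative, not recommendations):

{code:java}
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSessionContext;

public class SessionCacheTuning {
  public static void main(String[] args) throws Exception {
    // Roughly equivalent to -Djavax.net.ssl.sessionCacheSize=1000 plus a
    // session TTL, applied to the default context's client-side session cache.
    SSLSessionContext sessions = SSLContext.getDefault().getClientSessionContext();
    sessions.setSessionCacheSize(1000);  // bound the cache (default is unlimited)
    sessions.setSessionTimeout(60 * 60); // TTL in seconds (default is 24h)
  }
}
{code}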
[jira] [Assigned] (HADOOP-16123) Lack of protoc in docker
[ https://issues.apache.org/jira/browse/HADOOP-16123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lqjacklee reassigned HADOOP-16123: -- Assignee: (was: lqjacklee) > Lack of protoc in docker > > > Key: HADOOP-16123 > URL: https://issues.apache.org/jira/browse/HADOOP-16123 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.3.0 >Reporter: lqjacklee >Priority: Minor > > While building the source code, do the steps below: > > 1. run the docker daemon > 2. ./start-build-env.sh > 3. sudo mvn clean install -DskipTests -Pnative > The build fails with: > [ERROR] Failed to execute goal > org.apache.hadoop:hadoop-maven-plugins:3.3.0-SNAPSHOT:protoc (compile-protoc) > on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: > 'protoc --version' did not return a version -> > [Help 1] > However, when executing the command `whereis protoc`: > liu@a65d187055f9:~/hadoop$ whereis protoc > protoc: /opt/protobuf/bin/protoc > > The PATH value: > /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/cmake/bin:/opt/protobuf/bin > > liu@a65d187055f9:~/hadoop$ protoc --version > libprotoc 2.5.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15813) Enable more reliable SSL connection reuse
[ https://issues.apache.org/jira/browse/HADOOP-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773582#comment-16773582 ] Wei-Chiu Chuang commented on HADOOP-15813: -- KMS decrypt_eek throughput !KMS throughput.png! > Enable more reliable SSL connection reuse > - > > Key: HADOOP-15813 > URL: https://issues.apache.org/jira/browse/HADOOP-15813 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.6.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Major > Attachments: HADOOP-15813.patch, HADOOP-15813.patch, KMS > throughput.png, profiler after HADOOP-15813.png, profiler prior to > HADOOP-15813.png > > > The Java keep-alive cache relies on instance equivalence of the SSL socket > factory. In many Java versions, SSLContext#getSocketFactory always returns a > new instance, which completely breaks the cache. Clients flooding a service > with lingering per-request connections can lead to port exhaustion. The > Hadoop SSLFactory should cache the socket factory associated with the context. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15813) Enable more reliable SSL connection reuse
[ https://issues.apache.org/jira/browse/HADOOP-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-15813: - Attachment: KMS throughput.png > Enable more reliable SSL connection reuse > - > > Key: HADOOP-15813 > URL: https://issues.apache.org/jira/browse/HADOOP-15813 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.6.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Major > Attachments: HADOOP-15813.patch, HADOOP-15813.patch, KMS > throughput.png, profiler after HADOOP-15813.png, profiler prior to > HADOOP-15813.png > > > The Java keep-alive cache relies on instance equivalence of the SSL socket > factory. In many Java versions, SSLContext#getSocketFactory always returns a > new instance, which completely breaks the cache. Clients flooding a service > with lingering per-request connections can lead to port exhaustion. The > Hadoop SSLFactory should cache the socket factory associated with the context. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus
[ https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773579#comment-16773579 ] lqjacklee commented on HADOOP-15920: Thanks, I will format it. > get patch for S3a nextReadPos(), through Yetus > -- > > Key: HADOOP-15920 > URL: https://issues.apache.org/jira/browse/HADOOP-15920 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.1.1 >Reporter: Steve Loughran >Assignee: lqjacklee >Priority: Major > Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch, > HADOOP-15870-003.patch, HADOOP-15870-004.patch, HADOOP-15870-005.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-15813) Enable more reliable SSL connection reuse
[ https://issues.apache.org/jira/browse/HADOOP-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773552#comment-16773552 ] Wei-Chiu Chuang edited comment on HADOOP-15813 at 2/21/19 12:58 AM: For reference, here's the profiler output of the KMS server prior to the patch: !profiler prior to HADOOP-15813.png! After: !profiler after HADOOP-15813.png! was (Author: jojochuang): For reference, here's the profiler output of KMS server, prior to the patch: !Screen Shot 2019-02-20 at 3.37.05 PM.png! > Enable more reliable SSL connection reuse > - > > Key: HADOOP-15813 > URL: https://issues.apache.org/jira/browse/HADOOP-15813 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.6.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Major > Attachments: HADOOP-15813.patch, HADOOP-15813.patch, profiler after > HADOOP-15813.png, profiler prior to HADOOP-15813.png > > > The Java keep-alive cache relies on instance equivalence of the SSL socket > factory. In many Java versions, SSLContext#getSocketFactory always returns a > new instance, which completely breaks the cache. Clients flooding a service > with lingering per-request connections can lead to port exhaustion. The > Hadoop SSLFactory should cache the socket factory associated with the context. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15813) Enable more reliable SSL connection reuse
[ https://issues.apache.org/jira/browse/HADOOP-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-15813: - Attachment: profiler prior to HADOOP-15813.png profiler after HADOOP-15813.png > Enable more reliable SSL connection reuse > - > > Key: HADOOP-15813 > URL: https://issues.apache.org/jira/browse/HADOOP-15813 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.6.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Major > Attachments: HADOOP-15813.patch, HADOOP-15813.patch, profiler after > HADOOP-15813.png, profiler prior to HADOOP-15813.png > > > The Java keep-alive cache relies on instance equivalence of the SSL socket > factory. In many Java versions, SSLContext#getSocketFactory always returns a > new instance, which completely breaks the cache. Clients flooding a service > with lingering per-request connections can lead to port exhaustion. The > Hadoop SSLFactory should cache the socket factory associated with the context. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15813) Enable more reliable SSL connection reuse
[ https://issues.apache.org/jira/browse/HADOOP-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773551#comment-16773551 ] Wei-Chiu Chuang commented on HADOOP-15813: -- +1 The patch dramatically improves KMS throughput from 2900 decrypt_eek/s to 8100 decrypt_eek/s in my test. Benchmark setup: {noformat} 4-node cluster, each node 4 core Intel Xeon 2.5Ghz, 25GB memory CentOS 7.4, CDH 6.2 + CM 6.2, Cloudera Navigator Key Trustee Oracle Java 8u181 One KMS server. Heap: 5GB, max thread: 32 {noformat} Ran the KMS benchmark tool (HADOOP-15967) on 3 other nodes to fully saturate the KMS server: {noformat} HADOOP_CLIENT_OPTS="-Xms10g -Xmx10g" hadoop jar /tmp/hadoop-kms-3.0.0-cdh6.1.0-tests.jar org.apache.hadoop.crypto.key.kms.server.KMSBenchmark -op decrypt -threads 100 -numops 200 {noformat} Additionally, used heap size was 2GB (prior to the patch, the heap would grow to the max heap size), and open file descriptors stayed at 600 (prior to the patch, they would grow to 7000). > Enable more reliable SSL connection reuse > - > > Key: HADOOP-15813 > URL: https://issues.apache.org/jira/browse/HADOOP-15813 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.6.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Major > Attachments: HADOOP-15813.patch, HADOOP-15813.patch > > > The Java keep-alive cache relies on instance equivalence of the SSL socket > factory. In many Java versions, SSLContext#getSocketFactory always returns a > new instance, which completely breaks the cache. Clients flooding a service > with lingering per-request connections can lead to port exhaustion. The > Hadoop SSLFactory should cache the socket factory associated with the context. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15813) Enable more reliable SSL connection reuse
[ https://issues.apache.org/jira/browse/HADOOP-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773552#comment-16773552 ] Wei-Chiu Chuang commented on HADOOP-15813: -- For reference, here's the profiler output of the KMS server, prior to the patch: !Screen Shot 2019-02-20 at 3.37.05 PM.png! > Enable more reliable SSL connection reuse > - > > Key: HADOOP-15813 > URL: https://issues.apache.org/jira/browse/HADOOP-15813 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.6.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Major > Attachments: HADOOP-15813.patch, HADOOP-15813.patch > > > The Java keep-alive cache relies on instance equivalence of the SSL socket > factory. In many Java versions, SSLContext#getSocketFactory always returns a > new instance, which completely breaks the cache. Clients flooding a service > with lingering per-request connections can lead to port exhaustion. The > Hadoop SSLFactory should cache the socket factory associated with the context. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16125) Support multiple bind users in LdapGroupsMapping
[ https://issues.apache.org/jira/browse/HADOOP-16125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773540#comment-16773540 ] Hadoop QA commented on HADOOP-16125: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 8s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 40s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 22 unchanged - 2 fixed = 22 total (was 24) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 20s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 26s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 38s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}100m 12s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HADOOP-16125 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12959508/HADOOP-16125.004.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux b53c1ac34949 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 1bea785 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15952/testReport/ | | Max. process+thread count | 1350 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15952/conso
[jira] [Commented] (HADOOP-16127) In ipc.Client, put a new connection could happen after stop
[ https://issues.apache.org/jira/browse/HADOOP-16127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773522#comment-16773522 ] Tsz Wo Nicholas Sze commented on HADOOP-16127: -- > L475: include the toString value of the caught IOE in the new one, so if the > full stack trace is lost, the root cause is preserved Sure. > L1360: you've removed all sleeps here entirely. Is that OK? Yes, we use wait-notify instead of sleep to minimize the sleep/wait time. Thanks for the review. Here is a new patch: c16127_20190220.patch > In ipc.Client, put a new connection could happen after stop > --- > > Key: HADOOP-16127 > URL: https://issues.apache.org/jira/browse/HADOOP-16127 > Project: Hadoop Common > Issue Type: Bug > Components: ipc >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze >Priority: Major > Attachments: c16127_20190219.patch, c16127_20190220.patch > > > In getConnection(..), running can be initially true but becomes false before > putIfAbsent. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
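To make the wait-notify point above concrete, here is a standalone model of the idea (names are illustrative; the actual c16127 patch works on ipc.Client's connection table):
{code}
import java.util.HashMap;
import java.util.Map;

class ConnectionTableSketch {
  private final Map<String, Object> connections = new HashMap<>();

  void add(String id, Object connection) {
    synchronized (connections) {
      connections.put(id, connection);
    }
  }

  void remove(String id) {
    synchronized (connections) {
      connections.remove(id);
      if (connections.isEmpty()) {
        connections.notifyAll(); // wake stop() the moment the last one closes
      }
    }
  }

  void stop() throws InterruptedException {
    synchronized (connections) {
      // No fixed sleep: block until notified, so the wait time is minimal.
      while (!connections.isEmpty()) {
        connections.wait();
      }
    }
  }
}
{code}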
[jira] [Commented] (HADOOP-16126) ipc.Client.stop() may sleep too long to wait for all connections
[ https://issues.apache.org/jira/browse/HADOOP-16126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773525#comment-16773525 ] Tsz Wo Nicholas Sze commented on HADOOP-16126: -- c16126_20190220.patch: address [~ste...@apache.org]'s comments. > ipc.Client.stop() may sleep too long to wait for all connections > > > Key: HADOOP-16126 > URL: https://issues.apache.org/jira/browse/HADOOP-16126 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze >Priority: Major > Attachments: c16126_20190219.patch, c16126_20190220.patch > > > {code} > //Client.java > public void stop() { > ... > // wait until all connections are closed > while (!connections.isEmpty()) { > try { > Thread.sleep(100); > } catch (InterruptedException e) { > } > } > ... > } > {code} > In the code above, the sleep time is 100ms. We found that simply changing > the sleep time to 10ms could improve a Hive job running time by 10x. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16127) In ipc.Client, put a new connection could happen after stop
[ https://issues.apache.org/jira/browse/HADOOP-16127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773526#comment-16773526 ] Tsz Wo Nicholas Sze commented on HADOOP-16127: -- > ps: set your version info for where you intend to apply this I have just set the target version to 3.1.2. Thanks. > In ipc.Client, put a new connection could happen after stop > --- > > Key: HADOOP-16127 > URL: https://issues.apache.org/jira/browse/HADOOP-16127 > Project: Hadoop Common > Issue Type: Bug > Components: ipc >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze >Priority: Major > Attachments: c16127_20190219.patch, c16127_20190220.patch > > > In getConnection(..), running can be initially true but becomes false before > putIfAbsent. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16127) In ipc.Client, put a new connection could happen after stop
[ https://issues.apache.org/jira/browse/HADOOP-16127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HADOOP-16127: - Target Version/s: 3.1.2 > In ipc.Client, put a new connection could happen after stop > --- > > Key: HADOOP-16127 > URL: https://issues.apache.org/jira/browse/HADOOP-16127 > Project: Hadoop Common > Issue Type: Bug > Components: ipc >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze >Priority: Major > Attachments: c16127_20190219.patch, c16127_20190220.patch > > > In getConnection(..), running can be initially true but becomes false before > putIfAbsent. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16126) ipc.Client.stop() may sleep too long to wait for all connections
[ https://issues.apache.org/jira/browse/HADOOP-16126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HADOOP-16126: - Attachment: c16126_20190220.patch > ipc.Client.stop() may sleep too long to wait for all connections > > > Key: HADOOP-16126 > URL: https://issues.apache.org/jira/browse/HADOOP-16126 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze >Priority: Major > Attachments: c16126_20190219.patch, c16126_20190220.patch > > > {code} > //Client.java > public void stop() { > ... > // wait until all connections are closed > while (!connections.isEmpty()) { > try { > Thread.sleep(100); > } catch (InterruptedException e) { > } > } > ... > } > {code} > In the code above, the sleep time is 100ms. We found that simply changing > the sleep time to 10ms could improve a Hive job running time by 10x. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16127) In ipc.Client, put a new connection could happen after stop
[ https://issues.apache.org/jira/browse/HADOOP-16127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HADOOP-16127: - Attachment: c16127_20190220.patch > In ipc.Client, put a new connection could happen after stop > --- > > Key: HADOOP-16127 > URL: https://issues.apache.org/jira/browse/HADOOP-16127 > Project: Hadoop Common > Issue Type: Bug > Components: ipc >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze >Priority: Major > Attachments: c16127_20190219.patch, c16127_20190220.patch > > > In getConnection(..), running can be initially true but becomes false before > putIfAbsent. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16127) In ipc.Client, put a new connection could happen after stop
[ https://issues.apache.org/jira/browse/HADOOP-16127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HADOOP-16127: - Attachment: (was: c16127_20190220.patch) > In ipc.Client, put a new connection could happen after stop > --- > > Key: HADOOP-16127 > URL: https://issues.apache.org/jira/browse/HADOOP-16127 > Project: Hadoop Common > Issue Type: Bug > Components: ipc >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze >Priority: Major > Attachments: c16127_20190219.patch, c16127_20190220.patch > > > In getConnection(..), running can be initially true but becomes false before > putIfAbsent. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16127) In ipc.Client, put a new connection could happen after stop
[ https://issues.apache.org/jira/browse/HADOOP-16127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HADOOP-16127: - Attachment: c16127_20190220.patch > In ipc.Client, put a new connection could happen after stop > --- > > Key: HADOOP-16127 > URL: https://issues.apache.org/jira/browse/HADOOP-16127 > Project: Hadoop Common > Issue Type: Bug > Components: ipc >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze >Priority: Major > Attachments: c16127_20190219.patch, c16127_20190220.patch > > > In getConnection(..), running can be initially true but becomes false before > putIfAbsent. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15811) Optimizations for Java's TLS performance
[ https://issues.apache.org/jira/browse/HADOOP-15811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773494#comment-16773494 ] Wei-Chiu Chuang commented on HADOOP-15811: -- Hmm, on 8u181 I am not seeing much difference after applying the configs. > Optimizations for Java's TLS performance > > > Key: HADOOP-15811 > URL: https://issues.apache.org/jira/browse/HADOOP-15811 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 1.0.0 >Reporter: Daryn Sharp >Priority: Major > > Java defaults to using /dev/random and disables intrinsic methods used in hot > code paths. Both cause highly synchronized impls to be used that > significantly degrade performance. > * -Djava.security.egd=file:/dev/urandom > * -XX:+UseMontgomerySquareIntrinsic > * -XX:+UseMontgomeryMultiplyIntrinsic > * -XX:+UseSquareToLenIntrinsic > * -XX:+UseMultiplyToLenIntrinsic > These settings significantly boost KMS server performance. Under load, > threads are not jammed in the SSLEngine. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
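For anyone trying these settings, one way to apply them is via the JVM options of the affected service, e.g. {{HADOOP_OPTS}} in hadoop-env.sh (illustrative only; a KMS deployment may use a different opts variable):
{noformat}
# hadoop-env.sh (sketch; adjust for your deployment)
export HADOOP_OPTS="$HADOOP_OPTS -Djava.security.egd=file:/dev/urandom \
  -XX:+UseMontgomerySquareIntrinsic -XX:+UseMontgomeryMultiplyIntrinsic \
  -XX:+UseSquareToLenIntrinsic -XX:+UseMultiplyToLenIntrinsic"
{noformat}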
[jira] [Commented] (HADOOP-16125) Support multiple bind users in LdapGroupsMapping
[ https://issues.apache.org/jira/browse/HADOOP-16125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773447#comment-16773447 ] Lukas Majercak commented on HADOOP-16125: - Thanks [~elgoiri]. I've added patch 004 with a small change: changing the log message in switchBindUser to only show the exception message rather than the full stack trace. > Support multiple bind users in LdapGroupsMapping > > > Key: HADOOP-16125 > URL: https://issues.apache.org/jira/browse/HADOOP-16125 > Project: Hadoop Common > Issue Type: New Feature > Components: common, security >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Major > Attachments: HADOOP-16125.001.patch, HADOOP-16125.002.patch, > HADOOP-16125.003.patch, HADOOP-16125.004.patch > > > Currently, LdapGroupsMapping supports only a single user to bind to when > connecting to LDAP. This can be problematic if such a user's password needs to > be reset. > The proposal is to support multiple such users and switch between them if > necessary; more info in GroupsMapping.md / core-default.xml in the patches. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16125) Support multiple bind users in LdapGroupsMapping
[ https://issues.apache.org/jira/browse/HADOOP-16125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Majercak updated HADOOP-16125: Attachment: HADOOP-16125.004.patch > Support multiple bind users in LdapGroupsMapping > > > Key: HADOOP-16125 > URL: https://issues.apache.org/jira/browse/HADOOP-16125 > Project: Hadoop Common > Issue Type: New Feature > Components: common, security >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Major > Attachments: HADOOP-16125.001.patch, HADOOP-16125.002.patch, > HADOOP-16125.003.patch, HADOOP-16125.004.patch > > > Currently, LdapGroupsMapping supports only a single user to bind to when > connecting to LDAP. This can be problematic if such a user's password needs to > be reset. > The proposal is to support multiple such users and switch between them if > necessary; more info in GroupsMapping.md / core-default.xml in the patches. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16126) ipc.Client.stop() may sleep too long to wait for all connections
[ https://issues.apache.org/jira/browse/HADOOP-16126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773424#comment-16773424 ] Tsz Wo Nicholas Sze commented on HADOOP-16126: -- > Why the choice of 10ms? Because 10ms works well in the test. 10ms is a long time on modern computers, so it is still far from busy-waiting. As you are already aware, we have HADOOP-16127 for a better (but more complicated) fix. The patch here is a safe, short-term fix for clusters that do not welcome big changes. > Can you tease this out as a private constant in the IPC file, just so it's > less hidden deep in the code. Will do. > ipc.Client.stop() may sleep too long to wait for all connections > > > Key: HADOOP-16126 > URL: https://issues.apache.org/jira/browse/HADOOP-16126 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze >Priority: Major > Attachments: c16126_20190219.patch > > > {code} > //Client.java > public void stop() { > ... > // wait until all connections are closed > while (!connections.isEmpty()) { > try { > Thread.sleep(100); > } catch (InterruptedException e) { > } > } > ... > } > {code} > In the code above, the sleep time is 100ms. We found that simply changing > the sleep time to 10ms could improve a Hive job running time by 10x. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
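As a self-contained model of the short-term fix with the constant hoisted out (class and constant names are illustrative, not the actual c16126 patch):
{code}
import java.util.concurrent.ConcurrentHashMap;

class StopPollingSketch {
  /** Poll interval while waiting for connections to close; was a bare 100. */
  private static final int CONNECTION_CLOSE_POLL_MILLIS = 10;

  private final ConcurrentHashMap<String, Object> connections =
      new ConcurrentHashMap<>();

  void connectionClosed(String id) {
    connections.remove(id);
  }

  void stop() {
    // wait until all connections are closed
    while (!connections.isEmpty()) {
      try {
        Thread.sleep(CONNECTION_CLOSE_POLL_MILLIS);
      } catch (InterruptedException e) {
        // unlike the quoted original, restore the interrupt and bail out
        Thread.currentThread().interrupt();
        return;
      }
    }
  }
}
{code}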
[jira] [Commented] (HADOOP-16105) WASB in secure mode does not set connectingUsingSAS
[ https://issues.apache.org/jira/browse/HADOOP-16105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773373#comment-16773373 ] Larry McCay commented on HADOOP-16105: -- [~ste...@apache.org] - This looks straightforward enough to me. +1 > WASB in secure mode does not set connectingUsingSAS > --- > > Key: HADOOP-16105 > URL: https://issues.apache.org/jira/browse/HADOOP-16105 > Project: Hadoop Common > Issue Type: Bug > Components: fs/azure >Affects Versions: 3.2.0, 3.0.3, 2.8.5, 3.1.2 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Attachments: HADOOP-16105-001.patch, HADOOP-16105-002.patch > > > If you run WASB in secure mode, it doesn't set {{connectingUsingSAS}} to > true, which can break things -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15567) Support expiry time in AdlFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-15567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773364#comment-16773364 ] Hadoop QA commented on HADOOP-15567: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 39s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 10s{color} | {color:orange} hadoop-tools/hadoop-azure-datalake: The patch generated 8 new + 0 unchanged - 0 fixed = 8 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 33s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 48s{color} | {color:green} hadoop-azure-datalake in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 49m 56s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HADOOP-15567 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929650/HADOOP-15567.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 2191af6d1a2d 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / aa3ad36 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/15951/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure-datalake.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15951/testReport/ | | Max. process+thread count | 328 (vs. ulimit of 1) | | modules | C: hadoop-tools/hadoop-azure-datalake U: hadoop-tools/hadoop-azure-datalake | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15951/console
[jira] [Created] (HADOOP-16128) Some S3A tests leak filesystem instances
Steve Loughran created HADOOP-16128: --- Summary: Some S3A tests leak filesystem instances Key: HADOOP-16128 URL: https://issues.apache.org/jira/browse/HADOOP-16128 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3, test Affects Versions: 3.1.2 Reporter: Steve Loughran There are a few S3A ITests which call FileSystem.newInstance() but don't clean up afterwards by closing the instance. This leaks instances, thread pools, etc. * ITestS3AAWSCredentialsProvider.testAnonymousProvider() * ITestS3GuardWriteBack -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
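The fix pattern for such leaks is the usual lifecycle one (an illustrative sketch, not the eventual patch):
{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class FsLifecycleSketch {
  // newInstance() bypasses the FileSystem cache, so the test owns the
  // lifecycle; try-with-resources guarantees the close that was missing.
  static void withFreshFileSystem(URI uri, Configuration conf) throws Exception {
    try (FileSystem fs = FileSystem.newInstance(uri, conf)) {
      // ... exercise fs in the test body ...
    } // close() releases thread pools and other resources here
  }
}
{code}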
[jira] [Commented] (HADOOP-15567) Support expiry time in AdlFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-15567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773306#comment-16773306 ] Íñigo Goiri commented on HADOOP-15567: -- Thanks [~huanbang1993] for the work. Minor comments: * Can we make {{EXPIRY_TIME_DURATION = TimeUnit.DAYS.toMillis(7)}}? * Can you make the constants (EPS and EXPIRY_TIME_DURATION) final? * Not sure we need EPS; by the way, assertEquals has a variant that checks with a tolerance: assertEquals(EXPIRY_TIME_DURATION, diff, 500L); * We should make {{assertPathExist()}} a little friendlier and report which file does not exist in the message. * When expecting exceptions, we should use LambdaTestUtils or exception rules. > Support expiry time in AdlFileSystem > > > Key: HADOOP-15567 > URL: https://issues.apache.org/jira/browse/HADOOP-15567 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Íñigo Goiri >Assignee: Anbang Hu >Priority: Major > Attachments: HADOOP-15567.000.patch, HADOOP-15567.001.patch, > live-test-result.png > > > ADLS supports setting an expiration time for a file. > We can leverage Xattr in FileSystem to set the expiration time. > This could use the same xattr as HDFS-6382 and the interface from HDFS-6525. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
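A sketch of what a couple of these suggestions could look like in the test (hypothetical helper names, not the actual patch):
{code}
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.io.IOException;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ExpiryAssertSketch {
  // final constant built from TimeUnit, as requested above
  private static final long EXPIRY_TIME_DURATION = TimeUnit.DAYS.toMillis(7);

  static void assertExpiryDiff(long diff) {
    // delta-based assertEquals replaces a hand-rolled EPS comparison
    assertEquals(EXPIRY_TIME_DURATION, diff, 500L);
  }

  static void assertPathExists(FileSystem fs, Path p) throws IOException {
    // a friendlier assert: the failure message names the missing path
    assertTrue("Path does not exist: " + p, fs.exists(p));
  }
}
{code}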
[jira] [Commented] (HADOOP-15999) [s3a] Better support for out-of-band operations
[ https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773301#comment-16773301 ] Steve Loughran commented on HADOOP-15999: - BTW I like assert statements to always include enough information in messages to be able to make sense of the failure without having to look into the source code to find the assert and then guess what could be up. As an example, going from: {code} Assert.assertNotEquals(rawFileStatus.getModificationTime(), guardedFileStatus.getModificationTime()); {code} to {code} Assert.assertNotEquals("Modification time of raw matches that of guarded \nraw=" + rawFileStatus + " guarded=" + guardedFileStatus, rawFileStatus.getModificationTime(), guardedFileStatus.getModificationTime()); {code} Just imagine that you've seen a Jenkins build fail, and all you have is that assertion text. What information does it need to have to help you understand what has gone wrong? > [s3a] Better support for out-of-band operations > --- > > Key: HADOOP-15999 > URL: https://issues.apache.org/jira/browse/HADOOP-15999 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.1.0 >Reporter: Sean Mackrory >Assignee: Gabor Bota >Priority: Major > Attachments: HADOOP-15999.001.patch, HADOOP-15999.002.patch, > HADOOP-15999.003.patch, HADOOP-15999.004.patch, out-of-band-operations.patch > > > S3Guard was initially done on the premise that a new MetadataStore would be > the source of truth, and that it wouldn't provide guarantees if updates were > done without using S3Guard. > I've been seeing increased demand for better support for scenarios where > operations are done on the data that can't reasonably be done with S3Guard > involved. For example: > * A file is deleted using S3Guard, and replaced by some other tool. S3Guard > can't tell the difference between the new file and delete / list > inconsistency and continues to treat the file as deleted. > * An S3Guard-ed file is overwritten by a longer file by some other tool. When > reading the file, only the length of the original file is read. > We could possibly have smarter behavior here by querying both S3 and the > MetadataStore (even in cases where we may currently only query the > MetadataStore in getFileStatus) and use whichever one has the higher modified > time. > This kills the performance boost we currently get in some workloads with the > short-circuited getFileStatus, but we could keep it with authoritative mode > which should give a larger performance boost. At least we'd get more > correctness without authoritative mode and a clear declaration of when we can > make the assumptions required to short-circuit the process. If we can't > consider S3Guard the source of truth, we need to defer to S3 more. > We'd need to be extra sure of any locality / time zone issues if we start > relying on mod_time more directly, but currently we're tracking the > modification time as returned by S3 anyway. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15999) [s3a] Better support for out-of-band operations
[ https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773288#comment-16773288 ] Steve Loughran commented on HADOOP-15999: - -1 from {{mvn verify -Dtest=moo -Ds3guard -Ddynamodb -Dscale -Dit.test=ITestS3GuardOutOfBandOperations }} {code} [INFO] --- [INFO] T E S T S [INFO] --- [INFO] Running org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations [ERROR] Tests run: 12, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 29.064 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations [ERROR] testListingLongerLengthOverwriteAuthoritative(org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations) Time elapsed: 1.562 s <<< FAILURE! java.lang.AssertionError: Values should be different. Actual: 1550689095000 at org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations.overwriteFileInListing(ITestS3GuardOutOfBandOperations.java:310) at org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations.testListingLongerLengthOverwriteAuthoritative(ITestS3GuardOutOfBandOperations.java:183) [INFO] [INFO] Results: [INFO] [ERROR] Failures: [ERROR] ITestS3GuardOutOfBandOperations.testListingLongerLengthOverwriteAuthoritative:183->overwriteFileInListing:310->Assert.assertNotEquals:211->Assert.assertNotEquals:199->Assert.failEquals:185->Assert.fail:88 Values should be different. Actual: 1550689095000 [INFO] {code} > [s3a] Better support for out-of-band operations > --- > > Key: HADOOP-15999 > URL: https://issues.apache.org/jira/browse/HADOOP-15999 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.1.0 >Reporter: Sean Mackrory >Assignee: Gabor Bota >Priority: Major > Attachments: HADOOP-15999.001.patch, HADOOP-15999.002.patch, > HADOOP-15999.003.patch, HADOOP-15999.004.patch, out-of-band-operations.patch > > > S3Guard was initially done on the premise that a new MetadataStore would be > the source of truth, and that it wouldn't provide guarantees if updates were > done without using S3Guard. > I've been seeing increased demand for better support for scenarios where > operations are done on the data that can't reasonably be done with S3Guard > involved. For example: > * A file is deleted using S3Guard, and replaced by some other tool. S3Guard > can't tell the difference between the new file and delete / list > inconsistency and continues to treat the file as deleted. > * An S3Guard-ed file is overwritten by a longer file by some other tool. When > reading the file, only the length of the original file is read. > We could possibly have smarter behavior here by querying both S3 and the > MetadataStore (even in cases where we may currently only query the > MetadataStore in getFileStatus) and use whichever one has the higher modified > time. > This kills the performance boost we currently get in some workloads with the > short-circuited getFileStatus, but we could keep it with authoritative mode > which should give a larger performance boost. At least we'd get more > correctness without authoritative mode and a clear declaration of when we can > make the assumptions required to short-circuit the process. If we can't > consider S3Guard the source of truth, we need to defer to S3 more. > We'd need to be extra sure of any locality / time zone issues if we start > relying on mod_time more directly, but currently we're tracking the > modification time as returned by S3 anyway. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15999) [s3a] Better support for out-of-band operations
[ https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773248#comment-16773248 ] Hadoop QA commented on HADOOP-15999: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 0s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 23s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 13s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 45s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 56m 27s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HADOOP-15999 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12959468/HADOOP-15999.004.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 15b3c65cf169 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / aa3ad36 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15950/testReport/ | | Max. process+thread count | 306 (vs. ulimit of 1) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15950/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > [s3a] Better support for out-of-band operations > --- > > Key: HADOOP-15999 > URL:
[jira] [Commented] (HADOOP-15999) [s3a] Better support for out-of-band operations
[ https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773201#comment-16773201 ] Gabor Bota commented on HADOOP-15999: - Thanks [~ste...@apache.org]. This was an issue when using LocalMS. I've fixed the issue in the test by adding a line to {{setAllowAuthoritativeInFs}}: {{realMs = guardedFs.getMetadataStore();}} Tests are now running clean against Ireland with dynamo, local, and none MSs. > [s3a] Better support for out-of-band operations > --- > > Key: HADOOP-15999 > URL: https://issues.apache.org/jira/browse/HADOOP-15999 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.1.0 >Reporter: Sean Mackrory >Assignee: Gabor Bota >Priority: Major > Attachments: HADOOP-15999.001.patch, HADOOP-15999.002.patch, > HADOOP-15999.003.patch, HADOOP-15999.004.patch, out-of-band-operations.patch > > > S3Guard was initially done on the premise that a new MetadataStore would be > the source of truth, and that it wouldn't provide guarantees if updates were > done without using S3Guard. > I've been seeing increased demand for better support for scenarios where > operations are done on the data that can't reasonably be done with S3Guard > involved. For example: > * A file is deleted using S3Guard, and replaced by some other tool. S3Guard > can't tell the difference between the new file and delete / list > inconsistency and continues to treat the file as deleted. > * An S3Guard-ed file is overwritten by a longer file by some other tool. When > reading the file, only the length of the original file is read. > We could possibly have smarter behavior here by querying both S3 and the > MetadataStore (even in cases where we may currently only query the > MetadataStore in getFileStatus) and use whichever one has the higher modified > time. > This kills the performance boost we currently get in some workloads with the > short-circuited getFileStatus, but we could keep it with authoritative mode > which should give a larger performance boost. At least we'd get more > correctness without authoritative mode and a clear declaration of when we can > make the assumptions required to short-circuit the process. If we can't > consider S3Guard the source of truth, we need to defer to S3 more. > We'd need to be extra sure of any locality / time zone issues if we start > relying on mod_time more directly, but currently we're tracking the > modification time as returned by S3 anyway. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16122) Re-login from keytab for multiple UGI will use the same and incorrect keytabPrincipal
[ https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773229#comment-16773229 ] Eric Yang commented on HADOOP-16122: [~tobe] Access control must be done on the server side to keep the system secure. ProxyUser can map to a group of users. As long as administrators manage the membership of users, there is no configuration change required for long-term maintenance of Hadoop. The static keytab and principal ensure that the client-side JVM does not have the ability to switch user without the consent of server-side access control. If the code is modified to allow the client-side JVM to switch user without server-side authorization, the system has no security: it would allow any MapReduce task or YARN container to become any other user. UserGroupInformation makes use of kinit to log in the user and of the ticket cache to determine the expiration time of the current Kerberos session. The ticket cache file maps to the actual Unix user who runs the process, and its content is switched to the most recently authenticated TGT. The multi-keytab login proposal works against security checks that are placed in the OS (file system permissions, Kerberos ticket cache filename format, etc). Even if a developer managed to break every piece of the security check, the end result would be no security. I am sorry to say that this proposal will not be accepted by the Hadoop community. > Re-login from keytab for multiple UGI will use the same and incorrect > keytabPrincipal > - > > Key: HADOOP-16122 > URL: https://issues.apache.org/jira/browse/HADOOP-16122 > Project: Hadoop Common > Issue Type: Bug > Components: auth >Reporter: chendihao >Priority: Major > > In our scenario, we have a service that allows multiple users to access HDFS > with their own keytabs. The users use different Hadoop users and permissions to > access the HDFS files. This service runs multi-threaded and creates an > independent UGI object for each user, using its own UGI to create a Hadoop > FileSystem object to read/write HDFS. > > Since we have multiple Hadoop users in the same process, we have to use > `loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. > `loginUserFromKeytabAndReturnUGI` will not do the re-login automatically, so > we have to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` > before the Kerberos ticket expires. > > The issue is that `reloginFromKeytab` will always re-login with the same and > incorrect keytab instead of the one from the expected UGI object. Because of > this issue, we can only support multiple Hadoop users logging in with their own > keytabs the first time, but not re-logging in when the tickets expire. The logic > of login and re-login is slightly different, especially for updating the > global static properties, and the bug may be in that implementation. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
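For context, the multi-UGI pattern the report describes looks roughly like this (an illustrative sketch: the class name, principal, and paths are hypothetical; the UGI and FileSystem calls are real APIs):
{code}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class PerUserHdfsClient {
  static void listAsUser(String principal, String keytab,
      Configuration conf, String dir) throws Exception {
    // One independent UGI per user, as described in the report.
    UserGroupInformation ugi =
        UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab);
    // Callers must trigger re-login themselves for UGIs created this way;
    // the reported bug is that this can pick up another UGI's keytab.
    ugi.checkTGTAndReloginFromKeytab();
    ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
      // newInstance avoids sharing a cached FileSystem across users
      try (FileSystem fs = FileSystem.newInstance(conf)) {
        fs.listStatus(new Path(dir));
      }
      return null;
    });
  }
}
{code}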
[jira] [Updated] (HADOOP-15999) [s3a] Better support for out-of-band operations
[ https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota updated HADOOP-15999: Attachment: HADOOP-15999.004.patch > [s3a] Better support for out-of-band operations > --- > > Key: HADOOP-15999 > URL: https://issues.apache.org/jira/browse/HADOOP-15999 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.1.0 >Reporter: Sean Mackrory >Assignee: Gabor Bota >Priority: Major > Attachments: HADOOP-15999.001.patch, HADOOP-15999.002.patch, > HADOOP-15999.003.patch, HADOOP-15999.004.patch, out-of-band-operations.patch > > > S3Guard was initially done on the premise that a new MetadataStore would be > the source of truth, and that it wouldn't provide guarantees if updates were > done without using S3Guard. > I've been seeing increased demand for better support for scenarios where > operations are done on the data that can't reasonably be done with S3Guard > involved. For example: > * A file is deleted using S3Guard, and replaced by some other tool. S3Guard > can't tell the difference between the new file and delete / list > inconsistency and continues to treat the file as deleted. > * An S3Guard-ed file is overwritten by a longer file by some other tool. When > reading the file, only the length of the original file is read. > We could possibly have smarter behavior here by querying both S3 and the > MetadataStore (even in cases where we may currently only query the > MetadataStore in getFileStatus) and use whichever one has the higher modified > time. > This kills the performance boost we currently get in some workloads with the > short-circuited getFileStatus, but we could keep it with authoritative mode > which should give a larger performance boost. At least we'd get more > correctness without authoritative mode and a clear declaration of when we can > make the assumptions required to short-circuit the process. If we can't > consider S3Guard the source of truth, we need to defer to S3 more. > We'd need to be extra sure of any locality / time zone issues if we start > relying on mod_time more directly, but currently we're tracking the > modification time as returned by S3 anyway. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15625) S3A input stream to use etags to detect changed source files
[ https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota updated HADOOP-15625: Attachment: (was: HADOOP-15999.004.patch) > S3A input stream to use etags to detect changed source files > > > Key: HADOOP-15625 > URL: https://issues.apache.org/jira/browse/HADOOP-15625 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula >Priority: Major > Attachments: HADOOP-15625-001.patch, HADOOP-15625-002.patch, > HADOOP-15625-003.patch > > > S3A input stream doesn't handle changing source files any better than the > other cloud store connectors. Specifically: it doesn't notice it has > changed, caches the length from startup, and whenever a seek triggers a new > GET, you may get one of: old data, new data, and even perhaps go from new > data to old data due to eventual consistency. > We can't do anything to stop this, but we could detect changes by > # caching the etag of the first HEAD/GET (we don't get that HEAD on open with > S3Guard, BTW) > # on future GET requests, verifying the etag of the response > # raising an IOE if the remote file changed during the read. > It's a more dramatic failure, but it stops changes silently corrupting things. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15625) S3A input stream to use etags to detect changed source files
[ https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota updated HADOOP-15625: Attachment: HADOOP-15999.004.patch > S3A input stream to use etags to detect changed source files > > > Key: HADOOP-15625 > URL: https://issues.apache.org/jira/browse/HADOOP-15625 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula >Priority: Major > Attachments: HADOOP-15625-001.patch, HADOOP-15625-002.patch, > HADOOP-15625-003.patch, HADOOP-15999.004.patch > > > S3A input stream doesn't handle changing source files any better than the > other cloud store connectors. Specifically: it doesn't notice it has > changed, caches the length from startup, and whenever a seek triggers a new > GET, you may get one of: old data, new data, and even perhaps go from new > data to old data due to eventual consistency. > We can't do anything to stop this, but we could detect changes by > # caching the etag of the first HEAD/GET (we don't get that HEAD on open with > S3Guard, BTW) > # on future GET requests, verifying the etag of the response > # raising an IOE if the remote file changed during the read. > It's a more dramatic failure, but it stops changes silently corrupting things. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] elek opened a new pull request #505: HDDS-1145. Add optional web server to the Ozone freon test tool
elek opened a new pull request #505: HDDS-1145. Add optional web server to the Ozone freon test tool URL: https://github.com/apache/hadoop/pull/505 Recently we improved the default HttpServer to support Prometheus monitoring and Java profiling. It would be very useful to enable the same options for freon testing: 1. We need a simple way to profile freon and investigate problems. 2. Long-running freon runs should be monitored. We can create a new optional FreonHttpServer which includes all the required servlets by default. See: https://issues.apache.org/jira/browse/HDDS-1145 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16120) Lazily allocate KMS delegation tokens
[ https://issues.apache.org/jira/browse/HADOOP-16120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773161#comment-16773161 ] Ruslan Dautkhanov commented on HADOOP-16120: Thanks, guys, for explaining that it's not possible with the current Hadoop DT architecture. > Lazily allocate KMS delegation tokens > - > > Key: HADOOP-16120 > URL: https://issues.apache.org/jira/browse/HADOOP-16120 > Project: Hadoop Common > Issue Type: Improvement > Components: kms, security >Affects Versions: 2.8.5, 3.1.2 >Reporter: Ruslan Dautkhanov >Priority: Major > > We noticed that HDFS clients talk to KMS even when they try to access > non-encrypted databases. Is there a way to make HDFS clients talk to KMS > servers *only* when they need access to encrypted data? Since we will be > encrypting only one database (and 50+ other much more critical production > databases will not be encrypted), in case KMS is down for maintenance or > for some other reason, we want to limit the outage to encrypted data only. > In other words, it would be great if KMS delegation tokens were allocated > lazily, on the first request to encrypted data. > This could be a non-default option to lazily allocate KMS delegation tokens, > to improve availability of non-encrypted data. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15999) [s3a] Better support for out-of-band operations
[ https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773104#comment-16773104 ] Steve Loughran commented on HADOOP-15999: - -1 ran the tests myself & got some NPEs {code} [INFO] Tests run: 45, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 88.128 s - in org.apache.hadoop.fs.s3a.select.ITestS3Select [INFO] [INFO] Results: [INFO] [ERROR] Errors: [ERROR] ITestS3GuardOutOfBandOperations.testListingDeleteAuthoritative:196->deleteFileInListing:342 NullPointer [ERROR] ITestS3GuardOutOfBandOperations.testListingDeleteNotAuthoritative:203->deleteFileInListing:344 NullPointer [ERROR] ITestS3GuardOutOfBandOperations.testListingLongerLengthOverwriteAuthoritative:183->overwriteFileInListing:287 NullPointer [ERROR] ITestS3GuardOutOfBandOperations.testListingLongerLengthOverwriteNotAuthoritative:190->overwriteFileInListing:289 NullPointer [ERROR] ITestS3GuardOutOfBandOperations.testListingSameLengthOverwriteAuthoritative:170->overwriteFileInListing:287 NullPointer [ERROR] ITestS3GuardOutOfBandOperations.testListingSameLengthOverwriteNotAuthoritative:177->overwriteFileInListing:289 NullPointer [INFO] [ERROR] Tests run: 725, Failures: 0, Errors: 6, Skipped: 7 {code} full detail {code} [ERROR] testListingLongerLengthOverwriteNotAuthoritative(org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations) Time elapsed: 3.34 s <<< ERROR! java.lang.NullPointerException at org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations.overwriteFileInListing(ITestS3GuardOutOfBandOperations.java:289) at org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations.testListingLongerLengthOverwriteNotAuthoritative(ITestS3GuardOutOfBandOperations.java:190) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:745) [ERROR] testListingSameLengthOverwriteNotAuthoritative(org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations) Time elapsed: 1.694 s <<< ERROR! 
java.lang.NullPointerException at org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations.overwriteFileInListing(ITestS3GuardOutOfBandOperations.java:289) at org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations.testListingSameLengthOverwriteNotAuthoritative(ITestS3GuardOutOfBandOperations.java:177) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:745) [ERROR] testListingDeleteNotAuthoritative(org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations) Time ela
[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files
[ https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773081#comment-16773081 ] Ben Roling commented on HADOOP-15625: - Thanks Steve. I'll implement the configuration and testing as you suggest, although perhaps {{etag}} and {{client}} should be named {{etag-server}} and {{etag-client}}? That way, if object version support is added, {{client}} won't be ambiguous (client-side etag check or client-side versionId check). Can we go back to the exception type discussion briefly? Do you definitely want a subclass of EOFException for this? As I said in previous comments, it seems somewhat difficult to ensure such an exception doesn't get swallowed and turned into a normal -1 response in some read() cases. > S3A input stream to use etags to detect changed source files > > > Key: HADOOP-15625 > URL: https://issues.apache.org/jira/browse/HADOOP-15625 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula >Priority: Major > Attachments: HADOOP-15625-001.patch, HADOOP-15625-002.patch, > HADOOP-15625-003.patch > > > S3A input stream doesn't handle changing source files any better than the > other cloud store connectors. Specifically: it doesn't notice the file has > changed, caches the length from startup, and whenever a seek triggers a new > GET, you may get one of: old data, new data, and even perhaps go from new > data to old data due to eventual consistency. > We can't do anything to stop this, but we could detect changes by > # caching the etag of the first HEAD/GET (we don't get that HEAD on open with > S3Guard, BTW) > # on future GET requests, verifying the etag of the response > # raising an IOE if the remote file changed during the read. > It's a more dramatic failure, but it stops changes silently corrupting things. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
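For illustration, a minimal sketch of the exception subclass under discussion; the class name and fields here are assumptions rather than the committed API. Subclassing EOFException keeps existing end-of-stream handling working, while letting new callers catch the changed-file case explicitly:
{code}
import java.io.EOFException;

/**
 * Sketch only: raised when the etag (or, later, the versionId) of an S3
 * object changes mid-read. Extends EOFException so existing callers that
 * treat EOF as end-of-data still terminate cleanly, while new callers can
 * catch this type and surface the inconsistency.
 */
public class RemoteFileChangedException extends EOFException {

  private final String path;

  public RemoteFileChangedException(String path, String operation,
      String message) {
    super(operation + " on " + path + ": " + message);
    this.path = path;
  }

  public String getPath() {
    return path;
  }
}
{code}
Ben's concern then becomes making sure the read() paths rethrow this type rather than mapping it to a plain -1 return.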
[jira] [Commented] (HADOOP-16104) Wasb tests to downgrade to skip when test a/c is namespace enabled
[ https://issues.apache.org/jira/browse/HADOOP-16104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773037#comment-16773037 ] Hudson commented on HADOOP-16104: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16006 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/16006/]) HADOOP-16104. Wasb tests to downgrade to skip when test a/c is namespace (iwasakims: rev aa3ad3660506382884324c4b8997973f5a68e29a) * (edit) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/contract/NativeAzureFileSystemContract.java * (edit) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/AzureBlobStorageTestAccount.java * (edit) hadoop-tools/hadoop-azure/src/test/resources/azure-test.xml * (edit) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/integration/AzureTestUtils.java * (edit) hadoop-tools/hadoop-azure/src/test/resources/wasb.xml > Wasb tests to downgrade to skip when test a/c is namespace enabled > -- > > Key: HADOOP-16104 > URL: https://issues.apache.org/jira/browse/HADOOP-16104 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure, test >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Masatake Iwasaki >Priority: Major > Attachments: HADOOP-16104.001.patch > > > When you run the abfs tests with a namespace-enabled account, all the wasb > tests fail with "don't yet work with namespace-enabled accounts". This should be > downgraded to a test skip, somehow -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16104) Wasb tests to downgrade to skip when test a/c is namespace enabled
[ https://issues.apache.org/jira/browse/HADOOP-16104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Masatake Iwasaki updated HADOOP-16104: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.2.1 3.3.0 Status: Resolved (was: Patch Available) Committed this. > Wasb tests to downgrade to skip when test a/c is namespace enabled > -- > > Key: HADOOP-16104 > URL: https://issues.apache.org/jira/browse/HADOOP-16104 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure, test >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Masatake Iwasaki >Priority: Major > Fix For: 3.3.0, 3.2.1 > > Attachments: HADOOP-16104.001.patch > > > When you run the abfs tests with a namespace-enabled account, all the wasb > tests fail with "don't yet work with namespace-enabled accounts". This should be > downgraded to a test skip, somehow -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15999) [s3a] Better support for out-of-band operations
[ https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773023#comment-16773023 ] Hadoop QA commented on HADOOP-15999: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 14s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 59s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 32s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 50m 12s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HADOOP-15999 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12959424/HADOOP-15999.003.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux defa0984c3df 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 41e18fe | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15949/testReport/ | | Max. process+thread count | 446 (vs. ulimit of 1) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15949/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > [s3a] Better support for out-of-band operations > --- > > Key: HADOOP-15999 > URL: https:/
[jira] [Commented] (HADOOP-15958) Revisiting LICENSE and NOTICE files
[ https://issues.apache.org/jira/browse/HADOOP-15958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773019#comment-16773019 ] Hadoop QA commented on HADOOP-15958: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 17s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 17m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 30s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 7m 1s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 15m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 1s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 12s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 4s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 18s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 36s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}185m 36s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 1m 4s{color} | {color:red} The patch generated 28 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}342m 24s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync | | | hadoop.hdfs.web.TestWebHdfsTimeouts | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HADOOP-15958 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12959392/HADOOP-15958-002.patch | | Optional Tests | dupname asflicense shellcheck shelldocs compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux eccbc1d98f44 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 1d30fd9 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | shellcheck | v0.4.6 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/15948/artifact/out/patch-unit-root.txt | | Te
[jira] [Commented] (HADOOP-16120) Lazily allocate KMS delegation tokens
[ https://issues.apache.org/jira/browse/HADOOP-16120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773016#comment-16773016 ] Steve Loughran commented on HADOOP-16120: - DTs are only collected on application launch, e.g. by MapReduce, distcp, or spark-submit, and then marshalled to the far end. Once issued, they may be refreshed, but the rest of the running app (which doesn't have Kerberos credentials, after all) is not only not going to ask for them, it's never going to have the ability to ask for them. I think this will have to be a WONTFIX. Sorry. > Lazily allocate KMS delegation tokens > - > > Key: HADOOP-16120 > URL: https://issues.apache.org/jira/browse/HADOOP-16120 > Project: Hadoop Common > Issue Type: Improvement > Components: kms, security >Affects Versions: 2.8.5, 3.1.2 >Reporter: Ruslan Dautkhanov >Priority: Major > > We noticed that HDFS clients talk to KMS even when they try to access > non-encrypted databases. Is there a way to make HDFS clients talk to KMS > servers *only* when they need access to encrypted data? Since we will be > encrypting only one database (and 50+ other, much more critical, production > databases will not be encrypted), if KMS is down for maintenance or > for some other reason, we want to limit the outage to encrypted data only. > In other words, it would be great if KMS delegation tokens were allocated > lazily - on the first request to encrypted data. > This could be a non-default option to lazily allocate KMS delegation tokens, > to improve the availability of non-encrypted data. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-16120) Lazily allocate KMS delegation tokens
[ https://issues.apache.org/jira/browse/HADOOP-16120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-16120. - Resolution: Won't Fix > Lazily allocate KMS delegation tokens > - > > Key: HADOOP-16120 > URL: https://issues.apache.org/jira/browse/HADOOP-16120 > Project: Hadoop Common > Issue Type: Improvement > Components: kms, security >Affects Versions: 2.8.5, 3.1.2 >Reporter: Ruslan Dautkhanov >Priority: Major > > We noticed that HDFS clients talk to KMS even when they try to access > non-encrypted databases. Is there a way to make HDFS clients talk to KMS > servers *only* when they need access to encrypted data? Since we will be > encrypting only one database (and 50+ other, much more critical, production > databases will not be encrypted), if KMS is down for maintenance or > for some other reason, we want to limit the outage to encrypted data only. > In other words, it would be great if KMS delegation tokens were allocated > lazily - on the first request to encrypted data. > This could be a non-default option to lazily allocate KMS delegation tokens, > to improve the availability of non-encrypted data. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16123) Lack of protoc
[ https://issues.apache.org/jira/browse/HADOOP-16123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16123: Component/s: (was: common) build > Lack of protoc > --- > > Key: HADOOP-16123 > URL: https://issues.apache.org/jira/browse/HADOOP-16123 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.3.0 >Reporter: lqjacklee >Assignee: lqjacklee >Priority: Minor > > While building the source code, I follow the steps below: > > 1. run the docker daemon > 2. ./start-build-env.sh > 3. sudo mvn clean install -DskipTests -Pnative > The build then fails with: > [ERROR] Failed to execute goal > org.apache.hadoop:hadoop-maven-plugins:3.3.0-SNAPSHOT:protoc (compile-protoc) > on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: > 'protoc --version' did not return a version -> > [Help 1] > However, executing the command whereis protoc shows: > liu@a65d187055f9:~/hadoop$ whereis protoc > protoc: /opt/protobuf/bin/protoc > > The PATH value: > /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/cmake/bin:/opt/protobuf/bin > > liu@a65d187055f9:~/hadoop$ protoc --version > libprotoc 2.5.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-11223) Offer a read-only conf alternative to new Configuration()
[ https://issues.apache.org/jira/browse/HADOOP-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772966#comment-16772966 ] Steve Loughran commented on HADOOP-11223: - bq. Would it simplify things making this class package private and then adding a static method to Configuration to create the unmodifiable object? Yes, but we should still mark it @Unstable just to warn people that we are evolving this; in fact, the immutability itself may change. > Offer a read-only conf alternative to new Configuration() > - > > Key: HADOOP-11223 > URL: https://issues.apache.org/jira/browse/HADOOP-11223 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Reporter: Gopal V >Assignee: Michael Miller >Priority: Major > Labels: Performance > Attachments: HADOOP-11223.001.patch, HADOOP-11223.002.patch, > HADOOP-11223.003.patch > > > new Configuration() is called from several static blocks across Hadoop. > This is incredibly inefficient, since each one of those involves primarily > XML parsing at a point where the JIT won't be triggered & interpreter mode is > essentially forced on the JVM. > The alternate solution would be to offer a {{Configuration::getDefault()}} > alternative which disallows any modifications. > At the very least, such a method would need to be called from > # org.apache.hadoop.io.nativeio.NativeIO::<clinit>() > # org.apache.hadoop.security.SecurityUtil::<clinit>() > # org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider::<clinit>() -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
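As a rough illustration of the shape being discussed, a sketch combining the package-private class with a static factory. None of these names are the committed API, and only one mutator is shown:
{code}
import org.apache.hadoop.conf.Configuration;

/**
 * Sketch only: a package-private read-only view of Configuration, handed
 * out through a static factory so static initializers share one parsed
 * instance instead of each paying the XML-parsing cost of new Configuration().
 */
class UnmodifiableConfiguration extends Configuration {

  private static volatile Configuration defaultInstance;

  UnmodifiableConfiguration(Configuration base) {
    super(base);  // Configuration provides a copy constructor
  }

  /** The getDefault() accessor proposed in the description. */
  static Configuration getDefault() {
    if (defaultInstance == null) {
      synchronized (UnmodifiableConfiguration.class) {
        if (defaultInstance == null) {
          defaultInstance = new UnmodifiableConfiguration(new Configuration());
        }
      }
    }
    return defaultInstance;
  }

  @Override
  public void set(String name, String value, String source) {
    // set(String, String) funnels through here; unset(), addResource()
    // and the other mutators would need the same treatment.
    throw new UnsupportedOperationException(
        "read-only configuration: cannot set " + name);
  }
}
{code}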
[jira] [Commented] (HADOOP-16123) Lack of protoc in docker
[ https://issues.apache.org/jira/browse/HADOOP-16123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772971#comment-16772971 ] Steve Loughran commented on HADOOP-16123: - Jack, no. You are going to have to work with the common-dev list to get the build together. I'm not accepting issues assigned to me unless it's something I've clearly broken. > Lack of protoc in docker > > > Key: HADOOP-16123 > URL: https://issues.apache.org/jira/browse/HADOOP-16123 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.3.0 >Reporter: lqjacklee >Assignee: lqjacklee >Priority: Minor > > While building the source code, I follow the steps below: > > 1. run the docker daemon > 2. ./start-build-env.sh > 3. sudo mvn clean install -DskipTests -Pnative > The build then fails with: > [ERROR] Failed to execute goal > org.apache.hadoop:hadoop-maven-plugins:3.3.0-SNAPSHOT:protoc (compile-protoc) > on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: > 'protoc --version' did not return a version -> > [Help 1] > However, executing the command whereis protoc shows: > liu@a65d187055f9:~/hadoop$ whereis protoc > protoc: /opt/protobuf/bin/protoc > > The PATH value: > /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/cmake/bin:/opt/protobuf/bin > > liu@a65d187055f9:~/hadoop$ protoc --version > libprotoc 2.5.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-16123) Lack of protoc
[ https://issues.apache.org/jira/browse/HADOOP-16123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-16123: --- Assignee: lqjacklee (was: Steve Loughran) > Lack of protoc > --- > > Key: HADOOP-16123 > URL: https://issues.apache.org/jira/browse/HADOOP-16123 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 3.3.0 >Reporter: lqjacklee >Assignee: lqjacklee >Priority: Minor > > While building the source code, I follow the steps below: > > 1. run the docker daemon > 2. ./start-build-env.sh > 3. sudo mvn clean install -DskipTests -Pnative > The build then fails with: > [ERROR] Failed to execute goal > org.apache.hadoop:hadoop-maven-plugins:3.3.0-SNAPSHOT:protoc (compile-protoc) > on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: > 'protoc --version' did not return a version -> > [Help 1] > However, executing the command whereis protoc shows: > liu@a65d187055f9:~/hadoop$ whereis protoc > protoc: /opt/protobuf/bin/protoc > > The PATH value: > /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/cmake/bin:/opt/protobuf/bin > > liu@a65d187055f9:~/hadoop$ protoc --version > libprotoc 2.5.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16123) Lack of protoc in docker
[ https://issues.apache.org/jira/browse/HADOOP-16123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16123: Summary: Lack of protoc in docker (was: Lack of protoc ) > Lack of protoc in docker > > > Key: HADOOP-16123 > URL: https://issues.apache.org/jira/browse/HADOOP-16123 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.3.0 >Reporter: lqjacklee >Assignee: lqjacklee >Priority: Minor > > While building the source code, I follow the steps below: > > 1. run the docker daemon > 2. ./start-build-env.sh > 3. sudo mvn clean install -DskipTests -Pnative > The build then fails with: > [ERROR] Failed to execute goal > org.apache.hadoop:hadoop-maven-plugins:3.3.0-SNAPSHOT:protoc (compile-protoc) > on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: > 'protoc --version' did not return a version -> > [Help 1] > However, executing the command whereis protoc shows: > liu@a65d187055f9:~/hadoop$ whereis protoc > protoc: /opt/protobuf/bin/protoc > > The PATH value: > /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/cmake/bin:/opt/protobuf/bin > > liu@a65d187055f9:~/hadoop$ protoc --version > libprotoc 2.5.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files
[ https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772965#comment-16772965 ] Steve Loughran commented on HADOOP-15625: - Third-party stores are always somewhat troublesome: if you look at the tests, we always provide a way to turn off things (encryption, sessions, tokens, &c) we assume aren't there. You tend to get a few different kinds of store: * Fully AWS-compatible ones from people who put in the effort to help test (kudos to WDC here) * Ones which are fairly complete, but with the odd corner case (If-Modified-Since etc). A good one: some don't handle a ranged GET of bytes 0-0, which we naively ask for on a zero-byte file. We should fix that on our side for performance alone. * Work-in-progress ones which should be using our client as part of their test suite (example: Ozone's S3 adapter, which is still rounding out stuff like multipart uploads). I think we should actually have a configurable change-detection policy here, with a property like fs.s3a.change.detection: * {{none}}: no checks. Turn on if it's a third-party store or some problem with it is surfacing * {{etag}}: server side, with If-Modified-Since * {{client}}: client-side etag check * {{warn}}: client-side etag check, with a warning instead of a failure. Then if versioning support is added, a new option could be added. Yes, this complicates testing. Imagine a new parameterized test which would be skipped entirely if the base test configuration for the test store was set to "none". Looking at {{ITestS3AMiscOperations}}, we have a test there (testChecksumLengthPastEOF) which relies on the checksum being non-null, hence the etag. And nobody has complained *yet*. But we do have those checksums disabled by default as they broke distcp (HADOOP-15297), so it may not have surfaced in the wild yet. It's actually that If-Modified-Since check which I worry about: even though it's part of the HTTP spec, I can imagine some S3 implementation not doing it. A client-side variant would allow the checks to be applied, but softly. The other reason I like "warn" is that it shows how we could downgrade handling of S3Guard inconsistencies to logging issues rather than failing. > S3A input stream to use etags to detect changed source files > > > Key: HADOOP-15625 > URL: https://issues.apache.org/jira/browse/HADOOP-15625 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula >Priority: Major > Attachments: HADOOP-15625-001.patch, HADOOP-15625-002.patch, > HADOOP-15625-003.patch > > > S3A input stream doesn't handle changing source files any better than the > other cloud store connectors. Specifically: it doesn't notice the file has > changed, caches the length from startup, and whenever a seek triggers a new > GET, you may get one of: old data, new data, and even perhaps go from new > data to old data due to eventual consistency. > We can't do anything to stop this, but we could detect changes by > # caching the etag of the first HEAD/GET (we don't get that HEAD on open with > S3Guard, BTW) > # on future GET requests, verifying the etag of the response > # raising an IOE if the remote file changed during the read. > It's a more dramatic failure, but it stops changes silently corrupting things. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
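To make the policy list concrete, a sketch of how the four values might be modelled and read from configuration. The property name comes from the comment above; the enum, the parsing helper and the "etag" default are assumptions:
{code}
import java.util.Locale;
import org.apache.hadoop.conf.Configuration;

/** Sketch: the four fs.s3a.change.detection policies proposed above. */
enum ChangeDetectionPolicy {
  NONE,    // no checks: for third-party stores with known gaps
  ETAG,    // server side: conditional GET against the recorded etag
  CLIENT,  // client side: compare the etag of every response
  WARN;    // client-side compare, but log a warning instead of failing

  static final String CHANGE_DETECTION = "fs.s3a.change.detection";

  static ChangeDetectionPolicy fromConf(Configuration conf) {
    // Assumed default of "etag"; the safe default is a separate debate.
    return valueOf(
        conf.getTrimmed(CHANGE_DETECTION, "etag").toUpperCase(Locale.ROOT));
  }
}
{code}
A parameterized test suite could then be driven off the same enum, skipping itself when the configured store is set to NONE.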
[jira] [Commented] (HADOOP-15999) [s3a] Better support for out-of-band operations
[ https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772956#comment-16772956 ] Gabor Bota commented on HADOOP-15999: - Patch v3: ran against Ireland successfully; added the docs to the md file; fixed style and other minor issues. > [s3a] Better support for out-of-band operations > --- > > Key: HADOOP-15999 > URL: https://issues.apache.org/jira/browse/HADOOP-15999 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.1.0 >Reporter: Sean Mackrory >Assignee: Gabor Bota >Priority: Major > Attachments: HADOOP-15999.001.patch, HADOOP-15999.002.patch, > HADOOP-15999.003.patch, out-of-band-operations.patch > > > S3Guard was initially done on the premise that a new MetadataStore would be > the source of truth, and that it wouldn't provide guarantees if updates were > done without using S3Guard. > I've been seeing increased demand for better support for scenarios where > operations are done on the data that can't reasonably be done with S3Guard > involved. For example: > * A file is deleted using S3Guard, and replaced by some other tool. S3Guard > can't tell the difference between the new file and delete / list > inconsistency and continues to treat the file as deleted. > * An S3Guard-ed file is overwritten by a longer file by some other tool. When > reading the file, only the length of the original file is read. > We could possibly have smarter behavior here by querying both S3 and the > MetadataStore (even in cases where we may currently only query the > MetadataStore in getFileStatus) and use whichever one has the higher modified > time. > This kills the performance boost we currently get in some workloads with the > short-circuited getFileStatus, but we could keep it with authoritative mode > which should give a larger performance boost. At least we'd get more > correctness without authoritative mode and a clear declaration of when we can > make the assumptions required to short-circuit the process. If we can't > consider S3Guard the source of truth, we need to defer to S3 more. > We'd need to be extra sure of any locality / time zone issues if we start > relying on mod_time more directly, but currently we're tracking the > modification time as returned by S3 anyway. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
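The reconciliation rule in the description — trust whichever side was modified more recently — can be sketched in a few lines. The method name is an assumption, not the patch's API:
{code}
import org.apache.hadoop.fs.FileStatus;

/**
 * Sketch of the out-of-band reconciliation described above: when both S3
 * and the MetadataStore return an entry for a path, prefer the one that
 * was modified more recently, so an out-of-band overwrite is seen with
 * its new length instead of the stale S3Guard record.
 */
static FileStatus reconcile(FileStatus fromS3, FileStatus fromStore) {
  if (fromS3 == null) {
    return fromStore;   // S3 list inconsistency: trust the store
  }
  if (fromStore == null) {
    return fromS3;      // out-of-band create: only S3 knows the file
  }
  return fromS3.getModificationTime() > fromStore.getModificationTime()
      ? fromS3
      : fromStore;
}
{code}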
[jira] [Updated] (HADOOP-15999) [s3a] Better support for out-of-band operations
[ https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota updated HADOOP-15999: Status: Patch Available (was: Open) > [s3a] Better support for out-of-band operations > --- > > Key: HADOOP-15999 > URL: https://issues.apache.org/jira/browse/HADOOP-15999 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.1.0 >Reporter: Sean Mackrory >Assignee: Gabor Bota >Priority: Major > Attachments: HADOOP-15999.001.patch, HADOOP-15999.002.patch, > HADOOP-15999.003.patch, out-of-band-operations.patch > > > S3Guard was initially done on the premise that a new MetadataStore would be > the source of truth, and that it wouldn't provide guarantees if updates were > done without using S3Guard. > I've been seeing increased demand for better support for scenarios where > operations are done on the data that can't reasonably be done with S3Guard > involved. For example: > * A file is deleted using S3Guard, and replaced by some other tool. S3Guard > can't tell the difference between the new file and delete / list > inconsistency and continues to treat the file as deleted. > * An S3Guard-ed file is overwritten by a longer file by some other tool. When > reading the file, only the length of the original file is read. > We could possibly have smarter behavior here by querying both S3 and the > MetadataStore (even in cases where we may currently only query the > MetadataStore in getFileStatus) and use whichever one has the higher modified > time. > This kills the performance boost we currently get in some workloads with the > short-circuited getFileStatus, but we could keep it with authoritative mode > which should give a larger performance boost. At least we'd get more > correctness without authoritative mode and a clear declaration of when we can > make the assumptions required to short-circuit the process. If we can't > consider S3Guard the source of truth, we need to defer to S3 more. > We'd need to be extra sure of any locality / time zone issues if we start > relying on mod_time more directly, but currently we're tracking the > modification time as returned by S3 anyway. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16127) In ipc.Client, put a new connection could happen after stop
[ https://issues.apache.org/jira/browse/HADOOP-16127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772948#comment-16772948 ] Steve Loughran commented on HADOOP-16127: - Nice to see some new uses of Java 8 lambda expressions; good to use them where possible. Client L475: include the toString value of the caught IOE in the new one, so that if the full stack trace is lost, the root cause is preserved. L1360: you've removed all sleeps here entirely. Is that OK? +1 pending the change and the confirmation. > In ipc.Client, put a new connection could happen after stop > --- > > Key: HADOOP-16127 > URL: https://issues.apache.org/jira/browse/HADOOP-16127 > Project: Hadoop Common > Issue Type: Bug > Components: ipc >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze >Priority: Major > Attachments: c16127_20190219.patch > > > In getConnection(..), running can be initially true but becomes false before > putIfAbsent. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
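For readers following along, a self-contained sketch of the race and the re-check that closes it. The names mirror ipc.Client loosely; this is not the committed patch:
{code}
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Function;

/** Sketch: a pool where no connection may outlive stop(). */
class StoppablePool<K, C extends Closeable> {
  private final ConcurrentMap<K, C> connections = new ConcurrentHashMap<>();
  private final AtomicBoolean running = new AtomicBoolean(true);

  C getConnection(K key, Function<K, C> factory) throws IOException {
    if (!running.get()) {
      throw new IOException("client is stopped");
    }
    C conn = connections.computeIfAbsent(key, factory);
    // The race: stop() can flip running and drain the map between the
    // check above and the insert. Re-check after publishing so a stray
    // connection is removed and closed rather than leaked.
    if (!running.get()) {
      connections.remove(key, conn);
      conn.close();  // Closeable.close() is specified to be idempotent
      throw new IOException("client is stopped");
    }
    return conn;
  }

  void stop() throws IOException {
    running.set(false);
    for (C conn : connections.values()) {
      conn.close();
    }
    connections.clear();
  }
}
{code}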
[jira] [Commented] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus
[ https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772952#comment-16772952 ] Steve Loughran commented on HADOOP-15920: - {code} ./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractSeekTest.java:207: assertTrue("The available should be zero",instream.available() >= 0);:46: ',' is not followed by whitespace. [WhitespaceAfter] ./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractSeekTest.java:614: assertTrue("Data available in " + instream, inputStream.available() >0 );:76: ')' is preceded with whitespace. [ParenPad] ./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java:597: int availableSize = this.wrappedStream == null ? 0 : this.wrappedStream.available();: Line is longer than 80 characters (found 88). [LineLength] {code} > get patch for S3a nextReadPos(), through Yetus > -- > > Key: HADOOP-15920 > URL: https://issues.apache.org/jira/browse/HADOOP-15920 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.1.1 >Reporter: Steve Loughran >Assignee: lqjacklee >Priority: Major > Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch, > HADOOP-15870-003.patch, HADOOP-15870-004.patch, HADOOP-15870-005.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16127) In ipc.Client, put a new connection could happen after stop
[ https://issues.apache.org/jira/browse/HADOOP-16127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772949#comment-16772949 ] Steve Loughran commented on HADOOP-16127: - ps: set your version info for where you intend to apply this > In ipc.Client, put a new connection could happen after stop > --- > > Key: HADOOP-16127 > URL: https://issues.apache.org/jira/browse/HADOOP-16127 > Project: Hadoop Common > Issue Type: Bug > Components: ipc >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze >Priority: Major > Attachments: c16127_20190219.patch > > > In getConnection(..), running can be initially true but becomes false before > putIfAbsent. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16126) ipc.Client.stop() may sleep too long to wait for all connections
[ https://issues.apache.org/jira/browse/HADOOP-16126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772939#comment-16772939 ] Steve Loughran commented on HADOOP-16126: - It is closer to a busy wait here, but assuming we've moved to a many-core world, having one thread busy doesn't stop the rest of the CPUs playing. * Why the choice of 10ms? * Can you tease this out as a private constant in the IPC file, just so it's less hidden deep in the code. > ipc.Client.stop() may sleep too long to wait for all connections > > > Key: HADOOP-16126 > URL: https://issues.apache.org/jira/browse/HADOOP-16126 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze >Priority: Major > Attachments: c16126_20190219.patch > > > {code} > //Client.java > public void stop() { > ... > // wait until all connections are closed > while (!connections.isEmpty()) { > try { > Thread.sleep(100); > } catch (InterruptedException e) { > } > } > ... > } > {code} > In the code above, the sleep time is 100ms. We found that simply changing > the sleep time to 10ms could improve a Hive job running time by 10x. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
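The constant Steve asks for might look like the following excerpt of ipc.Client.stop(); the name and the 10ms value are placeholders taken from the discussion, not the committed change:
{code}
// Sketch: hoist the magic number out of stop() so the poll cadence is
// visible (and tunable in one place) at the top of ipc.Client.
private static final int CONNECTION_CLOSE_POLL_INTERVAL_MS = 10;

public void stop() {
  // ... signal all connections to close, as the existing code does ...
  // wait until all connections are closed
  while (!connections.isEmpty()) {
    try {
      Thread.sleep(CONNECTION_CLOSE_POLL_INTERVAL_MS);
    } catch (InterruptedException e) {
      // deliberately swallowed, matching the existing behaviour
    }
  }
}
{code}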
[jira] [Updated] (HADOOP-16085) S3Guard: use object version or etags to protect against inconsistent read after replace/overwrite
[ https://issues.apache.org/jira/browse/HADOOP-16085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16085: Summary: S3Guard: use object version or etags to protect against inconsistent read after replace/overwrite (was: S3Guard: use object version to protect against inconsistent read after replace/overwrite) > S3Guard: use object version or etags to protect against inconsistent read > after replace/overwrite > - > > Key: HADOOP-16085 > URL: https://issues.apache.org/jira/browse/HADOOP-16085 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Ben Roling >Priority: Major > Attachments: HADOOP-16085_002.patch, HADOOP-16085_3.2.0_001.patch > > > Currently S3Guard doesn't track S3 object versions. If a file is written in > S3A with S3Guard and then subsequently overwritten, there is no protection > against the next reader seeing the old version of the file instead of the new > one. > It seems like the S3Guard metadata could track the S3 object version. When a > file is created or updated, the object version could be written to the > S3Guard metadata. When a file is read, the read out of S3 could be performed > by object version, ensuring the correct version is retrieved. > I don't have a lot of direct experience with this yet, but this is my > impression from looking through the code. My organization is looking to > shift some datasets stored in HDFS over to S3 and is concerned about this > potential issue as there are some cases in our codebase that would do an > overwrite. > I imagine this idea may have been considered before but I couldn't quite > track down any JIRAs discussing it. If there is one, feel free to close this > with a reference to it. > Am I understanding things correctly? Is this idea feasible? Any feedback > that could be provided would be appreciated. We may consider crafting a > patch. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
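Where the bucket has versioning enabled, the read path the description proposes can be sketched with the stock AWS SDK for Java. The helper name is an assumption; GetObjectRequest's versioned constructor is real SDK API:
{code}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

/**
 * Sketch: if S3Guard recorded a versionId at write time, pin the GET to
 * that version so an overwrite between the metadata lookup and the read
 * cannot hand back inconsistent bytes.
 */
static S3Object openPinned(AmazonS3 s3, String bucket, String key,
    String versionIdFromS3Guard) {
  GetObjectRequest request = (versionIdFromS3Guard != null)
      ? new GetObjectRequest(bucket, key, versionIdFromS3Guard)
      : new GetObjectRequest(bucket, key);
  return s3.getObject(request);
}
{code}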
[GitHub] elek closed pull request #491: HDDS-1116. Add java profiler servlet to the Ozone web servers
elek closed pull request #491: HDDS-1116. Add java profiler servlet to the Ozone web servers URL: https://github.com/apache/hadoop/pull/491 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] elek commented on a change in pull request #491: HDDS-1116. Add java profiler servlet to the Ozone web servers
elek commented on a change in pull request #491: HDDS-1116. Add java profiler servlet to the Ozone web servers URL: https://github.com/apache/hadoop/pull/491#discussion_r258444609 ## File path: hadoop-ozone/dist/src/main/compose/ozone/docker-compose.yaml ## @@ -18,6 +18,7 @@ version: "3" services: datanode: image: apache/hadoop-runner + privileged: true #required by the profiler Review comment: Thanks for the comment. I think we can assume that the kernel parameters are adjusted. I will test it without the privileged flag and remove those lines. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15999) [s3a] Better support for out-of-band operations
[ https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota updated HADOOP-15999: Attachment: HADOOP-15999.003.patch > [s3a] Better support for out-of-band operations > --- > > Key: HADOOP-15999 > URL: https://issues.apache.org/jira/browse/HADOOP-15999 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.1.0 >Reporter: Sean Mackrory >Assignee: Gabor Bota >Priority: Major > Attachments: HADOOP-15999.001.patch, HADOOP-15999.002.patch, > HADOOP-15999.003.patch, out-of-band-operations.patch > > > S3Guard was initially done on the premise that a new MetadataStore would be > the source of truth, and that it wouldn't provide guarantees if updates were > done without using S3Guard. > I've been seeing increased demand for better support for scenarios where > operations are done on the data that can't reasonably be done with S3Guard > involved. For example: > * A file is deleted using S3Guard, and replaced by some other tool. S3Guard > can't tell the difference between the new file and delete / list > inconsistency and continues to treat the file as deleted. > * An S3Guard-ed file is overwritten by a longer file by some other tool. When > reading the file, only the length of the original file is read. > We could possibly have smarter behavior here by querying both S3 and the > MetadataStore (even in cases where we may currently only query the > MetadataStore in getFileStatus) and use whichever one has the higher modified > time. > This kills the performance boost we currently get in some workloads with the > short-circuited getFileStatus, but we could keep it with authoritative mode > which should give a larger performance boost. At least we'd get more > correctness without authoritative mode and a clear declaration of when we can > make the assumptions required to short-circuit the process. If we can't > consider S3Guard the source of truth, we need to defer to S3 more. > We'd need to be extra sure of any locality / time zone issues if we start > relying on mod_time more directly, but currently we're tracking the > modification time as returned by S3 anyway. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15999) [s3a] Better support for out-of-band operations
[ https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota updated HADOOP-15999: Status: Open (was: Patch Available) > [s3a] Better support for out-of-band operations > --- > > Key: HADOOP-15999 > URL: https://issues.apache.org/jira/browse/HADOOP-15999 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.1.0 >Reporter: Sean Mackrory >Assignee: Gabor Bota >Priority: Major > Attachments: HADOOP-15999.001.patch, HADOOP-15999.002.patch, > HADOOP-15999.003.patch, out-of-band-operations.patch > > > S3Guard was initially done on the premise that a new MetadataStore would be > the source of truth, and that it wouldn't provide guarantees if updates were > done without using S3Guard. > I've been seeing increased demand for better support for scenarios where > operations are done on the data that can't reasonably be done with S3Guard > involved. For example: > * A file is deleted using S3Guard, and replaced by some other tool. S3Guard > can't tell the difference between the new file and delete / list > inconsistency and continues to treat the file as deleted. > * An S3Guard-ed file is overwritten by a longer file by some other tool. When > reading the file, only the length of the original file is read. > We could possibly have smarter behavior here by querying both S3 and the > MetadataStore (even in cases where we may currently only query the > MetadataStore in getFileStatus) and use whichever one has the higher modified > time. > This kills the performance boost we currently get in some workloads with the > short-circuited getFileStatus, but we could keep it with authoritative mode > which should give a larger performance boost. At least we'd get more > correctness without authoritative mode and a clear declaration of when we can > make the assumptions required to short-circuit the process. If we can't > consider S3Guard the source of truth, we need to defer to S3 more. > We'd need to be extra sure of any locality / time zone issues if we start > relying on mod_time more directly, but currently we're tracking the > modification time as returned by S3 anyway. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Issue Comment Deleted] (HADOOP-16069) Support configure ZK_DTSM_ZK_KERBEROS_PRINCIPAL in ZKDelegationTokenSecretManager using principal with Schema /_HOST
[ https://issues.apache.org/jira/browse/HADOOP-16069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] luhuachao updated HADOOP-16069: --- Comment: was deleted (was: [~ste...@apache.org] thanks a lot if have a review) > Support configure ZK_DTSM_ZK_KERBEROS_PRINCIPAL in > ZKDelegationTokenSecretManager using principal with Schema /_HOST > > > Key: HADOOP-16069 > URL: https://issues.apache.org/jira/browse/HADOOP-16069 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.1.0 >Reporter: luhuachao >Assignee: luhuachao >Priority: Minor > Labels: kerberos > Attachments: HADOOP-16069.001.patch > > > When using ZKDelegationTokenSecretManager with Kerberos, we cannot configure > ZK_DTSM_ZK_KERBEROS_PRINCIPAL with a principal like 'nn/_h...@example.com'; we > have to use a principal like 'nn/hostn...@example.com'. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
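For reference, Hadoop already ships the substitution logic the request implies: SecurityUtil.getServerPrincipal() expands the _HOST pattern. A minimal sketch of how ZKDelegationTokenSecretManager could resolve the configured principal before handing it to the ZK client (the helper name is an assumption):
{code}
import java.io.IOException;
import java.net.InetAddress;
import org.apache.hadoop.security.SecurityUtil;

/** Sketch: expand nn/_HOST@EXAMPLE.COM to nn/fqdn@EXAMPLE.COM. */
static String resolvePrincipal(String configuredPrincipal)
    throws IOException {
  String fqdn = InetAddress.getLocalHost().getCanonicalHostName();
  return SecurityUtil.getServerPrincipal(configuredPrincipal, fqdn);
}
{code}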
[jira] [Commented] (HADOOP-15685) Build fails (hadoop pipes) on newer Linux envs (like Fedora 28)
[ https://issues.apache.org/jira/browse/HADOOP-15685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772837#comment-16772837 ] tomwang commented on HADOOP-15685: -- A simplified error message could be: {{[WARNING] /home/hadoop/hadoop-tools/hadoop-pipes/src/main/native/utils/impl/SerialUtils.cc:22:10: fatal error: rpc/types.h: No such file or directory}} A link to a reported build issue on Stack Overflow: [https://stackoverflow.com/questions/51479299/build-hadoop-3-0-3-on-fedora-28-problems-with-rpc-library/54783231#54783231] A correct fix should test whether rpc/types.h exists and use a variable to conditionally add the tirpc path to the include paths and libraries. An example can be found in other Hadoop modules' CMake files. > Build fails (hadoop pipes) on newer Linux envs (like Fedora 28) > --- > > Key: HADOOP-15685 > URL: https://issues.apache.org/jira/browse/HADOOP-15685 > Project: Hadoop Common > Issue Type: Improvement > Components: build, tools/pipes >Affects Versions: 3.2.0 >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl >Priority: Major > Attachments: 15685-3.2.0.txt, 15685-example.txt > > > The rpc/types.h and similar includes are no longer part of glibc. > Instead, tirpc needs to be used on those systems. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16112) Delete the baseTrashPath's subDir leads to don't modify baseTrashPath
[ https://issues.apache.org/jira/browse/HADOOP-16112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772831#comment-16772831 ] Hadoop QA commented on HADOOP-16112: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 53s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 3s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 8s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 54s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 40s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 28s{color} | {color:orange} root: The patch generated 4 new + 9 unchanged - 0 fixed = 13 total (was 9) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 36s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 3s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 57s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}219m 32s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync | | | hadoop.hdfs.web.TestWebHdfsTimeouts | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HADOOP-16112 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12959369/HADOOP-16112.002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux f417afbc7622 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 1d30fd9 | | maven | version: Apache Maven 3.3
[jira] [Updated] (HADOOP-15958) Revisiting LICENSE and NOTICE files
[ https://issues.apache.org/jira/browse/HADOOP-15958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-15958: --- Attachment: HADOOP-15958-002.patch > Revisiting LICENSE and NOTICE files > --- > > Key: HADOOP-15958 > URL: https://issues.apache.org/jira/browse/HADOOP-15958 > Project: Hadoop Common > Issue Type: Bug >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Critical > Attachments: HADOOP-15958-002.patch, HADOOP-15958-wip.001.patch > > > Originally reported from [~jmclean]: > * NOTICE file incorrectly lists copyrights that shouldn't be there and > mentions licenses such as MIT, BSD, and public domain that should be > mentioned in LICENSE only. > * It's better to have a separate LICENSE and NOTICE for the source and binary > releases. > http://www.apache.org/dev/licensing-howto.html -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org