[jira] [Commented] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4
[ https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17362456#comment-17362456 ] Hadoop QA commented on HADOOP-15327: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 25s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} {color} | {color:green} 0m 0s{color} | {color:green}test4tests{color} | {color:green} The patch appears to include 3 new or modified test files. 
{color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 18s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 15s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 39s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 20s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 2s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 58s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 22m 43s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 12s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 0s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 32m 17s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are enabled, using SpotBugs. 
{color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 0m 28s{color} | {color:blue}{color} | {color:blue} branch/hadoop-project no spotbugs output file (spotbugsXml.xml) {color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 11s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 57s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m 57s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 19s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 19s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 57s{color} | {color:orange}https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/199/artifact/out/diff-checkstyle-root.txt{color} | {color:orange} root: The patch generated 93 new + 83 unchanged - 7 fixed = 176 total (was 90) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 55s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red}https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/199/artifact/out
[jira] [Commented] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4
[ https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17362368#comment-17362368 ] Szilard Nemeth commented on HADOOP-15327: - *Remaining TODO items that I can make progress with:* - Fix failing unit tests - Testing on cluster > Upgrade MR ShuffleHandler to use Netty4 > --- > > Key: HADOOP-15327 > URL: https://issues.apache.org/jira/browse/HADOOP-15327 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Szilard Nemeth >Priority: Major > Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, > HADOOP-15327.003.patch, > getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log > > > This way, we can remove the dependencies on the netty3 (jboss.netty) -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4
[ https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated HADOOP-15327: Attachment: HADOOP-15327.003.patch > Upgrade MR ShuffleHandler to use Netty4 > --- > > Key: HADOOP-15327 > URL: https://issues.apache.org/jira/browse/HADOOP-15327 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Szilard Nemeth >Priority: Major > Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, > HADOOP-15327.003.patch, > getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log > > > This way, we can remove the dependencies on the netty3 (jboss.netty)
[jira] [Commented] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4
[ https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17362367#comment-17362367 ] Szilard Nemeth commented on HADOOP-15327: -
The latest patch contains commits from this branch: [https://github.com/szilard-nemeth/hadoop/commits/HADOOP-15327-snemeth]
There are a couple of commits, so I will approach this by explaining the reasons behind each change. Not all commits are listed; I left out a few trivial ones. Unfortunately, this task was a bit tricky: every time I touched something in the tests, I found another bug or piece of weird behaviour, so it took a great deal of time to discover and solve everything.

*1. ShuffleHandler: ch.isOpen() --> ch.isActive(): [https://github.com/szilard-nemeth/hadoop/commit/e703adb57f66da8579baa26257ca9aaed2bf1db5]*
This was already mentioned in my previous, lengthier comment.

*2. TestShuffleHandler: Fix mocking in testSendMapCount + replace ch.write() with ch.writeAndFlush(): [https://github.com/szilard-nemeth/hadoop/commit/07fbfee5cae85e8e374b53c303e794c19c620efc]*
This covers two things:
- Replacing channel.write calls with channel.writeAndFlush
- Fixing bad mocking in org.apache.hadoop.mapred.TestShuffleHandler#testSendMapCount

*3. TestShuffleHandler.testMaxConnections: Rewrite test + production code: accepted connection handling: [https://github.com/szilard-nemeth/hadoop/commit/def0059982ef8f0e2f19d385b1a1fcdca8639f9d]*
*Changes in production code:*
- ShuffleHandler#channelActive added the channel to the channel group (the field called 'accepted') before the if statement that enforces the maximum number of open connections.
This was the old, wrong piece of code:
{code:java}
super.channelActive(ctx);
LOG.debug("accepted connections={}", accepted.size());
if ((maxShuffleConnections > 0) && (accepted.size() >= maxShuffleConnections)) {
{code}
- Also, counting the number of open channels with the channel group was unreliable, so I introduced a new AtomicInteger field called 'acceptedConnections' to track the open channels / connections.
- There was another issue: when channels were accepted, the counter of open channels was increased, but I could not see any code that maintained (decremented) the value when channels became inactive. This was mitigated by adding org.apache.hadoop.mapred.ShuffleHandler.Shuffle#channelInactive, which logs the channel-inactivated event and decreases the open-connections counter:
{code:java}
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
  super.channelInactive(ctx);
  acceptedConnections.decrementAndGet();
  LOG.debug("New value of Accepted number of connections={}", acceptedConnections.get());
}
{code}
*Changes in test code:*
- org.apache.hadoop.mapred.TestShuffleHandler#testMaxConnections: Fixed the test case; the issue was pointed out correctly by [~weichiu]: the connections are accepted in parallel, so we should not rely on their order in the test. I rewrote it by introducing a map that groups the HttpURLConnection objects by their HTTP response code. Then I check that we only have 200 OK and 429 TOO MANY REQUESTS, that the number of 200 OK connections is 2, and that there is only one rejected connection.

*4. increase netty version to 4.1.65.Final: [https://github.com/szilard-nemeth/hadoop/commit/4f4589063b579a93389b1e188c29bd895ae507fc]*
This is a simple commit that increases the Netty version to the latest stable 4.x version. See this page: [https://netty.io/downloads.html] It states: "netty-4.1.65.Final.tar.gz ‐ 19-May-2021 (Stable, Recommended)"

*5.
ShuffleHandler: Fix keepalive test + writing HTTP response properly to channel: [https://github.com/szilard-nemeth/hadoop/commit/1aad4eaace28cfff4a9a9152f7535d70cc6e3734]*
This is where things get more interesting. There was a test case called org.apache.hadoop.mapred.TestShuffleHandler#testKeepAlive that caught an issue that came up because Netty 4.x handles HTTP responses written to the same channel differently than Netty 3.x. See details below.
Production code changes:
- Added some logs to be able to track what happens when utilizing HTTP connection keep-alive.
- Added a ChannelOutboundHandlerAdapter that handles exceptions that happen during outbound message construction. These are not logged by Netty by default, and this trick was the only way I found to catch these events:
{code:java}
pipeline.addLast("outboundExcHandler", new ChannelOutboundHandlerAdapter() {
  @Override
  public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
    promise.addListener(ChannelFutureListener.FIRE_EXCEPTION_ON_FAILURE);
    super.write(ctx, msg, promise);
  }
});
{code}
This solution is described here: //[https://stackoverflow
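The accepted-connection accounting described in item 3 above can be sketched without Netty. The names below (ConnectionLimiter, tryAccept, release) are hypothetical illustrations of the count-then-enforce idea, not the actual ShuffleHandler API:

```java
import java.util.concurrent.atomic.AtomicInteger;

/** Minimal sketch of open-connection accounting with an AtomicInteger,
 *  as described above (hypothetical names, not the real patch's API). */
class ConnectionLimiter {
    private final int maxConnections;                    // <= 0 means unlimited
    private final AtomicInteger acceptedConnections = new AtomicInteger();

    ConnectionLimiter(int maxConnections) {
        this.maxConnections = maxConnections;
    }

    /** Analogue of channelActive: count first, then enforce the limit. */
    boolean tryAccept() {
        int current = acceptedConnections.incrementAndGet();
        if (maxConnections > 0 && current > maxConnections) {
            acceptedConnections.decrementAndGet();       // roll back and reject
            return false;
        }
        return true;
    }

    /** Analogue of channelInactive: keep the counter in sync. */
    void release() {
        acceptedConnections.decrementAndGet();
    }

    int open() {
        return acceptedConnections.get();
    }
}
```

In the real handler, tryAccept would correspond to channelActive (closing the channel when it returns false) and release to channelInactive, which is exactly the pairing the comment says was missing before the fix.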
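The promise-listener trick from item 5 can be illustrated without Netty using CompletableFuture; the class and method names below are hypothetical and only mirror the pattern of attaching a failure listener to every outbound write so that write failures are surfaced instead of silently dropped:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;

/** Netty-free sketch of the "fire exception on failed write" pattern:
 *  decorate each write's promise with a listener that records failures
 *  (hypothetical names; in Netty the listener re-fires the exception
 *  into the pipeline via FIRE_EXCEPTION_ON_FAILURE). */
class OutboundFailureRecorder {
    final AtomicReference<Throwable> lastFailure = new AtomicReference<>();

    /** Analogue of ChannelOutboundHandlerAdapter.write(). */
    CompletableFuture<Void> write(CompletableFuture<Void> promise) {
        promise.whenComplete((v, t) -> {
            if (t != null) {
                lastFailure.set(t);   // surface the outbound failure
            }
        });
        return promise;
    }
}
```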
[jira] [Updated] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4
[ https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated HADOOP-15327: Attachment: getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log > Upgrade MR ShuffleHandler to use Netty4 > --- > > Key: HADOOP-15327 > URL: https://issues.apache.org/jira/browse/HADOOP-15327 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Szilard Nemeth >Priority: Major > Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, > getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log > > > This way, we can remove the dependencies on the netty3 (jboss.netty)
[jira] [Updated] (HADOOP-17547) Magic committer to downgrade abort in cleanup if list uploads fails with access denied
[ https://issues.apache.org/jira/browse/HADOOP-17547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-17547: Fix Version/s: 3.3.2 > Magic committer to downgrade abort in cleanup if list uploads fails with > access denied > -- > > Key: HADOOP-17547 > URL: https://issues.apache.org/jira/browse/HADOOP-17547 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Bogdan Stolojan >Priority: Major > Labels: pull-request-available > Fix For: 3.3.2 > > Time Spent: 20m > Remaining Estimate: 0h > > If the caller doesn't have "s3:ListBucketMultipartUploads" permissions on a > bucket, then magic committer cleanup fails. > {code} > at > org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:247) > at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:112) > at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:315) > at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:407) > at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:311) > at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:286) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.listMultipartUploads(S3AFileSystem.java:4549) > at > org.apache.hadoop.fs.s3a.commit.CommitOperations.listPendingUploadsUnderPath(CommitOperations.java:361) > at > org.apache.hadoop.fs.s3a.commit.AbstractS3ACommitter.abortPendingUploadsInCleanup(AbstractS3ACommitter.java:671) > at > org.apache.hadoop.fs.s3a.commit.AbstractS3ACommitter.cleanup(AbstractS3ACommitter.java:770) > {code} > it should just swallow this, given it's best effort
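The "just swallow this" behaviour described above can be sketched as follows. The names are hypothetical (the real fix lives in the S3A committer code), and SecurityException stands in for the AWS access-denied exception:

```java
/** Sketch of best-effort cleanup: a permission failure while listing
 *  pending uploads is downgraded to a debug-level log and swallowed,
 *  never rethrown (hypothetical names, not the actual S3A code). */
class BestEffortCleanup {
    final StringBuilder debugLog = new StringBuilder();

    void abortPendingUploadsInCleanup(Runnable listPendingUploads) {
        try {
            listPendingUploads.run();
        } catch (SecurityException e) { // stand-in for AccessDeniedException
            // best effort: record at debug level and keep going
            debugLog.append("DEBUG: failed to list pending uploads: ")
                    .append(e.getMessage());
        }
    }
}
```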
[jira] [Resolved] (HADOOP-17547) Magic committer to downgrade abort in cleanup if list uploads fails with access denied
[ https://issues.apache.org/jira/browse/HADOOP-17547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-17547. - Resolution: Fixed. Fixed by downgrading to log at debug > Magic committer to downgrade abort in cleanup if list uploads fails with > access denied > -- > > Key: HADOOP-17547 > URL: https://issues.apache.org/jira/browse/HADOOP-17547 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Bogdan Stolojan >Priority: Major > Labels: pull-request-available > Fix For: 3.3.2 > > Time Spent: 20m > Remaining Estimate: 0h > > If the caller doesn't have "s3:ListBucketMultipartUploads" permissions on a > bucket, then magic committer cleanup fails. > {code} > at > org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:247) > at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:112) > at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:315) > at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:407) > at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:311) > at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:286) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.listMultipartUploads(S3AFileSystem.java:4549) > at > org.apache.hadoop.fs.s3a.commit.CommitOperations.listPendingUploadsUnderPath(CommitOperations.java:361) > at > org.apache.hadoop.fs.s3a.commit.AbstractS3ACommitter.abortPendingUploadsInCleanup(AbstractS3ACommitter.java:671) > at > org.apache.hadoop.fs.s3a.commit.AbstractS3ACommitter.cleanup(AbstractS3ACommitter.java:770) > {code} > it should just swallow this, given it's best effort
[jira] [Assigned] (HADOOP-17547) Magic committer to downgrade abort in cleanup if list uploads fails with access denied
[ https://issues.apache.org/jira/browse/HADOOP-17547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-17547: --- Assignee: Bogdan Stolojan > Magic committer to downgrade abort in cleanup if list uploads fails with > access denied > -- > > Key: HADOOP-17547 > URL: https://issues.apache.org/jira/browse/HADOOP-17547 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Bogdan Stolojan >Priority: Major > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > If the caller doesn't have "s3:ListBucketMultipartUploads" permissions on a > bucket, then magic committer cleanup fails. > {code} > at > org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:247) > at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:112) > at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:315) > at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:407) > at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:311) > at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:286) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.listMultipartUploads(S3AFileSystem.java:4549) > at > org.apache.hadoop.fs.s3a.commit.CommitOperations.listPendingUploadsUnderPath(CommitOperations.java:361) > at > org.apache.hadoop.fs.s3a.commit.AbstractS3ACommitter.abortPendingUploadsInCleanup(AbstractS3ACommitter.java:671) > at > org.apache.hadoop.fs.s3a.commit.AbstractS3ACommitter.cleanup(AbstractS3ACommitter.java:770) > {code} > it should just swallow this, given it's best effort
[jira] [Commented] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4
[ https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17362339#comment-17362339 ] Hadoop QA commented on HADOOP-15327: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 22s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} {color} | {color:green} 0m 0s{color} | {color:green}test4tests{color} | {color:green} The patch appears to include 3 new or modified test files. 
{color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 12m 32s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 24s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 40s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 13s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 1s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 59s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 22m 51s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 12s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 32m 22s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are enabled, using SpotBugs. 
{color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 0m 31s{color} | {color:blue}{color} | {color:blue} branch/hadoop-project no spotbugs output file (spotbugsXml.xml) {color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 13s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 6s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 22m 6s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 19s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 19s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 58s{color} | {color:orange}https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/197/artifact/out/diff-checkstyle-root.txt{color} | {color:orange} root: The patch generated 95 new + 83 unchanged - 7 fixed = 178 total (was 90) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 57s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red}https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/197/artifact/out
[jira] [Updated] (HADOOP-17643) WASB : Make metadata checks case insensitive
[ https://issues.apache.org/jira/browse/HADOOP-17643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Surendra Singh Lilhore updated HADOOP-17643: Fix Version/s: 3.4.0 > WASB : Make metadata checks case insensitive > > > Key: HADOOP-17643 > URL: https://issues.apache.org/jira/browse/HADOOP-17643 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.7.0 >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1h 10m > Remaining Estimate: 0h > > The WASB driver uses metadata on blobs to denote permissions, whether a blob is a > placeholder 0-sized blob for a directory, etc. > For storage migration, users use AzCopy; it copies the blobs but causes the > metadata keys to be changed to camel case. As per discussion with the MSFT > AzCopy team, this is a known issue and technical limitation. This is what the > AzCopy team explained: > "For context, blob metadata is implemented with HTTP headers. They are case > insensitive but case preserving. > There is a known issue with the Go language. The HTTP client that it provides > does this case modification to the response headers before we can read the > raw values, so the destination metadata keys have a different casing than the > source. We’ve reached out to the Go team in the past but weren’t successful > in convincing them to change the behaviour. We don’t have a short-term > solution right now." > So I propose changing the metadata key checks to be case insensitive. > Maybe make the case-insensitive check configurable, defaulting to false for > compatibility.
[jira] [Updated] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4
[ https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated HADOOP-15327: Attachment: HADOOP-15327.002.patch > Upgrade MR ShuffleHandler to use Netty4 > --- > > Key: HADOOP-15327 > URL: https://issues.apache.org/jira/browse/HADOOP-15327 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Szilard Nemeth >Priority: Major > Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch > > > This way, we can remove the dependencies on the netty3 (jboss.netty)
[jira] [Updated] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4
[ https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated HADOOP-15327: Attachment: HADOOP-15327-snemeth.002.patch > Upgrade MR ShuffleHandler to use Netty4 > --- > > Key: HADOOP-15327 > URL: https://issues.apache.org/jira/browse/HADOOP-15327 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Szilard Nemeth >Priority: Major > Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch > > > This way, we can remove the dependencies on the netty3 (jboss.netty)
[jira] [Updated] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4
[ https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated HADOOP-15327: Attachment: (was: HADOOP-15327-snemeth.002.patch) > Upgrade MR ShuffleHandler to use Netty4 > --- > > Key: HADOOP-15327 > URL: https://issues.apache.org/jira/browse/HADOOP-15327 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Szilard Nemeth >Priority: Major > Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch > > > This way, we can remove the dependencies on the netty3 (jboss.netty)