[jira] [Commented] (HDFS-14811) RBF: TestRouterRpc#testErasureCoding is flaky
[ https://issues.apache.org/jira/browse/HDFS-14811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198891#comment-17198891 ] Hadoop QA commented on HDFS-14811:

(/) *+1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 24s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 21m 56s | trunk passed |
| +1 | compile | 0m 36s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | compile | 0m 32s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 | checkstyle | 0m 21s | trunk passed |
| +1 | mvnsite | 0m 38s | trunk passed |
| +1 | shadedclient | 17m 34s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 43s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | javadoc | 0m 56s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| 0 | spotbugs | 1m 25s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 1m 20s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 36s | the patch passed |
| +1 | compile | 0m 36s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | javac | 0m 36s | the patch passed |
| +1 | compile | 0m 30s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 | javac | 0m 30s | the patch passed |
| +1 | checkstyle | 0m 18s | the patch passed |
| +1 | mvnsite | 0m 34s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 16m 51s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 34s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | javadoc | 0m 51s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 | findbugs | 1m 19s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 9m 5s | hadoop-hdfs-rbf in the patch passed. |
| +1 | asflicense | 0m 32s | The patch does not generate ASF License warnings. |
[jira] [Commented] (HDFS-14811) RBF: TestRouterRpc#testErasureCoding is flaky
[ https://issues.apache.org/jira/browse/HDFS-14811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198878#comment-17198878 ] Ayush Saxena commented on HDFS-14811:

I have seen this fail quite a number of times, even though we now have HDFS-12288. From yesterday's pre-commit build: https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/190/testReport/org.apache.hadoop.hdfs.server.federation.router/TestRouterRpc/testErasureCoding/
The reason again seems to be the same.
{noformat}
2020-09-19 13:11:38,422 [IPC Server handler 9 on default port 36277] INFO blockmanagement.BlockPlacementPolicy (BlockPlacementPolicyDefault.java:chooseRandom(891)) - Not enough replicas was chosen. Reason: {NODE_TOO_BUSY=1}
{noformat}
This can be solved by the v2 patch. The fix just disables the {{Overload}} check, which should not adversely affect any test, so it should be safe. [~elgoiri] any objections to moving ahead with v2?

> RBF: TestRouterRpc#testErasureCoding is flaky
> -
>
> Key: HDFS-14811
> URL: https://issues.apache.org/jira/browse/HDFS-14811
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Chen Zhang
> Assignee: Chen Zhang
> Priority: Major
> Attachments: HDFS-14811.001.patch, HDFS-14811.002.patch
>
> The failure reason:
> {code:java}
> 2019-09-01 18:19:20,940 [IPC Server handler 5 on default port 53140] INFO blockmanagement.BlockPlacementPolicy (BlockPlacementPolicyDefault.java:chooseRandom(838)) - [
> Node /default-rack/127.0.0.1:53148 [ ]
> Node /default-rack/127.0.0.1:53161 [ ]
> Node /default-rack/127.0.0.1:53157 [
> Datanode 127.0.0.1:53157 is not chosen since the node is too busy (load: 3 > 2.6665).
> Node /default-rack/127.0.0.1:53143 [ ]
> Node /default-rack/127.0.0.1:53165 [ ]
> 2019-09-01 18:19:20,940 [IPC Server handler 5 on default port 53140] INFO blockmanagement.BlockPlacementPolicy (BlockPlacementPolicyDefault.java:chooseRandom(846)) - Not enough replicas was chosen. Reason: {NODE_TOO_BUSY=1}
> 2019-09-01 18:19:20,941 [IPC Server handler 5 on default port 53140] WARN blockmanagement.BlockPlacementPolicy (BlockPlacementPolicyDefault.java:chooseTarget(449)) - Failed to place enough replicas, still in need of 1 to reach 6 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true)
> 2019-09-01 18:19:20,941 [IPC Server handler 5 on default port 53140] WARN protocol.BlockStoragePolicy (BlockStoragePolicy.java:chooseStorageTypes(161)) - Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=6, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
> 2019-09-01 18:19:20,941 [IPC Server handler 5 on default port 53140] WARN blockmanagement.BlockPlacementPolicy (BlockPlacementPolicyDefault.java:chooseTarget(449)) - Failed to place enough replicas, still in need of 1 to reach 6 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
> 2019-09-01 18:19:20,941 [IPC Server handler 5 on default port 53140] INFO ipc.Server (Server.java:logException(2982)) - IPC Server handler 5 on default port 53140, call Call#1270 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 127.0.0.1:53202
> java.io.IOException: File /testec/testfile2 could only be written to 5 of the 6 required nodes for RS-6-3-1024k. There are 6 datanode(s) running and 6 node(s) are excluded in this operation.
> at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:)
> at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2815)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:893)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:574)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:529)
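For context, the {{NODE_TOO_BUSY}} rejection above comes from the namenode's placement policy considering datanode load. A minimal sketch of the kind of setting such a fix would toggle, assuming the standard {{dfs.namenode.redundancy.considerLoad}} key applies here (the actual v2 patch may instead flip this programmatically in the test setup):

```xml
<!-- hdfs-site.xml fragment (illustrative): stop BlockPlacementPolicyDefault
     from rejecting datanodes as "too busy" based on their xceiver load. -->
<property>
  <name>dfs.namenode.redundancy.considerLoad</name>
  <value>false</value>
</property>
```

In a MiniDFSCluster-style test this would typically be set on the Configuration object before the cluster is started, so the overload check never excludes a lightly loaded test datanode.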
[jira] [Updated] (HDFS-15579) RBF: The constructor of PathLocation may got some misunderstanding
[ https://issues.apache.org/jira/browse/HDFS-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HDFS-15579:

Fix Version/s: 3.4.0
Hadoop Flags: Reviewed
Resolution: Fixed
Status: Resolved (was: Patch Available)

Committed to trunk. Thanks [~Symious] for the contribution and [~elgoiri] for the review!

> RBF: The constructor of PathLocation may got some misunderstanding
> --
>
> Key: HDFS-15579
> URL: https://issues.apache.org/jira/browse/HDFS-15579
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: rbf
> Reporter: Janus Chow
> Assignee: Janus Chow
> Priority: Minor
> Fix For: 3.4.0
>
> Attachments: HDFS-15579-001.patch, HDFS-15579-002.patch, HDFS-15579-003.patch, HDFS-15579-004.patch, HDFS-15579-005.patch
>
> There is a constructor of PathLocation as follows; it is for creating a new PathLocation with a prioritised nsId.
>
> {code:java}
> public PathLocation(PathLocation other, String firstNsId) {
>   this.sourcePath = other.sourcePath;
>   this.destOrder = other.destOrder;
>   this.destinations = orderedNamespaces(other.destinations, firstNsId);
> }
> {code}
> When I was reading the code of MultipleDestinationMountTableResolver, I thought this constructor was to create a PathLocation with an override destination. It took me a while before I realized this is a constructor to sort the destinations inside.
> I think this constructor could be clearer about its usage.

--
This message was sent by Atlassian Jira (v8.3.4#803005)
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
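For readers puzzling over the same constructor: a standalone sketch of what the reordering does. This is a hypothetical simplification (string nsIds instead of the actual RemoteLocation destinations, illustrative class and method names): entries matching {{firstNsId}} are promoted to the head of the list, everything else keeps its original relative order, and no destination is added or replaced.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/**
 * Hypothetical sketch of PathLocation's destination reordering: the
 * constructor does not override any destination, it only sorts the
 * existing ones so that firstNsId comes first.
 */
public class OrderedNamespacesSketch {
  static List<String> orderedNamespaces(List<String> destinations,
                                        String firstNsId) {
    List<String> first = new ArrayList<>();
    List<String> rest = new ArrayList<>();
    for (String nsId : destinations) {
      // Stable partition: promoted entries and the remainder each keep
      // their original relative order.
      (nsId.equals(firstNsId) ? first : rest).add(nsId);
    }
    first.addAll(rest);
    return first;
  }

  public static void main(String[] args) {
    List<String> dests = Arrays.asList("ns0", "ns1", "ns2");
    // "ns1" is promoted; "ns0" and "ns2" keep their order.
    System.out.println(orderedNamespaces(dests, "ns1"));
  }
}
```

This is why a name like "prioritise" reads more naturally than the constructor-with-override interpretation the reporter first assumed.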
[jira] [Commented] (HDFS-15569) Speed up the Storage#doRecover during datanode rolling upgrade
[ https://issues.apache.org/jira/browse/HDFS-15569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198849#comment-17198849 ] Wei-Chiu Chuang commented on HDFS-15569:

Nit:
{code}
LOG.info("Deleting storage directory {} from previous upgrade", rootPath);
{code}
This should probably be a LOG.warn(). Also, the message looks misleading; it should say something like "deleting storage ... failed". And why log rootPath instead of curTmp? The thread can take a while to complete, so it would be great to give it a meaningful name.

> Speed up the Storage#doRecover during datanode rolling upgrade
> ---
>
> Key: HDFS-15569
> URL: https://issues.apache.org/jira/browse/HDFS-15569
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Hemanth Boyina
> Assignee: Hemanth Boyina
> Priority: Major
> Attachments: HDFS-15569.001.patch, HDFS-15569.002.patch
>
> When upgrading a datanode from Hadoop 2.7.2 to 3.1.1, the upgrade failed because the JVM did not have enough memory. After adjusting the memory configuration and re-upgrading the datanode, the upgrade took much longer; on analysis we found that Storage#deleteDir took most of the time in the RECOVER_UPGRADE state:
> {code:java}
> "Thread-28" #270 daemon prio=5 os_prio=0 tid=0x7fed5a9b8000 nid=0x2b5c runnable [0x7fdcdad2a000]
> java.lang.Thread.State: RUNNABLE
> at java.io.UnixFileSystem.delete0(Native Method)
> at java.io.UnixFileSystem.delete(UnixFileSystem.java:265)
> at java.io.File.delete(File.java:1041)
> at org.apache.hadoop.fs.FileUtil.deleteImpl(FileUtil.java:229)
> at org.apache.hadoop.fs.FileUtil.fullyDeleteContents(FileUtil.java:270)
> at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:182)
> at org.apache.hadoop.fs.FileUtil.fullyDeleteContents(FileUtil.java:285)
> at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:182)
> at org.apache.hadoop.fs.FileUtil.fullyDeleteContents(FileUtil.java:285)
> at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:182)
> at org.apache.hadoop.fs.FileUtil.fullyDeleteContents(FileUtil.java:285)
> at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:182)
> at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:153)
> at org.apache.hadoop.hdfs.server.common.Storage.deleteDir(Storage.java:1348)
> at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.doRecover(Storage.java:782)
> at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadStorageDirectory(BlockPoolSliceStorage.java:174)
> at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadBpStorageDirectories(BlockPoolSliceStorage.java:224)
> at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.recoverTransitionRead(BlockPoolSliceStorage.java:253)
> at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadBlockPoolSliceStorage(DataStorage.java:455)
> at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:389)
> - locked <0x7fdf08ec7548> (a org.apache.hadoop.hdfs.server.datanode.DataStorage)
> at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:557)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1761)
> - locked <0x7fdf08ec7598> (a org.apache.hadoop.hdfs.server.datanode.DataNode)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1697)
> at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:392)
> at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:282)
> at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:822)
> at java.lang.Thread.run(Thread.java:748)
> {code}
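On the thread-naming nit above: a long-running cleanup is much easier to spot in a jstack when the worker thread carries a descriptive name instead of an anonymous "Thread-28". A minimal, generic illustration (not the actual Storage code; names are hypothetical):

```java
import java.io.File;

public class NamedCleanupThread {
  /**
   * Start a background worker with a name that describes its job, so a
   * thread dump immediately shows what is taking long. The deletion
   * itself is stubbed out here with a print.
   */
  public static Thread startDeleteThread(File dir) {
    Thread t = new Thread(
        () -> System.out.println("would delete " + dir),
        "previous-dir-cleanup-" + dir.getName());
    t.setDaemon(true);
    t.start();
    return t;
  }

  public static void main(String[] args) throws InterruptedException {
    Thread t = startDeleteThread(new File("/tmp/previous.tmp"));
    System.out.println(t.getName());
    t.join();
  }
}
```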
[jira] [Commented] (HDFS-15579) RBF: The constructor of PathLocation may got some misunderstanding
[ https://issues.apache.org/jira/browse/HDFS-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198847#comment-17198847 ] Janus Chow commented on HDFS-15579:

[~elgoiri] [~ayushtkn] Thanks for the review.
[jira] [Commented] (HDFS-15579) RBF: The constructor of PathLocation may got some misunderstanding
[ https://issues.apache.org/jira/browse/HDFS-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198778#comment-17198778 ] Íñigo Goiri commented on HDFS-15579:

+1 on [^HDFS-15579-005.patch].
[jira] [Commented] (HDFS-15442) Image upload may fail if dfs.image.transfer.chunksize wrongly set to negative value
[ https://issues.apache.org/jira/browse/HDFS-15442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198763#comment-17198763 ] Ayush Saxena commented on HDFS-15442:

The patch doesn't apply; can you rebase?

> Image upload may fail if dfs.image.transfer.chunksize wrongly set to negative value
> ---
>
> Key: HDFS-15442
> URL: https://issues.apache.org/jira/browse/HDFS-15442
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: AMC-team
> Priority: Major
> Attachments: HDFS-15442.000.patch
>
> In the current implementation of checkpoint image transfer, if the file length is bigger than the configured value of dfs.image.transfer.chunksize, chunked streaming mode is used to avoid internal buffering. This mode should be used only if more than chunkSize data is present to upload; otherwise the upload may sometimes not happen.
> {code:java}
> //TransferFsImage.java
> int chunkSize = (int) conf.getLongBytes(
>     DFSConfigKeys.DFS_IMAGE_TRANSFER_CHUNKSIZE_KEY,
>     DFSConfigKeys.DFS_IMAGE_TRANSFER_CHUNKSIZE_DEFAULT);
> if (imageFile.length() > chunkSize) {
>   // using chunked streaming mode to support upload of 2GB+ files and to
>   // avoid internal buffering.
>   // this mode should be used only if more than chunkSize data is present
>   // to upload. otherwise upload may not happen sometimes.
>   connection.setChunkedStreamingMode(chunkSize);
> }
> {code}
> There is no checking code for this parameter, so a user may accidentally set it to a wrong value. If the user sets chunkSize to a negative value, chunked streaming mode will always be used. In setChunkedStreamingMode(chunkSize) there is correction code: if the chunkSize is <= 0, it is changed to DEFAULT_CHUNK_SIZE.
> {code:java}
> public void setChunkedStreamingMode (int chunklen) {
>   if (connected) {
>     throw new IllegalStateException ("Can't set streaming mode: already connected");
>   }
>   if (fixedContentLength != -1 || fixedContentLengthLong != -1) {
>     throw new IllegalStateException ("Fixed length streaming mode set");
>   }
>   chunkLength = chunklen <= 0 ? DEFAULT_CHUNK_SIZE : chunklen;
> }
> {code}
> However,
> *if the user sets dfs.image.transfer.chunksize to a value that is <= 0, even images whose imageFile.length() < DEFAULT_CHUNK_SIZE will use chunked streaming mode and may fail the upload as mentioned above.* *(This scenario may not be common, but we can prevent users from setting this param to an extremely small value.)*
> *How to fix:*
> Add checking or correction code right after parsing the config value, before the value is really used (setChunkedStreamingMode).
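The suggested guard can be illustrated in isolation (a hypothetical helper, not the actual TransferFsImage code): fall back to the non-chunked upload path unless a positive chunk size is configured and the image actually exceeds one chunk, so a negative configured value can never force chunked streaming.

```java
public class ChunkSizeGuard {
  /**
   * Hypothetical check mirroring the proposed fix: chunked streaming
   * only makes sense when the configured chunk size is positive and the
   * image is larger than one chunk. A non-positive chunkSize (e.g. a
   * misconfigured negative value) is treated as "not configured".
   */
  static boolean shouldUseChunkedStreaming(long imageLength, int chunkSize) {
    return chunkSize > 0 && imageLength > chunkSize;
  }

  public static void main(String[] args) {
    // 3 GiB image with a 1 MiB chunk size: chunked streaming applies.
    System.out.println(shouldUseChunkedStreaming(3L << 30, 1 << 20));
    // Negative chunk size from a bad config: never use chunked mode.
    System.out.println(shouldUseChunkedStreaming(512, -4096));
  }
}
```

With this guard in place, setChunkedStreamingMode is only ever called with a positive chunk size, so its silent fallback to DEFAULT_CHUNK_SIZE is never triggered for small images.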
[jira] [Commented] (HDFS-15579) RBF: The constructor of PathLocation may got some misunderstanding
[ https://issues.apache.org/jira/browse/HDFS-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198761#comment-17198761 ] Ayush Saxena commented on HDFS-15579:

Test failure is unrelated. v5 LGTM +1
[jira] [Commented] (HDFS-15579) RBF: The constructor of PathLocation may got some misunderstanding
[ https://issues.apache.org/jira/browse/HDFS-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198754#comment-17198754 ] Janus Chow commented on HDFS-15579:

Another randomly failed unit test; should I create a new patch to trigger the QA test?
[jira] [Commented] (HDFS-15579) RBF: The constructor of PathLocation may got some misunderstanding
[ https://issues.apache.org/jira/browse/HDFS-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198724#comment-17198724 ] Hadoop QA commented on HDFS-15579:

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 31s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 22m 41s | trunk passed |
| +1 | compile | 0m 39s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | compile | 0m 33s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 | checkstyle | 0m 22s | trunk passed |
| +1 | mvnsite | 0m 36s | trunk passed |
| +1 | shadedclient | 16m 42s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 35s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | javadoc | 0m 51s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| 0 | spotbugs | 1m 17s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 1m 15s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 31s | the patch passed |
| +1 | compile | 0m 35s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | javac | 0m 35s | the patch passed |
| +1 | compile | 0m 28s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 | javac | 0m 28s | the patch passed |
| +1 | checkstyle | 0m 16s | the patch passed |
| +1 | mvnsite | 0m 31s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 15m 29s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 33s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | javadoc | 0m 52s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 | findbugs | 1m 15s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 9m 52s | hadoop-hdfs-rbf in the patch passed. |
| +1 | asflicense | 0m 45s | The patch does not generate ASF License warnings. |
[jira] [Commented] (HDFS-15098) Add SM4 encryption method for HDFS
[ https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198718#comment-17198718 ] Hadoop QA commented on HDFS-15098:

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 23s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| 0 | markdownlint | 0m 1s | markdownlint was not available. |
| 0 | buf | 0m 1s | buf was not available. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 25s | Maven dependency ordering for branch |
| +1 | mvninstall | 22m 5s | trunk passed |
| +1 | compile | 21m 36s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | compile | 18m 11s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 | checkstyle | 2m 53s | trunk passed |
| +1 | mvnsite | 4m 0s | trunk passed |
| +1 | shadedclient | 22m 56s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 2m 10s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | javadoc | 3m 34s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| 0 | spotbugs | 3m 17s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 8m 23s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 25s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 56s | the patch passed |
| +1 | compile | 21m 30s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| -1 | cc | 21m 30s | root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 25 new + 138 unchanged - 25 fixed = 163 total (was 163) |
| +1 | golang | 21m 30s | the patch passed |
| -1 | javac | 21m 30s | root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 1 new + 2053 unchanged - 5 fixed = 2054 total (was 2058) |
| +1 | compile | 18m 19s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| -1 | cc | 18m 20s | root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 11 new + 152 unchanged - 11 fixed = 163 total (was 163) |
| +1 | golang | 18m 19s | the patch passed |
| +1 | javac | | |
[jira] [Updated] (HDFS-15579) RBF: The constructor of PathLocation may got some misunderstanding
[ https://issues.apache.org/jira/browse/HDFS-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Janus Chow updated HDFS-15579: -- Attachment: HDFS-15579-005.patch > RBF: The constructor of PathLocation may got some misunderstanding > -- > > Key: HDFS-15579 > URL: https://issues.apache.org/jira/browse/HDFS-15579 > Project: Hadoop HDFS > Issue Type: Improvement > Components: rbf >Reporter: Janus Chow >Assignee: Janus Chow >Priority: Minor > Attachments: HDFS-15579-001.patch, HDFS-15579-002.patch, > HDFS-15579-003.patch, HDFS-15579-004.patch, HDFS-15579-005.patch > > > There is a constructor of PathLocation, shown below, for creating a new > PathLocation with a prioritised nsId. > > {code:java} > public PathLocation(PathLocation other, String firstNsId) { > this.sourcePath = other.sourcePath; > this.destOrder = other.destOrder; > this.destinations = orderedNamespaces(other.destinations, firstNsId); > } > {code} > When I was reading the code of MultipleDestinationMountTableResolver, I > thought this constructor was meant to create a PathLocation with an overridden > destination. It took me a while to realize it is a constructor that sorts the > destinations internally. > Perhaps this constructor could be clearer about its usage? > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
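The clarity suggestion above can be sketched with a simplified, hypothetical model (this is not the actual HDFS class; `PathLocationSketch`, `prioritizing`, and the plain `String` destinations are illustrative stand-ins): a named static factory states the reordering intent that a constructor overload hides.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch, not the real org.apache.hadoop.hdfs PathLocation:
// a named factory makes the "reorder, don't replace" semantics explicit.
public class PathLocationSketch {
    private final List<String> destinations;

    private PathLocationSketch(List<String> destinations) {
        this.destinations = Collections.unmodifiableList(destinations);
    }

    public static PathLocationSketch of(List<String> destinations) {
        return new PathLocationSketch(new ArrayList<>(destinations));
    }

    /**
     * Same destinations as {@code other}, but with {@code firstNsId} moved
     * to the front when present; otherwise the order is unchanged.
     * The name documents what the constructor overload left implicit.
     */
    public static PathLocationSketch prioritizing(PathLocationSketch other,
                                                  String firstNsId) {
        List<String> ordered = new ArrayList<>(other.destinations);
        if (ordered.remove(firstNsId)) {
            ordered.add(0, firstNsId);
        }
        return new PathLocationSketch(ordered);
    }

    public List<String> getDestinations() {
        return destinations;
    }

    public static void main(String[] args) {
        PathLocationSketch base =
            PathLocationSketch.of(List.of("ns0", "ns1", "ns2"));
        // prints [ns1, ns0, ns2]
        System.out.println(
            PathLocationSketch.prioritizing(base, "ns1").getDestinations());
    }
}
```

A caller reading `PathLocationSketch.prioritizing(loc, nsId)` cannot mistake it for an override of the destination, which is the misunderstanding the issue describes.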
[jira] [Commented] (HDFS-15579) RBF: The constructor of PathLocation may got some misunderstanding
[ https://issues.apache.org/jira/browse/HDFS-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198709#comment-17198709 ] Hadoop QA commented on HDFS-15579: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 2s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 31s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 1m 25s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 21s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 20s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 9s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 19s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} |
[jira] [Updated] (HDFS-15579) RBF: The constructor of PathLocation may got some misunderstanding
[ https://issues.apache.org/jira/browse/HDFS-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Janus Chow updated HDFS-15579: -- Attachment: HDFS-15579-004.patch
[jira] [Commented] (HDFS-15579) RBF: The constructor of PathLocation may got some misunderstanding
[ https://issues.apache.org/jira/browse/HDFS-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198696#comment-17198696 ] Janus Chow commented on HDFS-15579: --- Got it, resolved the checkstyle issues and added a test in "TestMultipleDestinationResolver".
[jira] [Commented] (HDFS-15098) Add SM4 encryption method for HDFS
[ https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198674#comment-17198674 ] liusheng commented on HDFS-15098: - Hi [~vinayakumarb], Thanks for your review. I have updated the patch according to your comments; please take a look again. Thank you! > Add SM4 encryption method for HDFS > -- > > Key: HDFS-15098 > URL: https://issues.apache.org/jira/browse/HDFS-15098 > Project: Hadoop HDFS > Issue Type: New Feature >Affects Versions: 3.4.0 >Reporter: liusheng >Assignee: liusheng >Priority: Major > Labels: sm4 > Attachments: HDFS-15098.001.patch, HDFS-15098.002.patch, > HDFS-15098.003.patch, HDFS-15098.004.patch, HDFS-15098.005.patch, > HDFS-15098.006.patch, HDFS-15098.007.patch, HDFS-15098.008.patch, > HDFS-15098.009.patch, image-2020-08-19-16-54-41-341.png > > Time Spent: 20m > Remaining Estimate: 0h > > SM4 (formerly SMS4) is a block cipher used in the Chinese National Standard > for Wireless LAN WAPI (Wired Authentication and Privacy Infrastructure). > SM4 was a cipher proposed for the IEEE 802.11i standard, but has so far > been rejected by ISO. One of the reasons for the rejection has been > opposition to the WAPI fast-track proposal by the IEEE. Please see: > [https://en.wikipedia.org/wiki/SM4_(cipher)] > > *Use SM4 on HDFS as follows:* > 1. Configure Hadoop KMS > 2. Test HDFS SM4: > hadoop key create key1 -cipher 'SM4/CTR/NoPadding' > hdfs dfs -mkdir /benchmarks > hdfs crypto -createZone -keyName key1 -path /benchmarks > *Requires:* > 1. openssl version >= 1.1.1
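The cipher-suite string 'SM4/CTR/NoPadding' above follows the standard JCE transformation pattern (algorithm/mode/padding). As a hedged illustration of that pattern only: the stock JDK providers do not ship SM4, so this sketch uses AES/CTR as a stand-in; with a provider that supports SM4 registered (for example Bouncy Castle), the transformation "SM4/CTR/NoPadding" would be requested the same way. The key, IV, and sample text here are arbitrary test values, not anything from the patch.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Sketch of the JCE usage pattern behind a CTR-mode cipher suite.
// AES stands in for SM4 here; SM4 also uses a 128-bit key and a
// 128-bit block, so the shapes below carry over unchanged.
public class CtrCipherSketch {
    public static byte[] run(int mode, byte[] key, byte[] iv, byte[] data)
            throws Exception {
        Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
        cipher.init(mode, new SecretKeySpec(key, "AES"),
                new IvParameterSpec(iv));
        return cipher.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = new byte[16]; // 128-bit key (SM4 keys are also 128-bit)
        byte[] iv = new byte[16];  // CTR counter block
        byte[] plain = "hello hdfs".getBytes(StandardCharsets.UTF_8);
        byte[] enc = run(Cipher.ENCRYPT_MODE, key, iv, plain);
        byte[] dec = run(Cipher.DECRYPT_MODE, key, iv, enc);
        System.out.println(Arrays.equals(plain, dec)); // prints "true"
    }
}
```

Because CTR is a stream mode, ciphertext length equals plaintext length and no padding is applied, which is why the transformation ends in NoPadding.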
[jira] [Updated] (HDFS-15098) Add SM4 encryption method for HDFS
[ https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] liusheng updated HDFS-15098: Attachment: HDFS-15098.009.patch Status: Patch Available (was: Open)
[jira] [Updated] (HDFS-15098) Add SM4 encryption method for HDFS
[ https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] liusheng updated HDFS-15098: Attachment: (was: HDFS-15098.009.patch)
[jira] [Updated] (HDFS-15098) Add SM4 encryption method for HDFS
[ https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] liusheng updated HDFS-15098: Status: Open (was: Patch Available)