[jira] [Commented] (HDDS-4395) Ozone Data Generator for Fast Scale Test
[ https://issues.apache.org/jira/browse/HDDS-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17222341#comment-17222341 ]

Wei-Chiu Chuang commented on HDDS-4395:
---------------------------------------

The code is currently in my personal repo: https://github.com/jojochuang/hadoop-ozone/tree/containergen
I will rebase the code against master and then open a PR later.

> Ozone Data Generator for Fast Scale Test
> ----------------------------------------
>
>                 Key: HDDS-4395
>                 URL: https://issues.apache.org/jira/browse/HDDS-4395
>             Project: Hadoop Distributed Data Store
>          Issue Type: New Feature
>          Components: Tools
>    Affects Versions: 1.0.0
>            Reporter: Wei-Chiu Chuang
>            Assignee: Wei-Chiu Chuang
>            Priority: Major
>         Attachments: Ozone Data Generator for Fast Scale Test.pdf
>
> I've been working on this fun project and would like to share it with the community.
>
> h1. Synopsis
> We want to prove that Ozone runs well at scale, both in terms of number of keys (billions of keys) and with dense DataNodes where each DN has hundreds of TB or even PB-scale capacity.
>
> h1. Challenge: Data generation
> The challenge is to generate a huge data set fast enough that we can benchmark the system quickly. No existing tool is capable at this scale.
>
> h1. Proposal
> The major bottleneck is OM's key insertion performance. In addition, Ozone uses a single pipeline to write data unless multi-raft is enabled.
>
> Instead of using Ozone's client API to generate data, we should write directly to the OM, SCM, and DN RocksDB instances. RocksDB can support [up to a million key|https://github.com/facebook/rocksdb/wiki/Performance-Benchmarks] bulk load operations.
>
> Similarly, we can skip the normal Ozone client write path and populate the container db and block files directly.
>
> (more details in the design doc)

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
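[Editorial illustration] The proposal above amounts to generating keys and committing them straight into the key table in fixed-size batches, bypassing the client write path. A minimal sketch of that batching idea is below; it uses an in-memory TreeMap as a stand-in for the OM RocksDB instance, and the `BulkKeyLoader` class, the batch size of 1000, and the key format are illustrative assumptions, not Ozone APIs.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Illustrative only: a TreeMap stands in for the OM RocksDB key table.
// In the real proposal, each batch would become a RocksDB WriteBatch
// committed with a single write() call.
public class BulkKeyLoader {

  static final int BATCH_SIZE = 1000; // assumed batch size

  // Stand-in "database": sorted key -> value, like a RocksDB column family.
  final TreeMap<String, String> fakeOmDb = new TreeMap<>();

  /** Generate keyCount synthetic keys and commit them batch by batch. */
  public int load(String volume, String bucket, int keyCount) {
    List<String> batch = new ArrayList<>(BATCH_SIZE);
    int batchesCommitted = 0;
    for (int i = 0; i < keyCount; i++) {
      batch.add("/" + volume + "/" + bucket + "/key-" + i);
      if (batch.size() == BATCH_SIZE) {
        commit(batch);
        batchesCommitted++;
        batch.clear();
      }
    }
    if (!batch.isEmpty()) { // final partial batch
      commit(batch);
      batchesCommitted++;
    }
    return batchesCommitted;
  }

  private void commit(List<String> batch) {
    // One atomic write per batch; with RocksDB this would be
    // db.write(writeOptions, writeBatch).
    for (String key : batch) {
      fakeOmDb.put(key, "keyInfo");
    }
  }

  public static void main(String[] args) {
    BulkKeyLoader loader = new BulkKeyLoader();
    int batches = loader.load("vol1", "bucket1", 2500);
    System.out.println(batches + " batches, " + loader.fakeOmDb.size() + " keys");
  }
}
```

The point of batching is that each commit amortizes write overhead across many keys, which is what makes direct RocksDB bulk load so much faster than per-key client RPCs.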
[jira] [Updated] (HDDS-4395) Ozone Data Generator for Fast Scale Test
[ https://issues.apache.org/jira/browse/HDDS-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HDDS-4395:
----------------------------------
    Attachment: Ozone Data Generator for Fast Scale Test.pdf
[jira] [Created] (HDDS-4395) Ozone Data Generator for Fast Scale Test
Wei-Chiu Chuang created HDDS-4395:
----------------------------------

             Summary: Ozone Data Generator for Fast Scale Test
                 Key: HDDS-4395
                 URL: https://issues.apache.org/jira/browse/HDDS-4395
             Project: Hadoop Distributed Data Store
          Issue Type: New Feature
          Components: Tools
    Affects Versions: 1.0.0
            Reporter: Wei-Chiu Chuang
[jira] [Assigned] (HDDS-4395) Ozone Data Generator for Fast Scale Test
[ https://issues.apache.org/jira/browse/HDDS-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang reassigned HDDS-4395:
-------------------------------------
    Assignee: Wei-Chiu Chuang
[jira] [Created] (HDDS-4391) UnixPath.toUri() is expensive
Wei-Chiu Chuang created HDDS-4391:
----------------------------------

             Summary: UnixPath.toUri() is expensive
                 Key: HDDS-4391
                 URL: https://issues.apache.org/jira/browse/HDDS-4391
             Project: Hadoop Distributed Data Store
          Issue Type: Improvement
          Components: Ozone Manager
    Affects Versions: 1.0.0
            Reporter: Wei-Chiu Chuang
         Attachments: Screen Shot 2020-10-20 at 12.34.52 PM.png

OM calls this API to look up a key. The call accounts for roughly 20% of OM request handler overhead. It would be great if we could get rid of this call.

!Screen Shot 2020-10-20 at 12.34.52 PM.png!

OMClientRequest.java
{code:java}
  @SuppressFBWarnings("DMI_HARDCODED_ABSOLUTE_FILENAME")
  public static String validateAndNormalizeKey(String keyName)
      throws OMException {
    String normalizedKeyName;
    if (keyName.startsWith(OM_KEY_PREFIX)) {
      normalizedKeyName = Paths.get(keyName).toUri().normalize().getPath();
    } else {
      normalizedKeyName = Paths.get(OM_KEY_PREFIX, keyName).toUri()
          .normalize().getPath();
    }
{code}
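[Editorial illustration] One way to avoid the `Paths.get(...).toUri()` round trip profiled above is to normalize the key name with plain string operations. The sketch below is illustrative of that idea only; `KeyPathNormalizer` is a hypothetical class, not the fix that was actually committed, and it assumes keys use `/` separators with `.`/`..` segments resolved the usual way.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: normalize a key name into a "/"-rooted path without going
// through java.nio Path -> URI conversion, which profiling showed to
// be a hot spot in the OM request handler.
public class KeyPathNormalizer {

  /** Collapse empty and "." segments and resolve ".." against the stack. */
  public static String normalize(String keyName) {
    Deque<String> stack = new ArrayDeque<>();
    for (String part : keyName.split("/")) {
      if (part.isEmpty() || part.equals(".")) {
        continue;          // skip empty segments ("//") and "."
      }
      if (part.equals("..")) {
        stack.pollLast();  // ".." pops the previous segment
      } else {
        stack.addLast(part);
      }
    }
    return "/" + String.join("/", stack);
  }

  public static void main(String[] args) {
    // prints "/vol1/bucket1/key1"
    System.out.println(normalize("/vol1/bucket1//dir/../key1"));
  }
}
```

Whether this is semantically identical to `toUri().normalize().getPath()` for every corner case (percent-encoded characters, trailing slashes) would need to be verified against the OM test suite before swapping it in.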
[jira] [Updated] (HDDS-4391) UnixPath.toUri() is expensive
[ https://issues.apache.org/jira/browse/HDDS-4391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HDDS-4391:
----------------------------------
    Target Version/s: 1.1.0
[jira] [Updated] (HDDS-4363) Add metric to track the number of RocksDB open/close operations
[ https://issues.apache.org/jira/browse/HDDS-4363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HDDS-4363:
----------------------------------
    Target Version/s: 1.1.0

> Add metric to track the number of RocksDB open/close operations
> ---------------------------------------------------------------
>
>                 Key: HDDS-4363
>                 URL: https://issues.apache.org/jira/browse/HDDS-4363
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>          Components: Ozone Datanode
>    Affects Versions: 1.0.0
>            Reporter: Wei-Chiu Chuang
>            Assignee: Aryan Gupta
>            Priority: Major
>
> We are benchmarking Ozone performance and realized that RocksDB open/close operations have a huge impact on performance. Each db open takes about 70ms on average, and each close takes about 1ms on average.
>
> Having metrics on these operations will help us understand DataNode performance problems.
[jira] [Created] (HDDS-4363) Add metric to track the number of RocksDB open/close operations
Wei-Chiu Chuang created HDDS-4363:
----------------------------------

             Summary: Add metric to track the number of RocksDB open/close operations
                 Key: HDDS-4363
                 URL: https://issues.apache.org/jira/browse/HDDS-4363
             Project: Hadoop Distributed Data Store
          Issue Type: Improvement
          Components: Ozone Datanode
    Affects Versions: 1.0.0
            Reporter: Wei-Chiu Chuang
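[Editorial illustration] The metric requested above is a count plus accumulated latency around each db open and close. A minimal sketch of such a wrapper is below; `DbOpenCloseMetrics` and its method names are assumptions for illustration, not the Ozone metrics API that the eventual patch used.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Supplier;

// Sketch of the proposed metric: count RocksDB open/close calls and
// accumulate their elapsed time, so a 70ms-per-open pattern becomes
// visible without profiling.
public class DbOpenCloseMetrics {

  final AtomicLong openCount = new AtomicLong();
  final AtomicLong closeCount = new AtomicLong();
  final AtomicLong openNanos = new AtomicLong();
  final AtomicLong closeNanos = new AtomicLong();

  /** Wrap a db-open call, recording count and elapsed time. */
  public <T> T timedOpen(Supplier<T> openCall) {
    long start = System.nanoTime();
    try {
      return openCall.get();
    } finally {
      openNanos.addAndGet(System.nanoTime() - start);
      openCount.incrementAndGet();
    }
  }

  /** Wrap a db-close call, recording count and elapsed time. */
  public void timedClose(Runnable closeCall) {
    long start = System.nanoTime();
    try {
      closeCall.run();
    } finally {
      closeNanos.addAndGet(System.nanoTime() - start);
      closeCount.incrementAndGet();
    }
  }

  public static void main(String[] args) {
    DbOpenCloseMetrics metrics = new DbOpenCloseMetrics();
    Object db = metrics.timedOpen(Object::new); // stand-in for RocksDB open
    metrics.timedClose(() -> { });              // stand-in for db.close()
    System.out.println("opens=" + metrics.openCount + " closes=" + metrics.closeCount);
  }
}
```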
[jira] [Resolved] (HDDS-4170) Fix typo in method description.
[ https://issues.apache.org/jira/browse/HDDS-4170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang resolved HDDS-4170.
-----------------------------------
    Fix Version/s: 1.1.0
       Resolution: Fixed

[~harinder.s.bedi] thanks for your contribution. Added you to the contributor list and assigned the Jira to you.

> Fix typo in method description.
> -------------------------------
>
>                 Key: HDDS-4170
>                 URL: https://issues.apache.org/jira/browse/HDDS-4170
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>            Reporter: Neo Yang
>            Assignee: Harinder Singh Bedi
>            Priority: Trivial
>              Labels: newbie, pull-request-available
>             Fix For: 1.1.0
>
> [In this line|https://github.com/apache/hadoop-ozone/blob/master/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/utils/CertificateCodec.java#L288], the word _X509Ceritificate_ is misspelled; it should be "X509Certificate".
[jira] [Assigned] (HDDS-4170) Fix typo in method description.
[ https://issues.apache.org/jira/browse/HDDS-4170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang reassigned HDDS-4170:
-------------------------------------
    Assignee: Harinder Singh Bedi
[jira] [Created] (HDDS-4356) SCM is flooded with useless "Deleting blocks" messages
Wei-Chiu Chuang created HDDS-4356:
----------------------------------

             Summary: SCM is flooded with useless "Deleting blocks" messages
                 Key: HDDS-4356
                 URL: https://issues.apache.org/jira/browse/HDDS-4356
             Project: Hadoop Distributed Data Store
          Issue Type: Improvement
          Components: SCM
    Affects Versions: 1.0.0
            Reporter: Wei-Chiu Chuang

Testing a 1.0.0 SCM. I'm seeing these messages flood the SCM log file when a dead DN is detected:

{noformat}
2020-10-19 13:48:19,642 INFO org.apache.hadoop.hdds.scm.node.DeadNodeHandler: A dead datanode is detected. 9b27c38d-9104-491b-b76b-959dc9dd06a2{ip: 10.12.1.82, host: rhel12.ozone.local, networkLocation: /default, certSerialId: null}
2020-10-19 13:48:24,894 INFO org.apache.hadoop.hdds.scm.server.SCMBlockProtocolServer: SCM is informed by OM to delete 1000 blocks
2020-10-19 13:48:24,894 INFO org.apache.hadoop.hdds.scm.block.BlockManagerImpl: Deleting blocks
2020-10-19 13:48:24,894 INFO org.apache.hadoop.hdds.scm.block.BlockManagerImpl: Deleting blocks
2020-10-19 13:48:24,894 INFO org.apache.hadoop.hdds.scm.block.BlockManagerImpl: Deleting blocks
2020-10-19 13:48:24,894 INFO org.apache.hadoop.hdds.scm.block.BlockManagerImpl: Deleting blocks
2020-10-19 13:48:24,894 INFO org.apache.hadoop.hdds.scm.block.BlockManagerImpl: Deleting blocks
2020-10-19 13:48:24,894 INFO org.apache.hadoop.hdds.scm.block.BlockManagerImpl: Deleting blocks
2020-10-19 13:48:24,894 INFO org.apache.hadoop.hdds.scm.block.BlockManagerImpl: Deleting blocks
2020-10-19 13:48:24,895 INFO org.apache.hadoop.hdds.scm.block.BlockManagerImpl: Deleting blocks
2020-10-19 13:48:24,895 INFO org.apache.hadoop.hdds.scm.block.BlockManagerImpl: Deleting blocks
2020-10-19 13:48:24,895 INFO org.apache.hadoop.hdds.scm.block.BlockManagerImpl: Deleting blocks
{noformat}

SCM deletes at most 1000 blocks at a time, and the "Deleting blocks" message repeats 1000 times. Worse, it doesn't give any useful information (at the very least, it should print the block id).
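[Editorial illustration] One way to tone this down is to accumulate the per-batch detail and emit a single summary line per delete request. The sketch below shows the pattern; `DeleteBlocksLogger` and its method names are hypothetical, not the change that was made to `BlockManagerImpl`.

```java
// Sketch of collapsing the repeated per-call "Deleting blocks" INFO
// lines into one summary line per request. Per-batch detail (including
// block ids) could instead be logged at DEBUG level.
public class DeleteBlocksLogger {

  private long blocksDeleted;
  private long batches;

  /** Record one delete batch instead of logging it immediately. */
  public void recordBatch(int blockCount) {
    blocksDeleted += blockCount;
    batches++;
  }

  /** Build one summary line for the whole request. */
  public String summary() {
    return "Deleted " + blocksDeleted + " blocks in " + batches + " batches";
  }

  public static void main(String[] args) {
    DeleteBlocksLogger logger = new DeleteBlocksLogger();
    for (int i = 0; i < 10; i++) {
      logger.recordBatch(100); // each batch would previously log its own INFO line
    }
    // prints "Deleted 1000 blocks in 10 batches"
    System.out.println(logger.summary());
  }
}
```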
[jira] [Commented] (HDDS-4164) OM client request fails with "failed to commit as key is not found in OpenKey table"
[ https://issues.apache.org/jira/browse/HDDS-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17213426#comment-17213426 ]

Wei-Chiu Chuang commented on HDDS-4164:
---------------------------------------

I hit this exact same issue. After patching my cluster with HDDS-4262, the bug went away. So I think we're good to close this one.

> OM client request fails with "failed to commit as key is not found in OpenKey table"
> ------------------------------------------------------------------------------------
>
>                 Key: HDDS-4164
>                 URL: https://issues.apache.org/jira/browse/HDDS-4164
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: OM HA
>            Reporter: Lokesh Jain
>            Assignee: Bharat Viswanadham
>            Priority: Blocker
>
> OM client request fails with "failed to commit as key is not found in OpenKey table"
> {code:java}
> 20/08/28 03:21:53 WARN retry.RetryInvocationHandler: A failover has occurred since the start of call #28868 $Proxy17.submitRequest over nodeId=om3,nodeAddress=vc1330.halxg.cloudera.com:9862
> 20/08/28 03:21:53 WARN retry.RetryInvocationHandler: A failover has occurred since the start of call #28870 $Proxy17.submitRequest over nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:53 WARN retry.RetryInvocationHandler: A failover has occurred since the start of call #28869 $Proxy17.submitRequest over nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred since the start of call #28871 $Proxy17.submitRequest over nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred since the start of call #28872 $Proxy17.submitRequest over nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred since the start of call #28866 $Proxy17.submitRequest over nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred since the start of call #28867 $Proxy17.submitRequest over nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred since the start of call #28874 $Proxy17.submitRequest over nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred since the start of call #28875 $Proxy17.submitRequest over nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 ERROR freon.BaseFreonGenerator: Error on executing task 14424
> KEY_NOT_FOUND org.apache.hadoop.ozone.om.exceptions.OMException: Failed to commit key, as /vol1/bucket1/akjkdz4hoj/14424/104766512182520809entry is not found in the OpenKey table
> 	at org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.handleError(OzoneManagerProtocolClientSideTranslatorPB.java:593)
> 	at org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.commitKey(OzoneManagerProtocolClientSideTranslatorPB.java:650)
> 	at org.apache.hadoop.ozone.client.io.BlockOutputStreamEntryPool.commitKey(BlockOutputStreamEntryPool.java:306)
> 	at org.apache.hadoop.ozone.client.io.KeyOutputStream.close(KeyOutputStream.java:514)
> 	at org.apache.hadoop.ozone.client.io.OzoneOutputStream.close(OzoneOutputStream.java:60)
> 	at org.apache.hadoop.ozone.freon.OzoneClientKeyGenerator.lambda$createKey$0(OzoneClientKeyGenerator.java:118)
> 	at com.codahale.metrics.Timer.time(Timer.java:101)
> 	at org.apache.hadoop.ozone.freon.OzoneClientKeyGenerator.createKey(OzoneClientKeyGenerator.java:113)
> 	at org.apache.hadoop.ozone.freon.BaseFreonGenerator.tryNextTask(BaseFreonGenerator.java:178)
> 	at org.apache.hadoop.ozone.freon.BaseFreonGenerator.taskLoop(BaseFreonGenerator.java:167)
> 	at org.apache.hadoop.ozone.freon.BaseFreonGenerator.lambda$startTaskRunners$0(BaseFreonGenerator.java:150)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 	at java.lang.Thread.run(Thread.java:748)
> {code}
[jira] [Updated] (HDDS-4338) SCM web UI banner shows "HDFS SCM"
[ https://issues.apache.org/jira/browse/HDDS-4338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HDDS-4338:
----------------------------------
    Affects Version/s: 1.0.0

> SCM web UI banner shows "HDFS SCM"
> ----------------------------------
>
>                 Key: HDDS-4338
>                 URL: https://issues.apache.org/jira/browse/HDDS-4338
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>    Affects Versions: 1.0.0
>            Reporter: Wei-Chiu Chuang
>            Priority: Trivial
>         Attachments: Screen Shot 2020-10-12 at 6.42.31 PM.png
>
> !Screen Shot 2020-10-12 at 6.42.31 PM.png! Let's call it Ozone SCM, shall we?
[jira] [Updated] (HDDS-4338) SCM web UI banner shows "HDFS SCM"
[ https://issues.apache.org/jira/browse/HDDS-4338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HDDS-4338:
----------------------------------
    Target Version/s: 1.1.0
[jira] [Created] (HDDS-4338) SCM web UI banner shows "HDFS SCM"
Wei-Chiu Chuang created HDDS-4338:
----------------------------------

             Summary: SCM web UI banner shows "HDFS SCM"
                 Key: HDDS-4338
                 URL: https://issues.apache.org/jira/browse/HDDS-4338
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
            Reporter: Wei-Chiu Chuang
         Attachments: Screen Shot 2020-10-12 at 6.42.31 PM.png
[jira] [Commented] (HDDS-4327) Potential resource leakage using BatchOperation
[ https://issues.apache.org/jira/browse/HDDS-4327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17210488#comment-17210488 ]

Wei-Chiu Chuang commented on HDDS-4327:
---------------------------------------

One of them is in the code:
https://github.com/apache/hadoop-ozone/blob/f25418329cfc5d7194c632b72a9617f5bb7a4bb6/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java#L3569

The others are mostly in tests.

> Potential resource leakage using BatchOperation
> -----------------------------------------------
>
>                 Key: HDDS-4327
>                 URL: https://issues.apache.org/jira/browse/HDDS-4327
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Wei-Chiu Chuang
>            Priority: Major
>
> There are a number of places in the code where BatchOperation is used but not closed. As a best practice, it is better to close them explicitly.
>
> I have stress test code that uses BatchOperation to insert into the OM RocksDB. Without closing BatchOperation explicitly, the process crashes after just a few minutes.
[jira] [Created] (HDDS-4327) Potential resource leakage using BatchOperation
Wei-Chiu Chuang created HDDS-4327:
----------------------------------

             Summary: Potential resource leakage using BatchOperation
                 Key: HDDS-4327
                 URL: https://issues.apache.org/jira/browse/HDDS-4327
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
            Reporter: Wei-Chiu Chuang
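[Editorial illustration] The fix suggested above is essentially try-with-resources: close every batch deterministically even when the commit throws. The sketch below demonstrates the pattern with `StubBatchOperation`, a hypothetical stand-in for Ozone's `BatchOperation`, which holds native RocksDB memory until closed.

```java
// Sketch of closing a batch deterministically with try-with-resources.
// StubBatchOperation is a hypothetical stand-in for Ozone's
// BatchOperation; in the real code the unreleased resource is native
// RocksDB WriteBatch memory, which is why leaks crash the process.
public class BatchOperationExample {

  static class StubBatchOperation implements AutoCloseable {
    boolean closed;
    int puts;

    void put(String key, String value) {
      puts++; // real code would stage the write in the batch
    }

    @Override
    public void close() {
      closed = true; // real code would release native memory here
    }
  }

  static StubBatchOperation lastBatch; // kept only so main can inspect it

  /** Stage writes in a batch and guarantee the batch is closed. */
  public static void writeBatch(int keyCount) {
    try (StubBatchOperation batch = new StubBatchOperation()) {
      lastBatch = batch;
      for (int i = 0; i < keyCount; i++) {
        batch.put("key-" + i, "value");
      }
      // commit would happen here; close() still runs if commit throws
    }
  }

  public static void main(String[] args) {
    writeBatch(5);
    // prints "closed=true puts=5"
    System.out.println("closed=" + lastBatch.closed + " puts=" + lastBatch.puts);
  }
}
```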
[jira] [Updated] (HDDS-4269) Ozone DataNode thinks a volume is failed if an unexpected file is in the HDDS root directory
[ https://issues.apache.org/jira/browse/HDDS-4269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HDDS-4269:
----------------------------------
    Labels: newbie  (was: )

> Ozone DataNode thinks a volume is failed if an unexpected file is in the HDDS root directory
> --------------------------------------------------------------------------------------------
>
>                 Key: HDDS-4269
>                 URL: https://issues.apache.org/jira/browse/HDDS-4269
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone Datanode
>    Affects Versions: 1.1.0
>            Reporter: Wei-Chiu Chuang
>            Priority: Major
>              Labels: newbie
>
> Took me some time to debug a trivial bug.
> The DataNode crashes after this mysterious error, with no explanation:
> {noformat}
> 10:11:44.382 PM	INFO	MutableVolumeSet	Moving Volume : /var/lib/hadoop-ozone/fake_datanode/data/hdds to failed Volumes
> 10:11:46.287 PM	ERROR	StateContext	Critical error occurred in StateMachine, setting shutDownMachine
> 10:11:46.287 PM	ERROR	DatanodeStateMachine	DatanodeStateMachine Shutdown due to an critical error
> {noformat}
> It turns out that if there are unexpected files under the hdds directory ($hdds.datanode.dir/hdds), the DN thinks the volume is bad and moves it to the failed volume list, without an error explanation. I was editing the VERSION file, and vim created a temp file under the directory. This is impossible to debug without reading the code.
> {code:java|title=HddsVolumeUtil#checkVolume()}
>     } else if(hddsFiles.length == 2) {
>       // The files should be Version and SCM directory
>       if (scmDir.exists()) {
>         return true;
>       } else {
>         logger.error("Volume {} is in Inconsistent state, expected scm " +
>             "directory {} does not exist", volumeRoot, scmDir
>             .getAbsolutePath());
>         return false;
>       }
>     } else {
>       // The hdds root dir should always have 2 files. One is Version file
>       // and other is SCM directory.
>       // <---- HERE! The volume is failed without logging anything.
>       return false;
>     }
> {code}
[jira] [Created] (HDDS-4269) Ozone DataNode thinks a volume is failed if an unexpected file is in the HDDS root directory
Wei-Chiu Chuang created HDDS-4269:
----------------------------------

             Summary: Ozone DataNode thinks a volume is failed if an unexpected file is in the HDDS root directory
                 Key: HDDS-4269
                 URL: https://issues.apache.org/jira/browse/HDDS-4269
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
          Components: Ozone Datanode
    Affects Versions: 1.1.0
            Reporter: Wei-Chiu Chuang
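[Editorial illustration] The missing diagnostic in the `checkVolume()` else-branch above could simply name the unexpected entries before failing the volume. The sketch below shows that idea in a self-contained form; `VolumeChecker`, the `lastError` field, and the exact check are illustrative assumptions, not the actual `HddsVolumeUtil` patch.

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the missing error message: when the hdds root holds
// anything besides VERSION and the SCM directory (e.g. a vim swap
// file), say exactly which entries were found instead of silently
// returning false.
public class VolumeChecker {

  static String lastError; // stand-in for logger.error(...)

  /** Return true iff the hdds root contains exactly VERSION + scm dir. */
  public static boolean checkHddsRoot(List<String> entries, String scmDirName) {
    if (entries.size() == 2 && entries.contains("VERSION")
        && entries.contains(scmDirName)) {
      return true;
    }
    // The diagnostic the bug report asks for: name the offending files.
    lastError = "Volume is in inconsistent state: expected only VERSION and "
        + scmDirName + ", found " + entries;
    return false;
  }

  public static void main(String[] args) {
    boolean ok = checkHddsRoot(
        Arrays.asList("VERSION", ".VERSION.swp", "scmUuid"), "scmUuid");
    System.out.println(ok + ": " + lastError);
  }
}
```

With a message like this, the vim-swap-file scenario in the report would be obvious from the log instead of requiring a read of the source.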
[jira] [Commented] (HDDS-3589) Support running HBase on Ozone.
[ https://issues.apache.org/jira/browse/HDDS-3589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17191867#comment-17191867 ]

Wei-Chiu Chuang commented on HDDS-3589:
---------------------------------------

One more question... what version of HBase was it tested with?

> Support running HBase on Ozone.
> -------------------------------
>
>                 Key: HDDS-3589
>                 URL: https://issues.apache.org/jira/browse/HDDS-3589
>             Project: Hadoop Distributed Data Store
>          Issue Type: New Feature
>            Reporter: Sadanand Shenoy
>            Assignee: Sadanand Shenoy
>            Priority: Major
>         Attachments: Hflush_impl.patch
>
> The aim of this Jira is to support running HBase on top of Ozone. To achieve this, the Syncable interface was implemented, which contains the hflush() API that commits an open key into OM.
[jira] [Updated] (HDDS-3929) Prettify OMDeleteRequest error log
[ https://issues.apache.org/jira/browse/HDDS-3929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HDDS-3929:
----------------------------------
    Priority: Trivial  (was: Major)

> Prettify OMDeleteRequest error log
> ----------------------------------
>
>                 Key: HDDS-3929
>                 URL: https://issues.apache.org/jira/browse/HDDS-3929
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>            Reporter: Wei-Chiu Chuang
>            Assignee: Wei-Chiu Chuang
>            Priority: Trivial
>
> {noformat}
> 2020-07-06 21:57:04,266 ERROR org.apache.hadoop.ozone.om.request.key.OMKeyDeleteRequest: Key delete failed. Volume:weichiu-test, Bucket:weichiu-bucket, Keytable_dir/_impala_insert_staging/8f49c4cce657919b_e77ca449. Exception:{}
> KEY_NOT_FOUND org.apache.hadoop.ozone.om.exceptions.OMException: Key not found
> 	at ...
> {noformat}
> The "Key" and "table_dir" should be separated. Also, the exception message doesn't require parameterization.
[jira] [Assigned] (HDDS-3929) Prettify OMDeleteRequest error log
[ https://issues.apache.org/jira/browse/HDDS-3929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang reassigned HDDS-3929:
-------------------------------------
    Assignee: Wei-Chiu Chuang
[jira] [Created] (HDDS-3929) Prettify OMDeleteRequest error log
Wei-Chiu Chuang created HDDS-3929: - Summary: Prettify OMDeleteRequest error log Key: HDDS-3929 URL: https://issues.apache.org/jira/browse/HDDS-3929 Project: Hadoop Distributed Data Store Issue Type: Improvement Reporter: Wei-Chiu Chuang {noformat} 2020-07-06 21:57:04,266 ERROR org.apache.hadoop.ozone.om.request.key.OMKeyDeleteRequest: Key delete failed. Volume:weichiu-test, Bucket:weichiu-bucket, Keytable_dir/_impala_insert_staging/8f49c4cce657919b_e77ca449. Exception:{} KEY_NOT_FOUND org.apache.hadoop.ozone.om.exceptions.OMException: Key not found at ... {noformat} The "Key" and "table_dir" should be separated. Also, the exception message doesn't require parameterization.
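As a sketch of the requested cleanup (the class and method below are illustrative, not the actual OMKeyDeleteRequest code): separate the "Key:" label from the key name, and drop the dangling "Exception:{}" placeholder.

```java
// Hypothetical helper showing the corrected message shape; in the real code
// this would be an SLF4J LOG.error call inside OMKeyDeleteRequest.
public class DeleteErrorMessage {

    // Builds the message with a "Key:" label properly separated from the key
    // name, and with no "{}" placeholder left over for the exception.
    static String format(String volume, String bucket, String key) {
        return String.format(
            "Key delete failed. Volume:%s, Bucket:%s, Key:%s",
            volume, bucket, key);
    }

    public static void main(String[] args) {
        String msg = format("weichiu-test", "weichiu-bucket",
            "table_dir/_impala_insert_staging/8f49c4cce657919b_e77ca449");
        if (!msg.contains("Key:table_dir")) {
            throw new AssertionError(msg);
        }
        if (msg.contains("{}")) {
            throw new AssertionError(msg);
        }
        System.out.println(msg);
    }
}
```

With SLF4J the real call could look like `LOG.error("Key delete failed. Volume:{}, Bucket:{}, Key:{}", volume, bucket, key, ex)`: when the final argument is a Throwable with no matching placeholder, SLF4J logs it as the exception, so no `Exception:{}` parameterization is needed.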
[jira] [Commented] (HDDS-3786) Tone down failover message
[ https://issues.apache.org/jira/browse/HDDS-3786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17133790#comment-17133790 ] Wei-Chiu Chuang commented on HDDS-3786: --- (It's worth saying explicitly that this is an OM HA setup.) > Tone down failover message > -- > > Key: HDDS-3786 > URL: https://issues.apache.org/jira/browse/HDDS-3786 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: Ozone Client >Reporter: Wei-Chiu Chuang >Priority: Minor > > For any client-side operation, the client typically doesn't hit the leader > on the first try, and it emits the following message. This is a minor > issue, but kind of scary for a user new to Ozone. > {noformat} > 20/06/11 23:40:57 INFO Configuration.deprecation: mapred.task.timeout is > deprecated. Instead, use mapreduce.task.timeout > 20/06/11 23:40:58 INFO ha.OMFailoverProxyProvider: RetryProxy: OM:om1 is not > the leader. Suggested leader is OM:om2. > at > org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:185) > at > org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:173) > at > org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:109) > at > org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:74) > at > org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:99) > at > org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:528) > at 
org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:985) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:913) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2882) > {noformat} > Can we tone this down a little? At the very least, don't print the stack > trace. > Tried this on a beta cluster. Let me know if this is already improved in the > master branch.
[jira] [Created] (HDDS-3786) Tone down failover message
Wei-Chiu Chuang created HDDS-3786: - Summary: Tone down failover message Key: HDDS-3786 URL: https://issues.apache.org/jira/browse/HDDS-3786 Project: Hadoop Distributed Data Store Issue Type: Improvement Components: Ozone Client Reporter: Wei-Chiu Chuang For any client-side operation, the client typically doesn't hit the leader on the first try, and it emits the following message. This is a minor issue, but kind of scary for a user new to Ozone. {noformat} 20/06/11 23:40:57 INFO Configuration.deprecation: mapred.task.timeout is deprecated. Instead, use mapreduce.task.timeout 20/06/11 23:40:58 INFO ha.OMFailoverProxyProvider: RetryProxy: OM:om1 is not the leader. Suggested leader is OM:om2. at org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:185) at org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:173) at org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:109) at org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:74) at org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:99) at org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:528) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:985) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:913) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2882) {noformat} Can we tone this down a little? At the very least, don't print the stack trace. Tried this on a beta cluster. Let me know if this is already improved in the master branch.
[jira] [Created] (HDDS-3775) Add documentation for flame graph
Wei-Chiu Chuang created HDDS-3775: - Summary: Add documentation for flame graph Key: HDDS-3775 URL: https://issues.apache.org/jira/browse/HDDS-3775 Project: Hadoop Distributed Data Store Issue Type: Task Reporter: Wei-Chiu Chuang HDDS-1116 added the flame graph, but it looks like there's no documentation on how to enable it. To enable it: add the configuration hdds.profiler.endpoint.enabled = true to ozone-site.xml; download the profiler from https://github.com/jvm-profiling-tools/async-profiler to a local directory, say /tmp; start the DataNode with the Java system property -Dasync.profiler.home=/tmp or the environment variable $ASYNC_PROFILER_HOME; then go to the DataNode servlet, say dn1:9883/prof, to see the graph.
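A sketch of what that documentation could show, using only the values quoted in the description above (the property name and port are taken from the comment and may differ by release):

```xml
<!-- ozone-site.xml: enable the async-profiler servlet added by HDDS-1116 -->
<property>
  <name>hdds.profiler.endpoint.enabled</name>
  <value>true</value>
</property>
```

Then start the DataNode with -Dasync.profiler.home=/tmp (or export ASYNC_PROFILER_HOME=/tmp) and browse to dn1:9883/prof to see the graph.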
[jira] [Commented] (HDDS-3589) Support running HBase on Ozone.
[ https://issues.apache.org/jira/browse/HDDS-3589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17110624#comment-17110624 ] Wei-Chiu Chuang commented on HDDS-3589: --- Yes please make it a PR. Some quick comments: What is the version of HBase tested? How was it tested? Does HBase store HFiles on Ozone, or does it write WAL on Ozone? {code} public void hflush() throws IOException { Thread.dumpStack(); {code} The Thread.dumpStack() looks redundant. You want to remove it in production code. The hasCapability() was added by HDFS-11644, Hadoop 2.9 and above. This would limit Ozone's Hadoop support. > Support running HBase on Ozone. > --- > > Key: HDDS-3589 > URL: https://issues.apache.org/jira/browse/HDDS-3589 > Project: Hadoop Distributed Data Store > Issue Type: New Feature >Reporter: Sadanand Shenoy >Assignee: Sadanand Shenoy >Priority: Major > Attachments: Hflush_impl.patch > > > The aim of this Jira is to support Hbase to run on top of Ozone. In order to > achieve this , the Syncable interface was implemented which contains the > hflush() API which basically commits an open key into OM. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
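For context on the hasCapability() point, here is a minimal sketch of the probe HBase performs before trusting a WAL to a filesystem. The interface is re-declared locally so the example compiles without Hadoop on the classpath; real code uses org.apache.hadoop.fs.StreamCapabilities, which HDFS-11644 introduced (hence the Hadoop 2.9+ requirement noted above).

```java
// Local stand-in for org.apache.hadoop.fs.StreamCapabilities (HDFS-11644),
// re-declared here only so the sketch is self-contained.
interface StreamCapabilities {
    boolean hasCapability(String capability);
}

public class HflushProbe {

    // Stand-in for an Ozone output stream whose hflush() commits the open key
    // into OM, as described in the issue.
    public static class OzoneOut implements StreamCapabilities {
        @Override
        public boolean hasCapability(String capability) {
            // HBase probes "hflush"/"hsync" before using a stream for its WAL.
            return "hflush".equalsIgnoreCase(capability)
                || "hsync".equalsIgnoreCase(capability);
        }
    }

    public static void main(String[] args) {
        StreamCapabilities out = new OzoneOut();
        if (!out.hasCapability("hflush")) {
            throw new AssertionError("stream does not support hflush");
        }
        System.out.println("hflush supported");
    }
}
```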
[jira] [Updated] (HDDS-3543) Remove unused joda-time
[ https://issues.apache.org/jira/browse/HDDS-3543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDDS-3543: -- Description: Joda-time is defined in the pom.xml but it's not used anywhere. It should be easy to remove it without problems. (was: Joda-time is defined in the hadoop-project/pom.xml but it's not used anywhere. It should be easy to remove it without problems.) > Remove unused joda-time > --- > > Key: HDDS-3543 > URL: https://issues.apache.org/jira/browse/HDDS-3543 > Project: Hadoop Distributed Data Store > Issue Type: Task >Reporter: Wei-Chiu Chuang >Priority: Major > > Joda-time is defined in the pom.xml but it's not used anywhere. It should be > easy to remove it without problems. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-3543) Remove unused joda-time
[ https://issues.apache.org/jira/browse/HDDS-3543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDDS-3543: -- Summary: Remove unused joda-time (was: CLONE - Remove unused joda-time) > Remove unused joda-time > --- > > Key: HDDS-3543 > URL: https://issues.apache.org/jira/browse/HDDS-3543 > Project: Hadoop Distributed Data Store > Issue Type: Task >Reporter: Wei-Chiu Chuang >Priority: Major > > Joda-time is defined in the hadoop-project/pom.xml but it's not used > anywhere. It should be easy to remove it without problems. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[jira] [Moved] (HDDS-3543) CLONE - Remove unused joda-time
[ https://issues.apache.org/jira/browse/HDDS-3543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang moved HADOOP-17031 to HDDS-3543: Key: HDDS-3543 (was: HADOOP-17031) Workflow: patch-available, re-open possible (was: no-reopen-closed, patch-avail) Project: Hadoop Distributed Data Store (was: Hadoop Common) > CLONE - Remove unused joda-time > --- > > Key: HDDS-3543 > URL: https://issues.apache.org/jira/browse/HDDS-3543 > Project: Hadoop Distributed Data Store > Issue Type: Task >Reporter: Wei-Chiu Chuang >Priority: Major > > Joda-time is defined in the hadoop-project/pom.xml but it's not used > anywhere. It should be easy to remove it without problems. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[jira] [Created] (HDDS-3399) Update JaegerTracing
Wei-Chiu Chuang created HDDS-3399: - Summary: Update JaegerTracing Key: HDDS-3399 URL: https://issues.apache.org/jira/browse/HDDS-3399 Project: Hadoop Distributed Data Store Issue Type: Task Reporter: Wei-Chiu Chuang We currently use JaegerTracing 0.34.0. The latest is 1.2.0. We are several versions behind and should update. Note this update requires the latest version of OpenTracing and has several breaking changes.
[jira] [Created] (HDDS-3398) Update grpc-netty
Wei-Chiu Chuang created HDDS-3398: - Summary: Update grpc-netty Key: HDDS-3398 URL: https://issues.apache.org/jira/browse/HDDS-3398 Project: Hadoop Distributed Data Store Issue Type: Task Reporter: Wei-Chiu Chuang Ozone currently uses grpc-netty 1.17.1. The latest version is 1.28.1. We are several versions behind and should update.
[jira] [Updated] (HDDS-3384) Update SpringFramework to 5.1.14
[ https://issues.apache.org/jira/browse/HDDS-3384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDDS-3384: -- Summary: Update SpringFramework to 5.1.14 (was: Update SpringFramework) > Update SpringFramework to 5.1.14 > > > Key: HDDS-3384 > URL: https://issues.apache.org/jira/browse/HDDS-3384 > Project: Hadoop Distributed Data Store > Issue Type: Task >Affects Versions: 0.4.1 >Reporter: Wei-Chiu Chuang >Priority: Major > > We are on SpringFramework 5.1.3. We should update to newer versions (5.1.14 > or 5.2.x) > Also, > {code:java|title=hadoop-ozone/recon-codegen/pom.xml} > > org.springframework > spring-jdbc > 5.1.3.RELEASE > > {code} > It should specify the version with ${{{spring.version}}} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[jira] [Created] (HDDS-3384) Update SpringFramework
Wei-Chiu Chuang created HDDS-3384: - Summary: Update SpringFramework Key: HDDS-3384 URL: https://issues.apache.org/jira/browse/HDDS-3384 Project: Hadoop Distributed Data Store Issue Type: Task Affects Versions: 0.4.1 Reporter: Wei-Chiu Chuang We are on SpringFramework 5.1.3. We should update to newer versions (5.1.14 or 5.2.x). Also, {code:java|title=hadoop-ozone/recon-codegen/pom.xml} <dependency> <groupId>org.springframework</groupId> <artifactId>spring-jdbc</artifactId> <version>5.1.3.RELEASE</version> </dependency> {code} It should specify the version with ${spring.version}.
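As a sketch, the quoted dependency would then reference the shared property instead of a hard-coded release; this assumes a spring.version property is defined (or added) in the parent pom:

```xml
<!-- hadoop-ozone/recon-codegen/pom.xml (sketch): use the shared property
     instead of hard-coding 5.1.3.RELEASE -->
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-jdbc</artifactId>
  <version>${spring.version}</version>
</dependency>
```

This way a future Spring upgrade touches a single property rather than every module pom.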
[jira] [Resolved] (HDDS-3383) Update Netty to 4.1.48.Final
[ https://issues.apache.org/jira/browse/HDDS-3383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang resolved HDDS-3383. --- Resolution: Later HDDS-3177 updated Netty to 4.1.47. We can update it again later; no need to do it right now. > Update Netty to 4.1.48.Final > > > Key: HDDS-3383 > URL: https://issues.apache.org/jira/browse/HDDS-3383 > Project: Hadoop Distributed Data Store > Issue Type: Task >Affects Versions: 0.5.0 >Reporter: Wei-Chiu Chuang >Priority: Major > > We are currently on Netty 4.1.45.Final. We should update to the latest > 4.1.48.Final
[jira] [Moved] (HDDS-3383) CLONE - Update Netty to 4.1.48.Final
[ https://issues.apache.org/jira/browse/HDDS-3383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang moved HADOOP-16983 to HDDS-3383: Key: HDDS-3383 (was: HADOOP-16983) Affects Version/s: (was: 3.3.0) 0.5.0 Workflow: patch-available, re-open possible (was: no-reopen-closed, patch-avail) Project: Hadoop Distributed Data Store (was: Hadoop Common) > CLONE - Update Netty to 4.1.48.Final > > > Key: HDDS-3383 > URL: https://issues.apache.org/jira/browse/HDDS-3383 > Project: Hadoop Distributed Data Store > Issue Type: Task >Affects Versions: 0.5.0 >Reporter: Wei-Chiu Chuang >Priority: Major > > We are currently on Netty 4.1.45.Final. We should update to the latest > 4.1.48.Final -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-3383) Update Netty to 4.1.48.Final
[ https://issues.apache.org/jira/browse/HDDS-3383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDDS-3383: -- Summary: Update Netty to 4.1.48.Final (was: CLONE - Update Netty to 4.1.48.Final) > Update Netty to 4.1.48.Final > > > Key: HDDS-3383 > URL: https://issues.apache.org/jira/browse/HDDS-3383 > Project: Hadoop Distributed Data Store > Issue Type: Task >Affects Versions: 0.5.0 >Reporter: Wei-Chiu Chuang >Priority: Major > > We are currently on Netty 4.1.45.Final. We should update to the latest > 4.1.48.Final -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-3377) Remove guava 26.0-android jar
[ https://issues.apache.org/jira/browse/HDDS-3377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDDS-3377: -- Description: I missed this during HDDS-3000 guava-26.0-android is not used but if it's in the classpath (copied explicitly in pom file), it could potentially load this one and cause runtime error. {noformat} $ find . -name guava* ./hadoop-ozone/ozonefs-lib-legacy/target/classes/libs/META-INF/maven/com.google.guava/guava ./hadoop-ozone/dist/target/ozone-0.4.0.7.1.1.0-SNAPSHOT/share/ozone/lib/guava-26.0-android.jar ./hadoop-ozone/dist/target/ozone-0.4.0.7.1.1.0-SNAPSHOT/share/ozone/lib/guava-28.2-jre.jar {noformat} was: I missed this during HDDS-3000 guava-26.0-android is not used but if it's in the classpath (copied explicitly in pom file), it could potentially load this one and cause runtime error. > Remove guava 26.0-android jar > - > > Key: HDDS-3377 > URL: https://issues.apache.org/jira/browse/HDDS-3377 > Project: Hadoop Distributed Data Store > Issue Type: Task >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > I missed this during HDDS-3000 > guava-26.0-android is not used but if it's in the classpath (copied > explicitly in pom file), it could potentially load this one and cause runtime > error. > {noformat} > $ find . -name guava* > ./hadoop-ozone/ozonefs-lib-legacy/target/classes/libs/META-INF/maven/com.google.guava/guava > ./hadoop-ozone/dist/target/ozone-0.4.0.7.1.1.0-SNAPSHOT/share/ozone/lib/guava-26.0-android.jar > ./hadoop-ozone/dist/target/ozone-0.4.0.7.1.1.0-SNAPSHOT/share/ozone/lib/guava-28.2-jre.jar > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-3377) Remove guava 26.0-android jar
[ https://issues.apache.org/jira/browse/HDDS-3377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17080857#comment-17080857 ] Wei-Chiu Chuang commented on HDDS-3377: --- Looking at git history, this guava was added in HDDS-1382. [~elek] thoughts? > Remove guava 26.0-android jar > - > > Key: HDDS-3377 > URL: https://issues.apache.org/jira/browse/HDDS-3377 > Project: Hadoop Distributed Data Store > Issue Type: Task >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > I missed this during HDDS-3000 > guava-26.0-android is not used but if it's in the classpath (copied > explicitly in pom file), it could potentially load this one and cause runtime > error. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDDS-3377) Remove guava 26.0-android jar
[ https://issues.apache.org/jira/browse/HDDS-3377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HDDS-3377: - Assignee: Wei-Chiu Chuang > Remove guava 26.0-android jar > - > > Key: HDDS-3377 > URL: https://issues.apache.org/jira/browse/HDDS-3377 > Project: Hadoop Distributed Data Store > Issue Type: Task >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > > I missed this during HDDS-3000 > guava-26.0-android is not used but if it's in the classpath (copied > explicitly in pom file), it could potentially load this one and cause runtime > error. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[jira] [Created] (HDDS-3377) Remove guava 26.0-android jar
Wei-Chiu Chuang created HDDS-3377: - Summary: Remove guava 26.0-android jar Key: HDDS-3377 URL: https://issues.apache.org/jira/browse/HDDS-3377 Project: Hadoop Distributed Data Store Issue Type: Task Reporter: Wei-Chiu Chuang I missed this during HDDS-3000 guava-26.0-android is not used but if it's in the classpath (copied explicitly in pom file), it could potentially load this one and cause runtime error. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-2686) Use protobuf 3 instead of protobuf 2
[ https://issues.apache.org/jira/browse/HDDS-2686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17071831#comment-17071831 ] Wei-Chiu Chuang commented on HDDS-2686: --- Looks like this'll require Hadoop 3.3. Link HDDS-3292 to the jira. > Use protobuf 3 instead of protobuf 2 > > > Key: HDDS-2686 > URL: https://issues.apache.org/jira/browse/HDDS-2686 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Marton Elek >Assignee: Marton Elek >Priority: Major > Time Spent: 20m > Remaining Estimate: 0h > > Protobuf2 is 4.5 years old, Hadoop trunk already upgraded to use 3.x protobuf. > > Would be great to use recent protobuf version which can also provide > performance benefit and using new features. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[jira] [Created] (HDDS-3292) Support Hadoop 3.3
Wei-Chiu Chuang created HDDS-3292: - Summary: Support Hadoop 3.3 Key: HDDS-3292 URL: https://issues.apache.org/jira/browse/HDDS-3292 Project: Hadoop Distributed Data Store Issue Type: Task Reporter: Wei-Chiu Chuang Hadoop 3.3.0 is coming out soon. We should start testing Ozone on Hadoop 3.3 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-3177) Periodic dependency update (Java)
[ https://issues.apache.org/jira/browse/HDDS-3177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17059134#comment-17059134 ] Wei-Chiu Chuang commented on HDDS-3177: --- Forgot to attach the OWASP dependency check report. > Periodic dependency update (Java) > - > > Key: HDDS-3177 > URL: https://issues.apache.org/jira/browse/HDDS-3177 > Project: Hadoop Distributed Data Store > Issue Type: Task >Reporter: Wei-Chiu Chuang >Assignee: Attila Doroszlai >Priority: Major > Attachments: dependency-check-report.html > > > Must: > jackson-databind 2.9.9 --> 2.10.3 > netty-all 4.0.52 --> 4.1.46 > nimbus-jose-jwt 4.41.1 --> 7.9 (or remove it?) > Nice to have: > cdi-api 1.2 --> 2.0.SP1 (major version change) > hadoop 3.2.0 --> 3.2.1 > === > protobuf 2.5.0 --> ? this is more controversial
[jira] [Updated] (HDDS-3177) Periodic dependency update (Java)
[ https://issues.apache.org/jira/browse/HDDS-3177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDDS-3177: -- Attachment: dependency-check-report.html > Periodic dependency update (Java) > - > > Key: HDDS-3177 > URL: https://issues.apache.org/jira/browse/HDDS-3177 > Project: Hadoop Distributed Data Store > Issue Type: Task >Reporter: Wei-Chiu Chuang >Assignee: Attila Doroszlai >Priority: Major > Attachments: dependency-check-report.html > > > Must: > jackson-databind2.9.9 --> 2.10.3 > netty-all 4.0.52 --> 4.1.46 > nimbus-jose-jwt 4.41.1 --> 7.9 (or remove it?) > Nice to have: > cdi-api 1.2 --> 2.0.SP1 (major version change) > hadoop 3.2.0 --> 3.2.1 > === > protobuf 2.5.0 --> ? this is more controversial -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[jira] [Created] (HDDS-3177) Periodic dependency update (Java)
Wei-Chiu Chuang created HDDS-3177: - Summary: Periodic dependency update (Java) Key: HDDS-3177 URL: https://issues.apache.org/jira/browse/HDDS-3177 Project: Hadoop Distributed Data Store Issue Type: Task Reporter: Wei-Chiu Chuang Must: jackson-databind 2.9.9 --> 2.10.3 netty-all 4.0.52 --> 4.1.46 nimbus-jose-jwt 4.41.1 --> 7.9 (or remove it?) Nice to have: cdi-api 1.2 --> 2.0.SP1 (major version change) hadoop 3.2.0 --> 3.2.1 === protobuf 2.5.0 --> ? this is more controversial
[jira] [Updated] (HDDS-3176) Remove unused dependency version strings
[ https://issues.apache.org/jira/browse/HDDS-3176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDDS-3176: -- Labels: newbie (was: ) > Remove unused dependency version strings > > > Key: HDDS-3176 > URL: https://issues.apache.org/jira/browse/HDDS-3176 > Project: Hadoop Distributed Data Store > Issue Type: Task >Affects Versions: 0.5.0 >Reporter: Wei-Chiu Chuang >Priority: Minor > Labels: newbie > > After the repo was split from hadoop, there are a few unused > dependencies/version strings left in pom.xml. They can be removed. > Example: > {code} > 1.2.6 > 2.0.0-beta-1 > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-3176) Remove unused dependency version strings
[ https://issues.apache.org/jira/browse/HDDS-3176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDDS-3176: -- Description: After the repo was split from hadoop, there are a few unused dependencies/version strings left in pom.xml. They can be removed. Example: {code} 1.2.6 2.0.0-beta-1 {code} There may be more. was: After the repo was split from hadoop, there are a few unused dependencies/version strings left in pom.xml. They can be removed. Example: {code} 1.2.6 2.0.0-beta-1 {code} > Remove unused dependency version strings > > > Key: HDDS-3176 > URL: https://issues.apache.org/jira/browse/HDDS-3176 > Project: Hadoop Distributed Data Store > Issue Type: Task >Affects Versions: 0.5.0 >Reporter: Wei-Chiu Chuang >Priority: Minor > Labels: newbie > > After the repo was split from hadoop, there are a few unused > dependencies/version strings left in pom.xml. They can be removed. > Example: > {code} > 1.2.6 > 2.0.0-beta-1 > {code} > There may be more. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[jira] [Created] (HDDS-3176) Remove unused dependency version strings
Wei-Chiu Chuang created HDDS-3176: - Summary: Remove unused dependency version strings Key: HDDS-3176 URL: https://issues.apache.org/jira/browse/HDDS-3176 Project: Hadoop Distributed Data Store Issue Type: Task Affects Versions: 0.5.0 Reporter: Wei-Chiu Chuang After the repo was split from hadoop, there are a few unused dependencies/version strings left in pom.xml. They can be removed. Example: {code} 1.2.6 2.0.0-beta-1 {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDDS-3000) Update guava version to 28.2-jre
[ https://issues.apache.org/jira/browse/HDDS-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HDDS-3000: - Assignee: Wei-Chiu Chuang > Update guava version to 28.2-jre > > > Key: HDDS-3000 > URL: https://issues.apache.org/jira/browse/HDDS-3000 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[jira] [Created] (HDDS-3000) Update guava version to 28.2-jre
Wei-Chiu Chuang created HDDS-3000: - Summary: Update guava version to 28.2-jre Key: HDDS-3000 URL: https://issues.apache.org/jira/browse/HDDS-3000 Project: Hadoop Distributed Data Store Issue Type: Improvement Reporter: Wei-Chiu Chuang -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-2443) Python client/interface for Ozone
[ https://issues.apache.org/jira/browse/HDDS-2443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995138#comment-16995138 ] Wei-Chiu Chuang commented on HDDS-2443: --- Thanks for the doc. It looks so easy to support Python clients! Could you add the instructions under Ozone Recipes? https://hadoop.apache.org/ozone/docs/0.4.1-alpha/recipe.html > Python client/interface for Ozone > - > > Key: HDDS-2443 > URL: https://issues.apache.org/jira/browse/HDDS-2443 > Project: Hadoop Distributed Data Store > Issue Type: New Feature > Components: Ozone Client >Reporter: Li Cheng >Priority: Major > Attachments: Ozone with pyarrow.html, Ozone with pyarrow.odt, > OzoneS3.py > > > Original ideas: item#25 in > [https://cwiki.apache.org/confluence/display/HADOOP/Ozone+project+ideas+for+new+contributors] > Ozone Client(Python) for Data Science Notebook such as Jupyter. > # Size: Large > # PyArrow: [https://pypi.org/project/pyarrow/] > # Python -> libhdfs HDFS JNI library (HDFS, S3,...) -> Java client API > Impala uses libhdfs > > Path to try: > # s3 interface: Ozone s3 gateway(already supported) + AWS python client > (boto3) > # python native RPC > # pyarrow + libhdfs, which use the Java client under the hood. > # python + C interface of go / rust ozone library. I created POC go / rust > clients earlier which can be improved if the libhdfs interface is not good > enough. [By [~elek]]
[jira] [Commented] (HDDS-2708) Translate docs to Chinese
[ https://issues.apache.org/jira/browse/HDDS-2708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16994020#comment-16994020 ] Wei-Chiu Chuang commented on HDDS-2708: --- Let's do one ticket/PR per doc file. Thanks! > Translate docs to Chinese > - > > Key: HDDS-2708 > URL: https://issues.apache.org/jira/browse/HDDS-2708 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: upgrade >Reporter: Xiang Zhang >Assignee: Xiang Zhang >Priority: Major > > According to > [https://cwiki.apache.org/confluence/display/HADOOP/Ozone+project+ideas+for+new+contributors], > I understand that Chinese docs are needed. I am interested in this, could > somebody give me some advice to get started ? -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-2708) Translate docs to Chinese
[ https://issues.apache.org/jira/browse/HDDS-2708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16993676#comment-16993676 ] Wei-Chiu Chuang commented on HDDS-2708: --- I'm interested in reviewing. Count me in! > Translate docs to Chinese > - > > Key: HDDS-2708 > URL: https://issues.apache.org/jira/browse/HDDS-2708 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: upgrade >Reporter: Xiang Zhang >Assignee: Xiang Zhang >Priority: Major > > According to > [https://cwiki.apache.org/confluence/display/HADOOP/Ozone+project+ideas+for+new+contributors], > I understand that Chinese docs are needed. I am interested in this, could > somebody give me some advice to get started ? -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[jira] [Created] (HDDS-2703) OzoneFSInputStream to support ByteBufferReadable
Wei-Chiu Chuang created HDDS-2703: - Summary: OzoneFSInputStream to support ByteBufferReadable Key: HDDS-2703 URL: https://issues.apache.org/jira/browse/HDDS-2703 Project: Hadoop Distributed Data Store Issue Type: Improvement Reporter: Wei-Chiu Chuang This was found by [~cxorm] via HDDS-2443. ByteBufferReadable could help certain application performance, such as Impala. (See HDFS-14111) Additionally, if we support ByteBufferPositionedReadable, it would benefit HBase. (see HDFS-3246) Finally, we should add StreamCapabilities to let client probe for these abilities. (See HDFS-11644) -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
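A minimal sketch of the contract OzoneFSInputStream would implement. The interface is re-declared locally so the example compiles without Hadoop on the classpath; the real one is org.apache.hadoop.fs.ByteBufferReadable.

```java
import java.nio.ByteBuffer;

// Local stand-in for org.apache.hadoop.fs.ByteBufferReadable, re-declared
// here only so the sketch is self-contained.
interface ByteBufferReadable {
    int read(ByteBuffer buf);
}

public class ByteBufferReadDemo {

    // In-memory stand-in for the stream: data is copied straight into the
    // caller-supplied buffer, skipping the intermediate byte[] copy that the
    // plain read(byte[]) path would require.
    public static class InMemoryStream implements ByteBufferReadable {
        private final byte[] data;
        private int pos;

        public InMemoryStream(byte[] data) {
            this.data = data;
        }

        @Override
        public int read(ByteBuffer buf) {
            if (pos >= data.length) {
                return -1;  // EOF, matching the ByteBufferReadable convention
            }
            int n = Math.min(buf.remaining(), data.length - pos);
            buf.put(data, pos, n);
            pos += n;
            return n;
        }
    }

    public static void main(String[] args) {
        ByteBufferReadable in = new InMemoryStream("ozone".getBytes());
        ByteBuffer buf = ByteBuffer.allocate(8);
        int n = in.read(buf);
        if (n != 5) {
            throw new AssertionError("unexpected read length: " + n);
        }
        System.out.println("read " + n + " bytes");
    }
}
```

Applications such as Impala probe for this capability (via StreamCapabilities, per HDFS-11644) and fall back to the byte[]-based read path when it is absent.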