[jira] [Updated] (HDDS-2875) Add a config in ozone to tune max outstanding requests in raft client

2020-01-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2875:
-
Labels: pull-request-available  (was: )

> Add a config in ozone to tune max outstanding requests in raft client
> -
>
> Key: HDDS-2875
> URL: https://issues.apache.org/jira/browse/HDDS-2875
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>
> Add an Ozone config to tune the value of the Ratis client property 
> "raft.client.async.outstanding-requests.max".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 opened a new pull request #436: HDDS-2875. Add a config in ozone to tune max outstanding requests in …

2020-01-10 Thread GitBox
bharatviswa504 opened a new pull request #436: HDDS-2875. Add a config in ozone 
to tune max outstanding requests in …
URL: https://github.com/apache/hadoop-ozone/pull/436
 
 
   …raft client.
   
   ## What changes were proposed in this pull request?
   
   Add a config to tune the value of the Ratis client property 
"raft.client.async.outstanding-requests.max".
   
   There is an existing property, scm.container.client.max.outstanding.requests, 
which is used to set outStandingAppendsMax, but its description says it 
"Controls the maximum number of outstanding async requests that can be handled 
by the Standalone as well as Ratis client". So I removed it from ScmClientConfig 
and created a new class for this property with the prefix dfs.ratis.client, to 
be consistent with the other Ratis client properties. It is still used for 
outStandingAppendsMax as before; I am not sure this is the correct way, but I 
kept it to match the current code.
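   Sketched roughly (the class name, default value, and lookup logic below are 
hypothetical illustrations, not the actual patch), the new 
dfs.ratis.client-prefixed property could be resolved like this:

```java
import java.util.Map;

// Hypothetical sketch of exposing an Ozone-side override for the Ratis
// client property; the class name and default value are illustrative only.
public class RatisClientConfigSketch {
  static final String PREFIX = "dfs.ratis.client";
  static final String SUFFIX = "async.outstanding-requests.max";
  static final int DEFAULT_MAX_OUTSTANDING = 32; // assumed placeholder default

  // Full key: dfs.ratis.client.async.outstanding-requests.max
  static String key() {
    return PREFIX + "." + SUFFIX;
  }

  // Read the Ozone config map and fall back to the default when unset.
  static int maxOutstandingRequests(Map<String, String> conf) {
    String v = conf.get(key());
    return v == null ? DEFAULT_MAX_OUTSTANDING : Integer.parseInt(v);
  }
}
```

   The resolved value would then be handed to the Ratis client builder the same 
way outStandingAppendsMax is today.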
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2875
   
   
   ## How was this patch tested?
   
   UT and acceptance test run.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] smengcl edited a comment on issue #415: HDDS-2840. Implement ofs://: mkdir

2020-01-10 Thread GitBox
smengcl edited a comment on issue #415: HDDS-2840. Implement ofs://: mkdir
URL: https://github.com/apache/hadoop-ozone/pull/415#issuecomment-573229933
 
 
   Status update:
   
   1. This PR is ready for review.
   2. This diff should help with the review; the base is the commit after class 
copying (except RootedOzoneClientAdapterFactory): 
https://github.com/smengcl/hadoop-ozone/compare/0bee28acadfb2c358c6f008173e7eca6ed7fa23f...smengcl:HDDS-2840
   3. I have also applied HDDS-2188 to the affected OFS classes in order for the 
code to compile.
   4. Expect all operations INSIDE the same bucket to work, just like the 
current o3fs.
   5. Expect `mkdir -p` to create the volume and bucket if they don't exist. For 
a simple shell test with docker-compose, see the commit message of 
ba3a21ecc2e27690e8b823bd72fa8d8976ffdb43.
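   To illustrate the path semantics behind point 5 (a simplified sketch, not the 
actual RootedOzoneFileSystem code): an ofs:// path carries the volume and bucket 
as its first two components, which is what lets `mkdir -p` create them on demand.

```java
import java.net.URI;

// Simplified sketch of splitting an ofs:// path into volume, bucket and key;
// the real OFS adapter handles more cases (trailing slashes, temp dirs, etc.).
public class OfsPathSketch {
  // Returns {volume, bucket, key}; empty strings for missing components.
  static String[] split(String path) {
    String p = URI.create(path).getPath();            // e.g. /vol1/bucket1/dir1
    String[] parts = p.replaceAll("^/+", "").split("/", 3);
    String vol = parts.length > 0 ? parts[0] : "";
    String bucket = parts.length > 1 ? parts[1] : "";
    String key = parts.length > 2 ? parts[2] : "";
    return new String[] { vol, bucket, key };
  }
}
```

   With this layout, a recursive mkdir of `ofs://om1/vol1/bucket1/dir1` can 
first ensure `vol1` and `bucket1` exist before creating the directory key.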





[GitHub] [hadoop-ozone] smengcl commented on issue #415: HDDS-2840. Implement ofs://: mkdir

2020-01-10 Thread GitBox
smengcl commented on issue #415: HDDS-2840. Implement ofs://: mkdir
URL: https://github.com/apache/hadoop-ozone/pull/415#issuecomment-573229933
 
 
   Status update:
   
   1. This PR is ready for review.
   2. This diff should help with the review; the base is the commit after class 
copying (except RootedOzoneClientAdapterFactory): 
https://github.com/smengcl/hadoop-ozone/compare/0bee28acadfb2c358c6f008173e7eca6ed7fa23f...smengcl:HDDS-2840
   3. I have also applied HDDS-2188 to the affected OFS classes in order for the 
code to compile.





[jira] [Updated] (HDDS-2876) Consolidate ObjectID and UpdateID from Info objects into one class

2020-01-10 Thread Hanisha Koneru (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-2876:
-
Description: 
We use ObjectID and BucketID in OMVolumeArgs, OMBucketInfo, OMKeyInfo and 
OMMultipartKeyInfo. We can consolidate by having these Info objects extend a 
"WithObjectID" class which can host the common fields - objectID and updateID.


  was:To check if a transaction is a replay or not, we use the updateID. But 
since this is a newly added field and optional, older Volume, Bucket or KeyInfo 
objects in DB might be missing this field. Hence, before checking if a 
transaction is a replay, we should check that the info object has the updateID.


> Consolidate ObjectID and UpdateID from Info objects into one class
> --
>
> Key: HDDS-2876
> URL: https://issues.apache.org/jira/browse/HDDS-2876
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>
> We use ObjectID and BucketID in OMVolumeArgs, OMBucketInfo, OMKeyInfo and 
> OMMultipartKeyInfo. We can consolidate by having these Info objects extend a 
> "WithObjectID" class which can host the common fields - objectID and updateID.






[jira] [Updated] (HDDS-2876) Consolidate ObjectID and UpdateID from Info objects into one class

2020-01-10 Thread Hanisha Koneru (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-2876:
-
Summary: Consolidate ObjectID and UpdateID from Info objects into one class 
 (was: Handle Info object missing UpdateID field before checking for replay)

> Consolidate ObjectID and UpdateID from Info objects into one class
> --
>
> Key: HDDS-2876
> URL: https://issues.apache.org/jira/browse/HDDS-2876
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>
> To check if a transaction is a replay or not, we use the updateID. But since 
> this is a newly added field and optional, older Volume, Bucket or KeyInfo 
> objects in DB might be missing this field. Hence, before checking if a 
> transaction is a replay, we should check that the info object has the 
> updateID.






[jira] [Created] (HDDS-2876) Handle Info object missing UpdateID field before checking for replay

2020-01-10 Thread Hanisha Koneru (Jira)
Hanisha Koneru created HDDS-2876:


 Summary: Handle Info object missing UpdateID field before checking 
for replay
 Key: HDDS-2876
 URL: https://issues.apache.org/jira/browse/HDDS-2876
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


To check if a transaction is a replay or not, we use the updateID. But since 
this is a newly added field and optional, older Volume, Bucket or KeyInfo 
objects in DB might be missing this field. Hence, before checking if a 
transaction is a replay, we should check that the info object has the updateID.
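
A minimal sketch of the guard being described (the method shape is illustrative; 
the real OmKeyInfo/OMVolumeArgs accessors may differ):

```java
// Illustrative replay check: only compare updateID when the stored info
// object actually has one, since older DB entries may predate the field.
public class ReplayCheckSketch {
  static boolean isReplay(boolean hasUpdateID, long storedUpdateID,
      long transactionLogIndex) {
    // Older Volume/Bucket/KeyInfo objects without an updateID can never be
    // classified as replays.
    if (!hasUpdateID) {
      return false;
    }
    // A transaction is a replay if its log index is not newer than the
    // updateID already recorded for the object.
    return transactionLogIndex <= storedUpdateID;
  }
}
```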






[GitHub] [hadoop-ozone] hanishakoneru commented on a change in pull request #428: HDDS-2868. Add ObjectID and UpdateID to OMKeyInfo.

2020-01-10 Thread GitBox
hanishakoneru commented on a change in pull request #428: HDDS-2868. Add 
ObjectID and UpdateID to OMKeyInfo.
URL: https://github.com/apache/hadoop-ozone/pull/428#discussion_r365400868
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3InitiateMultipartUploadRequest.java
 ##
 @@ -158,9 +158,10 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
   .setOmKeyLocationInfos(Collections.singletonList(
   new OmKeyLocationInfoGroup(0, new ArrayList<>())))
   .setAcls(OzoneAclUtil.fromProtobuf(keyArgs.getAclsList()))
+  .setObjectID(transactionLogIndex)
 
 Review comment:
   Thanks for catching this. Updated the patch.





[GitHub] [hadoop-ozone] hanishakoneru commented on a change in pull request #428: HDDS-2868. Add ObjectID and UpdateID to OMKeyInfo.

2020-01-10 Thread GitBox
hanishakoneru commented on a change in pull request #428: HDDS-2868. Add 
ObjectID and UpdateID to OMKeyInfo.
URL: https://github.com/apache/hadoop-ozone/pull/428#discussion_r365400719
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRequest.java
 ##
 @@ -349,7 +349,8 @@ protected OmKeyInfo createKeyInfo(@Nonnull KeyArgs keyArgs,
   @Nonnull HddsProtos.ReplicationType type, long size,
   @Nullable FileEncryptionInfo encInfo,
   @Nonnull PrefixManager prefixManager,
-  @Nullable OmBucketInfo omBucketInfo) {
+  @Nullable OmBucketInfo omBucketInfo,
+  @Nonnull long transactionLogIndex) {
 
 Review comment:
   done





[GitHub] [hadoop-ozone] hanishakoneru commented on a change in pull request #428: HDDS-2868. Add ObjectID and UpdateID to OMKeyInfo.

2020-01-10 Thread GitBox
hanishakoneru commented on a change in pull request #428: HDDS-2868. Add 
ObjectID and UpdateID to OMKeyInfo.
URL: https://github.com/apache/hadoop-ozone/pull/428#discussion_r365400759
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRequest.java
 ##
 @@ -416,12 +419,14 @@ protected OmKeyInfo prepareKeyInfo(
   @Nonnull KeyArgs keyArgs, @Nonnull String dbKeyName, long size,
   @Nonnull List<OmKeyLocationInfo> locations,
   @Nullable FileEncryptionInfo encInfo,
-  @Nonnull PrefixManager prefixManager, @Nullable OmBucketInfo 
omBucketInfo)
+  @Nonnull PrefixManager prefixManager,
+  @Nullable OmBucketInfo omBucketInfo,
+  @Nonnull long transactionLogIndex)
 
 Review comment:
   done





[GitHub] [hadoop-ozone] hanishakoneru commented on a change in pull request #428: HDDS-2868. Add ObjectID and UpdateID to OMKeyInfo.

2020-01-10 Thread GitBox
hanishakoneru commented on a change in pull request #428: HDDS-2868. Add 
ObjectID and UpdateID to OMKeyInfo.
URL: https://github.com/apache/hadoop-ozone/pull/428#discussion_r365400428
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadAbortRequest.java
 ##
 @@ -120,6 +124,21 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 multipartKeyInfo = omMetadataManager
 .getMultipartInfoTable().get(multipartKey);
 
+// Set updateID to current transactionLogIndex for all parts
+TreeMap<Integer, PartKeyInfo> partKeyInfoMap = multipartKeyInfo
+.getPartKeyInfoMap();
+for (Map.Entry<Integer, PartKeyInfo> partKeyInfoEntry :
+partKeyInfoMap.entrySet()) {
+  PartKeyInfo partKeyInfo = partKeyInfoEntry.getValue();
+  OmKeyInfo currentKeyPartInfo = OmKeyInfo.getFromProtobuf(
 
 Review comment:
   Yes, but this entry is added to DeletedTable. So I thought we should keep a 
record of the transaction which deleted the Key.





[jira] [Updated] (HDDS-2828) Add initial UI of Pipelines in Recon

2020-01-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2828:
-
Labels: pull-request-available  (was: )

> Add initial UI of Pipelines in Recon
> 
>
> Key: HDDS-2828
> URL: https://issues.apache.org/jira/browse/HDDS-2828
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screen Shot 2020-01-02 at 10.40.55 AM.png
>
>
> The Pipelines page in Recon should give Recon admin users a detailed view of 
> the active Ratis data pipelines in the Ozone file system and their current 
> state. A mockup of what I will try to achieve in the initial version of this 
> pipelines page is attached.






[GitHub] [hadoop-ozone] vivekratnavel opened a new pull request #435: HDDS-2828. Add initial UI of Pipelines in Recon

2020-01-10 Thread GitBox
vivekratnavel opened a new pull request #435: HDDS-2828. Add initial UI of 
Pipelines in Recon
URL: https://github.com/apache/hadoop-ozone/pull/435
 
 
   ## What changes were proposed in this pull request?
   
   Addition of initial Pipelines page in Recon. 
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2828
   
   ## How was this patch tested?
   
   This patch was tested by running the react app with mock api server.
   ```
   cd hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web
   yarn install
   yarn run dev
   ```
   A screen-shot of the new Pipelines page is attached below:
   https://user-images.githubusercontent.com/1051198/72179942-09fc4100-340c-11ea-9c4b-afc784431e1e.png
   





[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #434: HDDS-2872. ozone.recon.scm.db.dirs missing from ozone-default.xml.

2020-01-10 Thread GitBox
adoroszlai commented on a change in pull request #434: HDDS-2872. 
ozone.recon.scm.db.dirs missing from ozone-default.xml.
URL: https://github.com/apache/hadoop-ozone/pull/434#discussion_r365380319
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestOzoneConfigurationFields.java
 ##
 @@ -55,7 +55,9 @@ private void addPropertiesNotInXml() {
 HddsConfigKeys.HDDS_SECURITY_PROVIDER,
 OMConfigKeys.OZONE_OM_NODES_KEY,
 OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS_NATIVE,
-OzoneConfigKeys.OZONE_S3_AUTHINFO_MAX_LIFETIME_KEY
+OzoneConfigKeys.OZONE_S3_AUTHINFO_MAX_LIFETIME_KEY,
+"ozone.recon.scm.db.dirs"
 
 Review comment:
   I'm curious: why not use the constant 
`ReconServerConfigKeys.OZONE_RECON_SCM_DB_DIR`?





[GitHub] [hadoop-ozone] avijayanhwx commented on issue #434: HDDS-2872. ozone.recon.scm.db.dirs missing from ozone-default.xml.

2020-01-10 Thread GitBox
avijayanhwx commented on issue #434: HDDS-2872. ozone.recon.scm.db.dirs missing 
from ozone-default.xml.
URL: https://github.com/apache/hadoop-ozone/pull/434#issuecomment-573146945
 
 
   @adoroszlai Please review.





[jira] [Updated] (HDDS-2872) ozone.recon.scm.db.dirs missing from ozone-default.xml

2020-01-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2872:
-
Labels: pull-request-available  (was: )

> ozone.recon.scm.db.dirs missing from ozone-default.xml
> --
>
> Key: HDDS-2872
> URL: https://issues.apache.org/jira/browse/HDDS-2872
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>
> {{ozone.recon.scm.db.dirs}} is reported by {{TestOzoneConfigurationFields}} 
> to be missing from {{ozone-default.xml}}. If it is to be documented, then 
> please add the property to {{ozone-default.xml}}. If it's a developer-only 
> setting, please add as exception in 
> {{TestOzoneConfigurationFields#addPropertiesNotInXml}}.
> (Sorry for reporting this post-commit. {{TestOzoneConfigurationFields}} will 
> be run by CI once we have integration tests enabled again.)






[GitHub] [hadoop-ozone] avijayanhwx opened a new pull request #434: HDDS-2872. ozone.recon.scm.db.dirs missing from ozone-default.xml.

2020-01-10 Thread GitBox
avijayanhwx opened a new pull request #434: HDDS-2872. ozone.recon.scm.db.dirs 
missing from ozone-default.xml.
URL: https://github.com/apache/hadoop-ozone/pull/434
 
 
   ## What changes were proposed in this pull request?
   Skipping this config in the test, since there is an action item to move the 
new Recon configs to the Java-based configuration API (HDDS-2856). Added a TODO 
in the test.
   
   ## What is the link to the Apache JIRA
   https://issues.apache.org/jira/browse/HDDS-2872
   
   ## How was this patch tested?
   Ran the test locally.





[jira] [Updated] (HDDS-2856) Recon should use Java based configuration API.

2020-01-10 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-2856:

Description: 
* Configs in org.apache.hadoop.hdds.recon.ReconConfigKeys need to be moved to 
Java based configurations.
* ReconConfigKeys needs to be added to 
org.apache.hadoop.ozone.TestOzoneConfigurationFields.
* Add properties in org.apache.hadoop.ozone.recon.ReconServerConfigKeys that 
are not already in ozone-default.xml

  was:
* Configs in org.apache.hadoop.hdds.recon.ReconConfigKeys need to be moved to 
Java based configurations.
* ReconConfigKeys needs to be added to 
org.apache.hadoop.ozone.TestOzoneConfigurationFields.


> Recon should use Java based configuration API.
> --
>
> Key: HDDS-2856
> URL: https://issues.apache.org/jira/browse/HDDS-2856
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Prashant Pogde
>Priority: Major
>  Labels: newbie
>
> * Configs in org.apache.hadoop.hdds.recon.ReconConfigKeys need to be moved to 
> Java based configurations.
> * ReconConfigKeys needs to be added to 
> org.apache.hadoop.ozone.TestOzoneConfigurationFields.
> * Add properties in org.apache.hadoop.ozone.recon.ReconServerConfigKeys that 
> are not already in ozone-default.xml






[GitHub] [hadoop-ozone] sodonnel opened a new pull request #433: Hdds 2860 Cluster disk space metrics should reflect decommission and maintenance states

2020-01-10 Thread GitBox
sodonnel opened a new pull request #433: Hdds 2860 Cluster disk space metrics 
should reflect decommission and maintenance states
URL: https://github.com/apache/hadoop-ozone/pull/433
 
 
   # This needs HDDS-2113 committed before this one.
   
   ## What changes were proposed in this pull request?
   
   Now that we have decommission states, we need to adjust the cluster capacity, 
space used, and available metrics which are exposed via JMX.
   
   For a decommissioning node, the space used on the node effectively needs to 
be transferred to other nodes via container replication before decommission can 
complete, but this is difficult to track from a space usage perspective. When a 
node completes decommission, we can assume it provides no capacity to the 
cluster and uses none. Therefore, for decommissioning and decommissioned nodes, 
the simplest calculation is to exclude the node completely, in a similar way to 
a dead node.
   
   For maintenance nodes, things are even less clear. A maintenance node is 
read-only, so it cannot provide capacity to the cluster, but it is expected to 
return to service, so excluding it completely probably does not make sense. 
However, perhaps the simplest solution is to do the following:
   
   1. For any node not IN_SERVICE, do not include its usage or space in the 
cluster capacity totals.
   2. Introduce some new metrics to account for the maintenance and perhaps 
decommission capacity, so it is not lost eg:
   
   ```
   # Existing metrics
   "DiskCapacity" : 62725623808,
   "DiskUsed" : 4096,
   "DiskRemaining" : 50459619328,
   
   # Suggested additional new ones, with the above only considering IN_SERVICE 
nodes:
   "MaintenanceDiskCapacity": 0
   "MaintenanceDiskUsed": 0
   "MaintenanceDiskRemaining": 0
   "DecommissionedDiskCapacity": 0
   "DecommissionedDiskUsed": 0
   "DecommissionedDiskRemaining": 0
   ...
   ```
   That way, the cluster totals reflect only what is currently "online", but we 
have the other metrics to track what has been removed. The key advantage of 
this approach is that it is easy to understand.
   
   There could also be an argument that the new decommissioned-disk metrics are 
not needed, as that capacity is technically lost from the cluster forever.
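   
   A sketch of the proposed partitioning (the enum and field names below are 
hypothetical, standing in for the real datanode operational-state model):

```java
import java.util.List;

// Hypothetical aggregation: IN_SERVICE nodes count toward the main cluster
// totals, while other states roll into separate metrics so their capacity
// is not silently lost from the totals.
public class NodeSpaceMetricsSketch {
  enum OpState { IN_SERVICE, DECOMMISSIONING, DECOMMISSIONED, IN_MAINTENANCE }

  static class Node {
    final OpState state;
    final long capacity, used, remaining;
    Node(OpState state, long capacity, long used, long remaining) {
      this.state = state; this.capacity = capacity;
      this.used = used; this.remaining = remaining;
    }
  }

  // Sum {capacity, used, remaining} over nodes in the given operational state.
  static long[] totals(List<Node> nodes, OpState wanted) {
    long cap = 0, used = 0, rem = 0;
    for (Node n : nodes) {
      if (n.state == wanted) {
        cap += n.capacity; used += n.used; rem += n.remaining;
      }
    }
    return new long[] { cap, used, rem };
  }
}
```

   `totals(nodes, IN_SERVICE)` would back DiskCapacity/DiskUsed/DiskRemaining, 
while the other states feed the suggested Maintenance*/Decommissioned* metrics.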
   
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2860
   
   ## How was this patch tested?
   
   An additional unit test was added, and the new metrics were manually 
inspected.
   





[jira] [Assigned] (HDDS-2875) Add a config in ozone to tune max outstanding requests in raft client

2020-01-10 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HDDS-2875:
---

Assignee: Bharat Viswanadham  (was: Shashikant Banerjee)

> Add a config in ozone to tune max outstanding requests in raft client
> -
>
> Key: HDDS-2875
> URL: https://issues.apache.org/jira/browse/HDDS-2875
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.5.0
>
>
> Add an Ozone config to tune the value of the Ratis client property 
> "raft.client.async.outstanding-requests.max".






[GitHub] [hadoop-ozone] cxorm commented on issue #379: HDDS-2750. OzoneFSInputStream to support StreamCapabilities

2020-01-10 Thread GitBox
cxorm commented on issue #379: HDDS-2750. OzoneFSInputStream to support 
StreamCapabilities
URL: https://github.com/apache/hadoop-ozone/pull/379#issuecomment-573015852
 
 
   Sorry for missing here for a while.
   Thanks @adoroszlai for the suggestion of implementation.
   I’m going to fix this issue soon.





[GitHub] [hadoop-ozone] adoroszlai opened a new pull request #416: HDDS-2833. Enable integration tests for github actions

2020-01-10 Thread GitBox
adoroszlai opened a new pull request #416: HDDS-2833. Enable integration tests 
for github actions
URL: https://github.com/apache/hadoop-ozone/pull/416
 
 
   ## What changes were proposed in this pull request?
   
   Enable a subset of integration tests:
   
   1. Ignore flaky integration tests
   2. Ignore some tests that were probably broken while integration tests were 
disabled
   3. Fix tests which are flaky due to strict `isAfter` check (changed to 
`!isBefore`)
   4. Delete `TestRatisPipelineProvider` integration test, which was fixed and 
moved to unit tests in HDDS-2365, but HDDS-2034 brought it back
   5. Add missing OM Ratis config properties (without description for now) to 
fix `TestOzoneConfigurationFields`
   6. Introduce Maven profiles in `integration-test` project to run different 
sets of tests (they each take 15-25 minutes)
   7. Run each profile in a separate, parallel check
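   
   The `isAfter` vs `!isBefore` point comes down to boundary behavior: 
`Instant.isAfter` is strict, so two events landing on the same instant fail the 
check, while `!isBefore` accepts them. A minimal illustration:

```java
import java.time.Instant;

// Instant.isAfter is a strict comparison: t.isAfter(t) is false, which makes
// timing assertions flaky when two events share a timestamp. Negating
// isBefore gives the inclusive "at or after" check instead.
public class TimeCheckSketch {
  static boolean strictlyAfter(Instant a, Instant b) {
    return a.isAfter(b);
  }

  static boolean atOrAfter(Instant a, Instant b) {
    return !a.isBefore(b);
  }
}
```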
   
   https://issues.apache.org/jira/browse/HDDS-2833
   
   ## How was this patch tested?
   
   1. Flaky tests to be ignored were found on an [exploratory 
branch](https://github.com/apache/hadoop-ozone/commits/integration-test-cleanup) 
started by @elek. There were 30 successful runs after adding the latest ignore.
   2. Successful CI run in fork: 
https://github.com/adoroszlai/hadoop-ozone/runs/378082395





[jira] [Updated] (HDDS-2833) Enable integrations tests for github actions

2020-01-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2833:
-
Labels: pull-request-available  (was: )

> Enable integrations tests for github actions
> 
>
> Key: HDDS-2833
> URL: https://issues.apache.org/jira/browse/HDDS-2833
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Marton Elek
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When we switched to GitHub Actions, the integration tests were disabled due 
> to their flakiness.
> We should disable all the flaky tests and enable the remaining integration 
> tests...






[GitHub] [hadoop-ozone] adoroszlai commented on issue #427: HDDS-2866. Intermittent failure in TestOzoneManagerRocksDBLogging

2020-01-10 Thread GitBox
adoroszlai commented on issue #427: HDDS-2866. Intermittent failure in 
TestOzoneManagerRocksDBLogging
URL: https://github.com/apache/hadoop-ozone/pull/427#issuecomment-572949375
 
 
   @avijayanhwx @swagle please review






[jira] [Updated] (HDDS-2443) Python client/interface for Ozone

2020-01-10 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien updated HDDS-2443:
---
Description: 
This Jira will be used to track development of a Python client/interface for
Ozone.


Original ideas: item #25 in 
[https://cwiki.apache.org/confluence/display/HADOOP/Ozone+project+ideas+for+new+contributors]

Ozone Client (Python) for data science notebooks such as Jupyter.
 # Size: Large
 # PyArrow: [https://pypi.org/project/pyarrow/]
 # Python -> libhdfs HDFS JNI library (HDFS, S3, ...) -> Java client API; Impala uses libhdfs

Paths to try:
 # S3 interface: Ozone S3 gateway (already supported) + AWS Python client (boto3)
 # Python-native RPC
 # pyarrow + libhdfs, which uses the Java client under the hood.
 # Python + C interface of a Go/Rust Ozone library. I created POC Go/Rust clients earlier, which can be improved if the libhdfs interface is not good enough. [By [~elek]]

  was:
Original ideas: item#25 in 
[https://cwiki.apache.org/confluence/display/HADOOP/Ozone+project+ideas+for+new+contributors]

Ozone Client(Python) for Data Science Notebook such as Jupyter.
 # Size: Large
 # PyArrow: [https://pypi.org/project/pyarrow/]
 # Python -> libhdfs HDFS JNI library (HDFS, S3,...) -> Java client API Impala 
uses  libhdfs
 
Path to try:
# s3 interface: Ozone s3 gateway(already supported) + AWS python client (boto3)
# python native RPC
# pyarrow + libhdfs, which use the Java client under the hood.
# python + C interface of go / rust ozone library. I created POC go / rust 
clients earlier which can be improved if the libhdfs interface is not good 
enough. [By [~elek]]


> Python client/interface for Ozone
> -
>
> Key: HDDS-2443
> URL: https://issues.apache.org/jira/browse/HDDS-2443
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Ozone Client
>Reporter: Li Cheng
>Priority: Major
> Attachments: Ozone with pyarrow.html, Ozone with pyarrow.odt, 
> OzoneS3.py
>
>
> This Jira will be used to track development of a Python client/interface for
> Ozone.
> Original ideas: item #25 in 
> [https://cwiki.apache.org/confluence/display/HADOOP/Ozone+project+ideas+for+new+contributors]
> Ozone Client (Python) for data science notebooks such as Jupyter.
>  # Size: Large
>  # PyArrow: [https://pypi.org/project/pyarrow/]
>  # Python -> libhdfs HDFS JNI library (HDFS, S3, ...) -> Java client API;
> Impala uses libhdfs
> Paths to try:
>  # S3 interface: Ozone S3 gateway (already supported) + AWS Python client
> (boto3)
>  # Python-native RPC
>  # pyarrow + libhdfs, which uses the Java client under the hood.
>  # Python + C interface of a Go/Rust Ozone library. I created POC Go/Rust
> clients earlier, which can be improved if the libhdfs interface is not good
> enough. [By [~elek]]




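Of the paths listed in HDDS-2443 above, the S3-gateway route is the most immediately usable: point a standard AWS SDK client at Ozone's s3g endpoint. A minimal sketch follows; the helper function is hypothetical, and the default port 9878 (the usual Ozone S3 gateway HTTP port) plus all credentials/hosts are illustrative assumptions, not values from this thread:

```python
def ozone_s3_endpoint(host: str, port: int = 9878, secure: bool = False) -> str:
    """Build the endpoint URL to hand to an S3 client.

    Hypothetical helper; 9878 is assumed as the s3g HTTP port.
    """
    scheme = "https" if secure else "http"
    return f"{scheme}://{host}:{port}"

print(ozone_s3_endpoint("localhost"))  # → http://localhost:9878

# With boto3 installed, usage would look roughly like:
#   import boto3
#   s3 = boto3.client("s3",
#                     endpoint_url=ozone_s3_endpoint("localhost"),
#                     aws_access_key_id="...",
#                     aws_secret_access_key="...")
#   s3.list_buckets()
```

Because the gateway speaks the S3 wire protocol, no Ozone-specific client code is needed for this path; the trade-off is that only the S3 subset of Ozone's API is reachable.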



[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #429: HDDS-2727. start/OnPrem.md translated to Chinese

2020-01-10 Thread GitBox
cxorm commented on a change in pull request #429: HDDS-2727. start/OnPrem.md 
translated to Chinese
URL: https://github.com/apache/hadoop-ozone/pull/429#discussion_r365114875
 
 

 ##
 File path: hadoop-hdds/docs/content/start/OnPrem.zh.md
 ##
 @@ -20,44 +20,29 @@ weight: 20
   limitations under the License.
 -->
 
-If you are feeling adventurous, you can setup ozone in a real cluster.
-Setting up a real cluster requires us to understand the components of Ozone.
-Ozone is designed to work concurrently with HDFS. However, Ozone is also
-capable of running independently. The components of ozone are the same in both 
approaches.
+如果你喜欢折腾,你可以在真实的集群上安装 ozone。搭建一个 Ozone 集群需要了解它的各个组件,Ozone 既能够以并存的方式部署到现有的 
HDFS,也能够独立运行,但在这两种情况下 ozone 的组件都是相同的。
 
 Review comment:
   Suggestion:
   如果你喜欢折腾 -> 如果你想要有点挑战性
   









[jira] [Updated] (HDDS-2875) Add a config in ozone to tune max outstanding requests in raft client

2020-01-10 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-2875:
--
Description: Add a config to tune the value of the 
"raft.client.async.outstanding-requests.max" property in the Ratis raft client.

> Add a config in ozone to tune max outstanding requests in raft client
> -
>
> Key: HDDS-2875
> URL: https://issues.apache.org/jira/browse/HDDS-2875
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.5.0
>
>
> Add a config to tune the value of the 
> "raft.client.async.outstanding-requests.max" property in the Ratis raft client.




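Operationally, the knob requested in HDDS-2875 amounts to letting a site-configuration entry override the Ratis client's default for that property. A minimal sketch of the resolution logic (the property key is from the thread; the helper, the Hadoop-style XML layout, and the fallback default of 32 are illustrative assumptions):

```python
import xml.etree.ElementTree as ET

RATIS_KEY = "raft.client.async.outstanding-requests.max"

def resolve_outstanding_max(ozone_site_xml: str, default: int = 32) -> int:
    """Return the tuned value if the site file sets the key, else the default.

    Hypothetical helper for illustration only; not Ozone code.
    """
    root = ET.fromstring(ozone_site_xml)
    for prop in root.findall("property"):
        if prop.findtext("name") == RATIS_KEY:
            return int(prop.findtext("value"))
    return default

site = """<configuration>
  <property>
    <name>raft.client.async.outstanding-requests.max</name>
    <value>64</value>
  </property>
</configuration>"""

print(resolve_outstanding_max(site))                 # → 64
print(resolve_outstanding_max("<configuration/>"))   # → 32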



[jira] [Created] (HDDS-2875) Add a config in ozone to tune max outstanding requests in raft client

2020-01-10 Thread Shashikant Banerjee (Jira)
Shashikant Banerjee created HDDS-2875:
-

 Summary: Add a config in ozone to tune max outstanding requests in 
raft client
 Key: HDDS-2875
 URL: https://issues.apache.org/jira/browse/HDDS-2875
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Affects Versions: 0.5.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.5.0










[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #429: HDDS-2727. start/OnPrem.md translated to Chinese

2020-01-10 Thread GitBox
cxorm commented on a change in pull request #429: HDDS-2727. start/OnPrem.md 
translated to Chinese
URL: https://github.com/apache/hadoop-ozone/pull/429#discussion_r365113176
 
 

 ##
 File path: hadoop-hdds/docs/content/start/OnPrem.zh.md
 ##
 @@ -105,67 +85,67 @@ Here is an  example,
 {{< /highlight >}}
 
 
-## Ozone Settings Summary
+## Ozone 参数汇总
 
 | Setting| Value| Comment |
 ||--|--|
-| ozone.metadata.dirs| file path| The metadata will be stored here.|
-| ozone.scm.names| SCM server name  | Hostname:port or IP:port address of SCM.  |
-| ozone.scm.block.client.address | SCM server name and port | Used by services like OM |
-| ozone.scm.client.address   | SCM server name and port | Used by client-side  |
-| ozone.scm.datanode.address | SCM server name and port | Used by datanode to talk to SCM  |
-| ozone.om.address   | OM server name   | Used by Ozone handler and Ozone file system. |
+| ozone.metadata.dirs| 文件路径| 元数据存储位置 |
+| ozone.scm.names| SCM 服务地址| SCM的主机名:端口,或者IP:端口 |
+| ozone.scm.block.client.address | SCM 服务地址和端口 | OM 等服务使用 |
+| ozone.scm.client.address   | SCM 服务地址和端口 | 客户端使用 |
+| ozone.scm.datanode.address | SCM 服务地址和端口 | 数据节点使用 |
+| ozone.om.address   | OM 服务地址   | Ozone handler 和 Ozone 文件系统使用 |
 
 
-## Startup the cluster
+## 启动集群
 
-Before we boot up the Ozone cluster, we need to initialize both SCM and Ozone 
Manager.
+在启动 Ozone 集群之前,需要初始化 SCM 和 OM。
 
 {{< highlight bash >}}
 ozone scm --init
 {{< /highlight >}}
-This allows SCM to create the cluster Identity and initialize its state.
-The ```init``` command is similar to Namenode format. Init command is executed 
only once, that allows SCM to create all the required on-disk structures to 
work correctly.
+
+这条命令会使 SCM 创建集群 ID 并初始化它的状态。
 
 Review comment:
   Suggestion: 
   创建集群 ID -> 创建集群 






[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #429: HDDS-2727. start/OnPrem.md translated to Chinese

2020-01-10 Thread GitBox
cxorm commented on a change in pull request #429: HDDS-2727. start/OnPrem.md 
translated to Chinese
URL: https://github.com/apache/hadoop-ozone/pull/429#discussion_r365112726
 
 

 ##
 File path: hadoop-hdds/docs/content/start/OnPrem.zh.md
 ##
 @@ -105,67 +85,67 @@ Here is an  example,
 {{< /highlight >}}
 
 
-## Ozone Settings Summary
+## Ozone 参数汇总
 
 | Setting| Value| Comment |
 ||--|--|
-| ozone.metadata.dirs| file path| The metadata will be stored here.|
-| ozone.scm.names| SCM server name  | Hostname:port or IP:port address of SCM.  |
-| ozone.scm.block.client.address | SCM server name and port | Used by services like OM |
-| ozone.scm.client.address   | SCM server name and port | Used by client-side  |
-| ozone.scm.datanode.address | SCM server name and port | Used by datanode to talk to SCM  |
-| ozone.om.address   | OM server name   | Used by Ozone handler and Ozone file system. |
+| ozone.metadata.dirs| 文件路径| 元数据存储位置 |
+| ozone.scm.names| SCM 服务地址| SCM的主机名:端口,或者IP:端口 |
+| ozone.scm.block.client.address | SCM 服务地址和端口 | OM 等服务使用 |
+| ozone.scm.client.address   | SCM 服务地址和端口 | 客户端使用 |
+| ozone.scm.datanode.address | SCM 服务地址和端口 | 数据节点使用 |
+| ozone.om.address   | OM 服务地址   | Ozone handler 和 Ozone 文件系统使用 |
 
 
-## Startup the cluster
+## 启动集群
 
-Before we boot up the Ozone cluster, we need to initialize both SCM and Ozone 
Manager.
+在启动 Ozone 集群之前,需要初始化 SCM 和 OM。
 
 {{< highlight bash >}}
 ozone scm --init
 {{< /highlight >}}
-This allows SCM to create the cluster Identity and initialize its state.
-The ```init``` command is similar to Namenode format. Init command is executed 
only once, that allows SCM to create all the required on-disk structures to 
work correctly.
+
+这条命令会使 SCM 创建集群 ID 并初始化它的状态。
+```init``` 命令和 Namenode 的 ```format``` 命令类似,只需要执行一次,SCM 就可以在磁盘上准备好正常运行所需的数据结构。
+
 {{< highlight bash >}}
 ozone --daemon start scm
 {{< /highlight >}}
 
-Once we know SCM is up and running, we can create an Object Store for our use. 
This is done by running the following command.
+SCM 启动之后,我们就可以初始化存储空间,命令如下:
 
 Review comment:
   Suggestion:
   我们就可以初始化存储空间 -> 我们就可以开始创建我们的物件储存集群
   






[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #429: HDDS-2727. start/OnPrem.md translated to Chinese

2020-01-10 Thread GitBox
cxorm commented on a change in pull request #429: HDDS-2727. start/OnPrem.md 
translated to Chinese
URL: https://github.com/apache/hadoop-ozone/pull/429#discussion_r365110476
 
 

 ##
 File path: hadoop-hdds/docs/content/start/OnPrem.zh.md
 ##
 @@ -66,37 +51,32 @@ needs to be copied to ```ozone directory/etc/hadoop```.

 {{< /highlight >}}
 
-*  **ozone.scm.names**  Storage container manager(SCM) is a distributed block
-  service which is used by ozone. This property allows data nodes to discover
-   SCM's address. Data nodes send heartbeat to SCM.
-   Until HA  feature is  complete, we configure ozone.scm.names to be a
-   single machine.
-
-  Here is an example,
-
+*  **ozone.scm.names**  Storage container manager(SCM) 是 ozone 
使用的分布式块服务,数据节点通过这个参数来连接 SCM 并向 SCM 发送心跳。在 HA 特性完成之前,我们给 ozone.scm.names 
配置一台机器的地址即可。
 
 Review comment:
   Thanks @iamabug for the work.
   What do you think about using Datanode instead of 数据节点 here?






[jira] [Commented] (HDDS-1564) Ozone multi-raft support

2020-01-10 Thread Li Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012545#comment-17012545
 ] 

Li Cheng commented on HDDS-1564:


All tasks for the multi-raft feature are finished. The patch and a test brief 
are uploaded as attachments.

> Ozone multi-raft support
> 
>
> Key: HDDS-1564
> URL: https://issues.apache.org/jira/browse/HDDS-1564
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Ozone Datanode, SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
> Attachments: Ozone Multi-Raft Support.pdf, multi-raft.patch, 
> multiraft_performance_brief.pdf
>
>
> Apache Ratis supports multi-raft by allowing the same node to be a part of 
> multiple raft groups. The proposal is to allow datanodes to be a part of 
> multiple raft groups. The attached design doc explains the reasons for doing 
> this as well as a few initial design decisions.
> Some of the work in this feature is also related to HDDS-700, which implements
> rack-aware container placement for closed containers.




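The core idea of HDDS-1564 above — the same datanode participating in several raft groups — can be shown with a toy membership model (this is only an illustration of the concept, not the Ratis or Ozone API; all names are made up):

```python
from collections import defaultdict

def build_membership(groups: dict) -> dict:
    """Invert group -> members into datanode -> set of group ids.

    Toy model: under multi-raft, one datanode may belong to many groups.
    """
    membership = defaultdict(set)
    for group_id, members in groups.items():
        for node in members:
            membership[node].add(group_id)
    return dict(membership)

# Two overlapping pipelines (raft groups) sharing datanodes dn2 and dn3.
groups = {
    "pipeline-A": ["dn1", "dn2", "dn3"],
    "pipeline-B": ["dn2", "dn3", "dn4"],
}
m = build_membership(groups)
print(sorted(m["dn2"]))  # → ['pipeline-A', 'pipeline-B']
```

Allowing this overlap is what lets SCM form more pipelines from the same set of datanodes, which is the motivation the attached design doc and performance brief explore.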



[jira] [Updated] (HDDS-1564) Ozone multi-raft support

2020-01-10 Thread Li Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Cheng updated HDDS-1564:
---
Attachment: multi-raft.patch

> Ozone multi-raft support
> 
>
> Key: HDDS-1564
> URL: https://issues.apache.org/jira/browse/HDDS-1564
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Ozone Datanode, SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
> Attachments: Ozone Multi-Raft Support.pdf, multi-raft.patch, 
> multiraft_performance_brief.pdf
>
>
> Apache Ratis supports multi-raft by allowing the same node to be a part of 
> multiple raft groups. The proposal is to allow datanodes to be a part of 
> multiple raft groups. The attached design doc explains the reasons for doing 
> this as well as a few initial design decisions.
> Some of the work in this feature is also related to HDDS-700, which implements
> rack-aware container placement for closed containers.







[jira] [Updated] (HDDS-1564) Ozone multi-raft support

2020-01-10 Thread Li Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Cheng updated HDDS-1564:
---
Attachment: multiraft_performance_brief.pdf

> Ozone multi-raft support
> 
>
> Key: HDDS-1564
> URL: https://issues.apache.org/jira/browse/HDDS-1564
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Ozone Datanode, SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
> Attachments: Ozone Multi-Raft Support.pdf, multi-raft.patch, 
> multiraft_performance_brief.pdf
>
>
> Apache Ratis supports multi-raft by allowing the same node to be a part of 
> multiple raft groups. The proposal is to allow datanodes to be a part of 
> multiple raft groups. The attached design doc explains the reasons for doing 
> this as well as a few initial design decisions.
> Some of the work in this feature is also related to HDDS-700, which implements
> rack-aware container placement for closed containers.



