[GitHub] [hadoop-ozone] avijayanhwx commented on a change in pull request #546: HDDS-2847. Recon should track containers that are missing along with …

2020-02-11 Thread GitBox
avijayanhwx commented on a change in pull request #546: HDDS-2847. Recon should 
track containers that are missing along with …
URL: https://github.com/apache/hadoop-ozone/pull/546#discussion_r378046515
 
 

 ##
 File path: hadoop-ozone/recon/pom.xml
 ##
 @@ -88,83 +88,83 @@
   2. to install dependencies with yarn install
   3. building the frontend application
   -->
-  <plugin>
-    <groupId>com.github.eirslett</groupId>
-    <artifactId>frontend-maven-plugin</artifactId>
-    <version>1.6</version>
-    <configuration>
-      <installDirectory>target</installDirectory>
-      <workingDirectory>${basedir}/src/main/resources/webapps/recon/ozone-recon-web</workingDirectory>
-    </configuration>
-    <executions>
-      <execution>
-        <id>Install node and yarn locally to the project</id>
-        <goals>
-          <goal>install-node-and-yarn</goal>
-        </goals>
-        <configuration>
-          <nodeVersion>v12.1.0</nodeVersion>
-          <yarnVersion>v1.9.2</yarnVersion>
-        </configuration>
-      </execution>
-      <execution>
-        <id>yarn install</id>
-        <goals>
-          <goal>yarn</goal>
-        </goals>
-        <configuration>
-          <arguments>install</arguments>
-        </configuration>
-      </execution>
-      <execution>
-        <id>Build frontend</id>
-        <goals>
-          <goal>yarn</goal>
-        </goals>
-        <configuration>
-          <arguments>run build</arguments>
-        </configuration>
-      </execution>
-    </executions>
-  </plugin>
-  <plugin>
-    <groupId>org.apache.maven.plugins</groupId>
-    <artifactId>maven-resources-plugin</artifactId>
-    <executions>
-      <execution>
-        <id>Copy frontend build to target</id>
-        <phase>process-resources</phase>
-        <goals>
-          <goal>copy-resources</goal>
-        </goals>
-        <configuration>
-          <outputDirectory>${project.build.outputDirectory}/webapps/recon</outputDirectory>
-          <resources>
-            <resource>
-              <directory>${basedir}/src/main/resources/webapps/recon/ozone-recon-web/build</directory>
-              <filtering>true</filtering>
-            </resource>
-          </resources>
-        </configuration>
-      </execution>
-      <execution>
-        <id>Copy frontend static files to target</id>
-        <phase>process-resources</phase>
-        <goals>
-          <goal>copy-resources</goal>
-        </goals>
-        <configuration>
-          <outputDirectory>${project.build.outputDirectory}/webapps/static</outputDirectory>
-          <resources>
-            <resource>
-              <directory>${basedir}/src/main/resources/webapps/recon/ozone-recon-web/build/static</directory>
-              <filtering>true</filtering>
-            </resource>
-          </resources>
-        </configuration>
-      </execution>
-    </executions>
-  </plugin>
+
 
 Review comment:
  Yes, I had commented out the UI code for my local build. Removed the change 
now. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org




[jira] [Updated] (HDDS-2947) create file : remove key table iterator

2020-02-11 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-2947:

Fix Version/s: 0.5.0

> create file : remove key table iterator
> ---
>
> Key: HDDS-2947
> URL: https://issues.apache.org/jira/browse/HDDS-2947
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
> Fix For: 0.5.0
>
>
> Remove key table iterator in file create request handler.
> Add test to verify that iterator is not required.
> 1. create file /a/b/c/d/e/f/1.txt
> 2. create file /a/b/c/d --> create should fail, even without the iterator
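The two-step scenario above can be illustrated with a toy key table. This is a hypothetical sketch (not OM code): once intermediate directories have their own key-table entries, the conflicting create at step 2 can be rejected with a point lookup rather than by iterating the key table.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of an OM key table: keys are normalized paths; directory
// entries end with "/". Illustrative only -- names and layout are
// assumptions, not the real Ozone Manager implementation.
public class KeyTableSketch {
    private final Map<String, String> keyTable = new HashMap<>();

    /** Creates a file, adding entries for its intermediate directories. */
    public boolean createFile(String path) {
        // Point lookup: reject if this path already exists as a directory.
        // No iteration over the key table is needed.
        if (keyTable.containsKey(path + "/")) {
            return false;
        }
        String[] parts = path.split("/");
        StringBuilder prefix = new StringBuilder();
        for (int i = 0; i < parts.length - 1; i++) {
            if (parts[i].isEmpty()) {
                continue;
            }
            prefix.append("/").append(parts[i]);
            keyTable.put(prefix + "/", "DIR");
        }
        keyTable.put(path, "FILE");
        return true;
    }

    public static void main(String[] args) {
        KeyTableSketch om = new KeyTableSketch();
        System.out.println(om.createFile("/a/b/c/d/e/f/1.txt")); // true
        System.out.println(om.createFile("/a/b/c/d"));           // false
    }
}
```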



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2941) file create : create key table entries for intermediate directories in the path

2020-02-11 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-2941:

Target Version/s: 0.5.0

> file create : create key table entries for intermediate directories in the 
> path
> ---
>
> Key: HDDS-2941
> URL: https://issues.apache.org/jira/browse/HDDS-2941
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>
> Similar to, and a follow-up of, HDDS-2940.
> This change covers the file create request handler in the OM.






[GitHub] [hadoop-ozone] jojochuang commented on issue #545: HDDS-3000. Update guava version to 28.2-jre.

2020-02-11 Thread GitBox
jojochuang commented on issue #545: HDDS-3000. Update guava version to 28.2-jre.
URL: https://github.com/apache/hadoop-ozone/pull/545#issuecomment-585023123
 
 
   Looks like the newer Guava versions updated the annotation for the API 
   `void onSuccess(@Nullable V result);`
   
   (it didn't have the `@Nullable` annotation in Guava 11.) Updated the PR to 
add the null check.
   
   See also: https://github.com/spotbugs/spotbugs/issues/633
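A minimal sketch of the kind of null check this implies. `FutureCallback` below is a self-contained stand-in that mirrors the shape of Guava's interface (an assumption for illustration, not the PR's actual code): once the `onSuccess` parameter is `@Nullable`, callers must tolerate a null result before dereferencing it.

```java
// Stand-in for com.google.common.util.concurrent.FutureCallback: newer
// Guava annotates onSuccess's parameter @Nullable, so implementations
// must handle null. Hypothetical example, not the PR's code.
interface FutureCallback<V> {
    void onSuccess(V result); // result may be null in newer Guava
    void onFailure(Throwable t);
}

public class NullSafeCallback {
    static String lastMessage;

    static FutureCallback<String> callback = new FutureCallback<String>() {
        @Override
        public void onSuccess(String result) {
            // Guard against null before dereferencing, as required once
            // the parameter is @Nullable.
            if (result == null) {
                lastMessage = "completed with no result";
                return;
            }
            lastMessage = "completed: " + result.toUpperCase();
        }

        @Override
        public void onFailure(Throwable t) {
            lastMessage = "failed: " + t.getMessage();
        }
    };

    public static void main(String[] args) {
        callback.onSuccess(null);
        System.out.println(lastMessage); // completed with no result
        callback.onSuccess("ok");
        System.out.println(lastMessage); // completed: OK
    }
}
```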





[jira] [Updated] (HDDS-2928) Implement ofs://: listStatus

2020-02-11 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-2928:
-
Summary: Implement ofs://: listStatus  (was: Implement ofs://: (recursive) 
listStatus)

> Implement ofs://: listStatus
> 
>
> Key: HDDS-2928
> URL: https://issues.apache.org/jira/browse/HDDS-2928
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> ofs:// should be able to handle list (recursive or not) under "root" and 
> under each volume "directory", as if it were a flat filesystem (if the user 
> has permissions to see the volumes).
> This is dependent on HDDS-2840. Will post PR after HDDS-2840 is reviewed and 
> committed.






[jira] [Issue Comment Deleted] (HDDS-2443) Python client/interface for Ozone

2020-02-11 Thread mingchao zhao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingchao zhao updated HDDS-2443:

Comment: was deleted

(was: [^pyarrow_ozone_test.docx]   Here are the results of the pyarrow 
read-write test)

> Python client/interface for Ozone
> -
>
> Key: HDDS-2443
> URL: https://issues.apache.org/jira/browse/HDDS-2443
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Ozone Client
>Reporter: Li Cheng
>Priority: Major
> Attachments: Ozone with pyarrow.html, OzoneS3.py, 
> pyarrow_ozone_test.docx
>
>
> This Jira will be used to track development for python client/interface of 
> Ozone.
> Original ideas: item#25 in 
> [https://cwiki.apache.org/confluence/display/HADOOP/Ozone+project+ideas+for+new+contributors]
> Ozone Client(Python) for Data Science Notebook such as Jupyter.
>  # Size: Large
>  # PyArrow: [https://pypi.org/project/pyarrow/]
>  # Python -> libhdfs HDFS JNI library (HDFS, S3, ...) -> Java client API. 
> Impala uses libhdfs.
> Path to try:
>  # s3 interface: Ozone s3 gateway(already supported) + AWS python client 
> (boto3)
>  # python native RPC
>  # pyarrow + libhdfs, which use the Java client under the hood.
>  # python + C interface of go / rust ozone library. I created POC go / rust 
> clients earlier which can be improved if the libhdfs interface is not good 
> enough. [By [~elek]]






[GitHub] [hadoop-ozone] vivekratnavel commented on a change in pull request #546: HDDS-2847. Recon should track containers that are missing along with …

2020-02-11 Thread GitBox
vivekratnavel commented on a change in pull request #546: HDDS-2847. Recon 
should track containers that are missing along with …
URL: https://github.com/apache/hadoop-ozone/pull/546#discussion_r377984017
 
 

 ##
 File path: hadoop-ozone/recon/pom.xml
 ##
 @@ -88,83 +88,83 @@
   2. to install dependencies with yarn install
   3. building the frontend application
   -->
-  <plugin>
-    <groupId>com.github.eirslett</groupId>
-    <artifactId>frontend-maven-plugin</artifactId>
-    <version>1.6</version>
-    <configuration>
-      <installDirectory>target</installDirectory>
-      <workingDirectory>${basedir}/src/main/resources/webapps/recon/ozone-recon-web</workingDirectory>
-    </configuration>
-    <executions>
-      <execution>
-        <id>Install node and yarn locally to the project</id>
-        <goals>
-          <goal>install-node-and-yarn</goal>
-        </goals>
-        <configuration>
-          <nodeVersion>v12.1.0</nodeVersion>
-          <yarnVersion>v1.9.2</yarnVersion>
-        </configuration>
-      </execution>
-      <execution>
-        <id>yarn install</id>
-        <goals>
-          <goal>yarn</goal>
-        </goals>
-        <configuration>
-          <arguments>install</arguments>
-        </configuration>
-      </execution>
-      <execution>
-        <id>Build frontend</id>
-        <goals>
-          <goal>yarn</goal>
-        </goals>
-        <configuration>
-          <arguments>run build</arguments>
-        </configuration>
-      </execution>
-    </executions>
-  </plugin>
-  <plugin>
-    <groupId>org.apache.maven.plugins</groupId>
-    <artifactId>maven-resources-plugin</artifactId>
-    <executions>
-      <execution>
-        <id>Copy frontend build to target</id>
-        <phase>process-resources</phase>
-        <goals>
-          <goal>copy-resources</goal>
-        </goals>
-        <configuration>
-          <outputDirectory>${project.build.outputDirectory}/webapps/recon</outputDirectory>
-          <resources>
-            <resource>
-              <directory>${basedir}/src/main/resources/webapps/recon/ozone-recon-web/build</directory>
-              <filtering>true</filtering>
-            </resource>
-          </resources>
-        </configuration>
-      </execution>
-      <execution>
-        <id>Copy frontend static files to target</id>
-        <phase>process-resources</phase>
-        <goals>
-          <goal>copy-resources</goal>
-        </goals>
-        <configuration>
-          <outputDirectory>${project.build.outputDirectory}/webapps/static</outputDirectory>
-          <resources>
-            <resource>
-              <directory>${basedir}/src/main/resources/webapps/recon/ozone-recon-web/build/static</directory>
-              <filtering>true</filtering>
-            </resource>
-          </resources>
-        </configuration>
-      </execution>
-    </executions>
-  </plugin>
+
 
 Review comment:
   Is this block commented out by mistake?





[jira] [Updated] (HDDS-2847) Recon should track containers that are missing along with corresponding keys (FSCK).

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2847:
-
Labels: pull-request-available  (was: )

> Recon should track containers that are missing along with corresponding keys 
> (FSCK).
> 
>
> Key: HDDS-2847
> URL: https://issues.apache.org/jira/browse/HDDS-2847
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>







[GitHub] [hadoop-ozone] avijayanhwx opened a new pull request #546: HDDS-2847. Recon should track containers that are missing along with …

2020-02-11 Thread GitBox
avijayanhwx opened a new pull request #546: HDDS-2847. Recon should track 
containers that are missing along with …
URL: https://github.com/apache/hadoop-ozone/pull/546
 
 
   …corresponding keys (FSCK).
   
   ## What changes were proposed in this pull request?
   Recon should track the list of containers that have no replicas in its own 
SQL DB. This information will be used to serve the missing containers endpoint, 
which returns the list of missing containers along with the keys that were part 
of them. 
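   As a rough illustration of that logic (a toy in-memory model with hypothetical names, not Recon's actual SQL schema): containers whose reported replica count is zero are treated as missing, and the endpoint surfaces the keys mapped to them.

```java
import java.util.*;

// Toy model of missing-container tracking: a container with zero reported
// replicas is "missing", and we return the keys that lived in it.
// Hypothetical names; Recon's real implementation persists this in SQL.
public class MissingContainers {
    public static Map<Long, List<String>> findMissing(
            Map<Long, Integer> replicaCounts,
            Map<Long, List<String>> containerToKeys) {
        Map<Long, List<String>> missing = new TreeMap<>();
        for (Map.Entry<Long, Integer> e : replicaCounts.entrySet()) {
            if (e.getValue() == 0) {
                // No replicas reported for this container: record it
                // together with its keys.
                missing.put(e.getKey(),
                    containerToKeys.getOrDefault(e.getKey(),
                                                 Collections.emptyList()));
            }
        }
        return missing;
    }

    public static void main(String[] args) {
        Map<Long, Integer> replicas = Map.of(1L, 3, 2L, 0);
        Map<Long, List<String>> keys = Map.of(
            1L, List.of("/vol/bucket/k1"),
            2L, List.of("/vol/bucket/k2"));
        System.out.println(findMissing(replicas, keys)); // {2=[/vol/bucket/k2]}
    }
}
```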
   
   ## What is the link to the Apache JIRA
   https://issues.apache.org/jira/browse/HDDS-2847
   
   
   ## How was this patch tested?
   Unit tested.





[jira] [Resolved] (HDDS-2790) concept/Overview.md translation

2020-02-11 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDDS-2790.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

Thanks [~iamabug] for the contribution. I've merged the change to master. 

> concept/Overview.md translation
> ---
>
> Key: HDDS-2790
> URL: https://issues.apache.org/jira/browse/HDDS-2790
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiang Zhang
>Assignee: Xiang Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>







[GitHub] [hadoop-ozone] xiaoyuyao merged pull request #497: HDDS-2790. concept/Overview.md translation

2020-02-11 Thread GitBox
xiaoyuyao merged pull request #497: HDDS-2790. concept/Overview.md translation
URL: https://github.com/apache/hadoop-ozone/pull/497
 
 
   





[jira] [Resolved] (HDDS-2982) Recon server start failed with CNF exception in a cluster with Auto TLS enabled.

2020-02-11 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan resolved HDDS-2982.
-
Resolution: Invalid

Explanation in Github PR.

> Recon server start failed with CNF exception in a cluster with Auto TLS 
> enabled.
> 
>
> Key: HDDS-2982
> URL: https://issues.apache.org/jira/browse/HDDS-2982
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Affects Versions: 0.5.0
>Reporter: Yesha Vora
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code}
> Caused by: java.lang.NoClassDefFoundError: 
> org/eclipse/jetty/util/ssl/SslContextFactory$Server
> at 
> org.apache.hadoop.hdfs.DFSUtil.httpServerTemplateForNNAndJN(DFSUtil.java:1590)
> at 
> org.apache.hadoop.hdds.server.BaseHttpServer.(BaseHttpServer.java:83)
> at 
> org.apache.hadoop.ozone.recon.ReconHttpServer.(ReconHttpServer.java:35)
> at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
> at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
> at 
> com.google.inject.internal.DefaultConstructionProxyFactory$2.newInstance(DefaultConstructionProxyFactory.java:86)
> at 
> com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:105)
> at 
> com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:85)
> at 
> com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:267)
> at 
> com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
> at 
> com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1103)
> at 
> com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
> at 
> com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:145)
> at 
> com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:41)
> at 
> com.google.inject.internal.InjectorImpl$2$1.call(InjectorImpl.java:1016)
> at 
> com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1092)
> at 
> com.google.inject.internal.InjectorImpl$2.get(InjectorImpl.java:1012)
> ... 13 more
> Caused by: java.lang.ClassNotFoundException: 
> org.eclipse.jetty.util.ssl.SslContextFactory$Server
> at 
> java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:583)
> at 
> java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
> at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
> ... 32 more
> {code}






[jira] [Resolved] (HDDS-2188) Implement LocatedFileStatus & getFileBlockLocations to provide node/localization information to Yarn/Mapreduce

2020-02-11 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan resolved HDDS-2188.
-
Resolution: Fixed

Fixed with https://github.com/apache/hadoop-ozone/pull/540.

> Implement LocatedFileStatus & getFileBlockLocations to provide 
> node/localization information to Yarn/Mapreduce
> --
>
> Key: HDDS-2188
> URL: https://issues.apache.org/jira/browse/HDDS-2188
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.5.0
>Reporter: Mukul Kumar Singh
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> For applications like Hive/MapReduce to take advantage of the data locality 
> in Ozone, Ozone should return the location of the Ozone blocks. This is 
> needed for better read performance for Hadoop Applications.
> {code}
> if (file instanceof LocatedFileStatus) {
>   blkLocations = ((LocatedFileStatus) file).getBlockLocations();
> } else {
>   blkLocations = fs.getFileBlockLocations(file, 0, length);
> }
> {code}






[jira] [Updated] (HDDS-2914) Certain Hive queries started to fail on generating splits

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2914:
-
Labels: pull-request-available  (was: )

> Certain Hive queries started to fail on generating splits
> -
>
> Key: HDDS-2914
> URL: https://issues.apache.org/jira/browse/HDDS-2914
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Istvan Fajth
>Assignee: Aravindan Vijayan
>Priority: Critical
>  Labels: pull-request-available
>
> After updating a cluster where I am running TPCDS queries, some queries 
> started to fail.
> The update happened from an early dec state to the jan 10 state of master.
> Most likely the addition of HDDS-2188 is related to the problem, but it is 
> still under investigation.
> The exception I see in the queries:
> {code}
> [ERROR] [Dispatcher thread {Central}] |impl.VertexImpl|: Vertex Input: 
> inventory initializer failed, vertex=vertex_ [Map 1]
> org.apache.tez.dag.app.dag.impl.AMUserCodeException: 
> java.lang.RuntimeException: ORC split generation failed with exception: 
> java.io.IOException: File 
> o3fs://hive.warehouse.fqdn:9862/warehouse/tablespace/managed/hive/100/inventory/delta_001_001_/bucket_0
>  should have had overlap on block starting at 0
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallback.onFailure(RootInputInitializerManager.java:328)
>   at com.google.common.util.concurrent.Futures$6.run(Futures.java:1764)
>   at 
> com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:456)
>   at 
> com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:817)
>   at 
> com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:753)
>   at 
> com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:634)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:110)
>   at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.RuntimeException: ORC split generation failed with 
> exception: java.io.IOException: File 
> o3fs://hive.warehouse.fqdn:9862/warehouse/tablespace/managed/hive/100/inventory/delta_001_001_/bucket_0
>  should have had overlap on block starting at 0
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1915)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:2002)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:532)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:789)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:243)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
>   ... 5 more
> Caused by: java.util.concurrent.ExecutionException: java.io.IOException: File 
> o3fs://hive.warehouse.fqdn:9862/warehouse/tablespace/managed/hive/100/inventory/delta_001_001_/bucket_0
>  should have had overlap on block starting at 0
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1909)
> 

[GitHub] [hadoop-ozone] bharatviswa504 merged pull request #540: HDDS-2914. Certain Hive queries started to fail on generating splits

2020-02-11 Thread GitBox
bharatviswa504 merged pull request #540: HDDS-2914. Certain Hive queries 
started to fail on generating splits
URL: https://github.com/apache/hadoop-ozone/pull/540
 
 
   





[jira] [Resolved] (HDDS-2914) Certain Hive queries started to fail on generating splits

2020-02-11 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2914.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Certain Hive queries started to fail on generating splits
> -
>
> Key: HDDS-2914
> URL: https://issues.apache.org/jira/browse/HDDS-2914
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Istvan Fajth
>Assignee: Aravindan Vijayan
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> After updating a cluster where I am running TPCDS queries, some queries 
> started to fail.
> The update happened from an early dec state to the jan 10 state of master.
> Most likely the addition of HDDS-2188 is related to the problem, but it is 
> still under investigation.
> The exception I see in the queries:
> {code}
> [ERROR] [Dispatcher thread {Central}] |impl.VertexImpl|: Vertex Input: 
> inventory initializer failed, vertex=vertex_ [Map 1]
> org.apache.tez.dag.app.dag.impl.AMUserCodeException: 
> java.lang.RuntimeException: ORC split generation failed with exception: 
> java.io.IOException: File 
> o3fs://hive.warehouse.fqdn:9862/warehouse/tablespace/managed/hive/100/inventory/delta_001_001_/bucket_0
>  should have had overlap on block starting at 0
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallback.onFailure(RootInputInitializerManager.java:328)
>   at com.google.common.util.concurrent.Futures$6.run(Futures.java:1764)
>   at 
> com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:456)
>   at 
> com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:817)
>   at 
> com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:753)
>   at 
> com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:634)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:110)
>   at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.RuntimeException: ORC split generation failed with 
> exception: java.io.IOException: File 
> o3fs://hive.warehouse.fqdn:9862/warehouse/tablespace/managed/hive/100/inventory/delta_001_001_/bucket_0
>  should have had overlap on block starting at 0
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1915)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:2002)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:532)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:789)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:243)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
>   ... 5 more
> Caused by: java.util.concurrent.ExecutionException: java.io.IOException: File 
> o3fs://hive.warehouse.fqdn:9862/warehouse/tablespace/managed/hive/100/inventory/delta_001_001_/bucket_0
>  should have had overlap on block starting at 0
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   

[jira] [Created] (HDDS-3003) Add NFS Filehandle support in OM

2020-02-11 Thread Prashant Pogde (Jira)
Prashant Pogde created HDDS-3003:


 Summary: Add NFS Filehandle support in OM
 Key: HDDS-3003
 URL: https://issues.apache.org/jira/browse/HDDS-3003
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Prashant Pogde









[jira] [Created] (HDDS-3002) Make the Mountd working for Ozone

2020-02-11 Thread Prashant Pogde (Jira)
Prashant Pogde created HDDS-3002:


 Summary: Make the Mountd working for Ozone
 Key: HDDS-3002
 URL: https://issues.apache.org/jira/browse/HDDS-3002
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Prashant Pogde
Assignee: Prashant Pogde









[jira] [Created] (HDDS-3001) NFS support for Ozone

2020-02-11 Thread Prashant Pogde (Jira)
Prashant Pogde created HDDS-3001:


 Summary: NFS support for Ozone
 Key: HDDS-3001
 URL: https://issues.apache.org/jira/browse/HDDS-3001
 Project: Hadoop Distributed Data Store
  Issue Type: New Feature
  Components: Ozone Filesystem
Affects Versions: 0.5.0
Reporter: Prashant Pogde
Assignee: Prashant Pogde
 Fix For: 0.5.0


Provide NFS support for Ozone






[jira] [Updated] (HDDS-3001) NFS support for Ozone

2020-02-11 Thread Prashant Pogde (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prashant Pogde updated HDDS-3001:
-
Issue Type: Task  (was: New Feature)

> NFS support for Ozone
> -
>
> Key: HDDS-3001
> URL: https://issues.apache.org/jira/browse/HDDS-3001
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Filesystem
>Affects Versions: 0.5.0
>Reporter: Prashant Pogde
>Assignee: Prashant Pogde
>Priority: Major
> Fix For: 0.5.0
>
>
> Provide NFS support for Ozone






[jira] [Resolved] (HDDS-2962) Handle replay of OM Prefix ACL requests

2020-02-11 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDDS-2962.
-
Fix Version/s: 0.5.0
   Resolution: Fixed

+1

Committed via Gerrit. Thanks for the contribution [~hanishakoneru], and thanks 
[~bharat] for the review.

> Handle replay of OM Prefix ACL requests
> ---
>
> Key: HDDS-2962
> URL: https://issues.apache.org/jira/browse/HDDS-2962
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> To ensure that Prefix acl operations are idempotent, compare the 
> transactionID with the objectID and updateID to make sure that the 
> transaction is not a replay. If the transactionID <= updateID, then it 
> implies that the transaction is a replay and hence it should be skipped.
> OMPrefixAclRequests (Add, Remove and Set ACL requests) are made idempotent in 
> this Jira.
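The replay rule described in the issue can be sketched in a few lines of Java; the class and method names below are illustrative, not the actual OM request code:

```java
// Sketch of the replay-detection rule described above: a transaction whose
// log index is <= the object's updateID has already been applied and must
// be skipped. ReplayCheck/isReplay are illustrative names, not the OM API.
public class ReplayCheck {
  static boolean isReplay(long transactionLogIndex, long updateID) {
    return transactionLogIndex <= updateID;
  }

  public static void main(String[] args) {
    System.out.println(isReplay(5, 7));  // true: already applied, skip
    System.out.println(isReplay(8, 7));  // false: new transaction, apply
  }
}
```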






[GitHub] [hadoop-ozone] arp7 merged pull request #513: HDDS-2962. Handle replay of OM Prefix ACL requests

2020-02-11 Thread GitBox
arp7 merged pull request #513: HDDS-2962. Handle replay of OM Prefix ACL 
requests
URL: https://github.com/apache/hadoop-ozone/pull/513
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Updated] (HDDS-3000) Update guava version to 28.2-jre

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3000:
-
Labels: pull-request-available  (was: )

> Update guava version to 28.2-jre
> 
>
> Key: HDDS-3000
> URL: https://issues.apache.org/jira/browse/HDDS-3000
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
>







[GitHub] [hadoop-ozone] jojochuang opened a new pull request #545: HDDS-3000. Update guava version to 28.2-jre.

2020-02-11 Thread GitBox
jojochuang opened a new pull request #545: HDDS-3000. Update guava version to 
28.2-jre.
URL: https://github.com/apache/hadoop-ozone/pull/545
 
 
   ## What changes were proposed in this pull request?
   Update Ozone's guava dependency to the latest.
   
   ## What is the link to the Apache JIRA
   
   ## How was this patch tested?
   





[jira] [Assigned] (HDDS-3000) Update guava version to 28.2-jre

2020-02-11 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDDS-3000:
-

Assignee: Wei-Chiu Chuang

> Update guava version to 28.2-jre
> 
>
> Key: HDDS-3000
> URL: https://issues.apache.org/jira/browse/HDDS-3000
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>







[jira] [Created] (HDDS-3000) Update guava version to 28.2-jre

2020-02-11 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HDDS-3000:
-

 Summary: Update guava version to 28.2-jre
 Key: HDDS-3000
 URL: https://issues.apache.org/jira/browse/HDDS-3000
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang









[jira] [Resolved] (HDDS-2891) Apache NiFi PutFile processor is failing with secure Ozone S3G

2020-02-11 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2891.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Apache NiFi PutFile processor is failing with secure Ozone S3G
> --
>
> Key: HDDS-2891
> URL: https://issues.apache.org/jira/browse/HDDS-2891
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ifigeneia Derekli
>Assignee: Marton Elek
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
>  
> (1) Create a simple PutS3Object processor in NiFi
> (2) The request from NiFi to S3g will fail with HTTP 500
> (3) The exception in the s3g log:
>  
> {code:java}
>  s3g_1   | Caused by: java.io.IOException: Couldn't create RpcClient 
> protocol
> s3g_1   | at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:197)
> s3g_1   | at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:173)
> s3g_1   | at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClient(OzoneClientFactory.java:74)
> s3g_1   | at 
> org.apache.hadoop.ozone.s3.OzoneClientProducer.getClient(OzoneClientProducer.java:114)
> s3g_1   | at 
> org.apache.hadoop.ozone.s3.OzoneClientProducer.createClient(OzoneClientProducer.java:71)
> s3g_1   | at 
> jdk.internal.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
> s3g_1   | at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> s3g_1   | at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> s3g_1   | at 
> org.jboss.weld.injection.StaticMethodInjectionPoint.invoke(StaticMethodInjectionPoint.java:88)
> s3g_1   | ... 92 more
> s3g_1   | Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
>  Invalid S3 identifier:OzoneToken owner=testuser/s...@example.com, renewer=, 
> realUser=, issueDate=0, maxDate=0, sequenceNumber=0, masterKeyId=0, 
> strToSign=AWS4-HMAC-SHA256
> s3g_1   | 20200115T101329Z
> s3g_1   | 20200115/us-east-1/s3/aws4_request
> s3g_1   | (hash), signature=(sign), 
> awsAccessKeyId=testuser/s...@example.com{code}
>  






[GitHub] [hadoop-ozone] bharatviswa504 merged pull request #449: HDDS-2891. Apache NiFi PutFile processor is failing with secure Ozone S3G

2020-02-11 Thread GitBox
bharatviswa504 merged pull request #449: HDDS-2891. Apache NiFi PutFile 
processor is failing with secure Ozone S3G
URL: https://github.com/apache/hadoop-ozone/pull/449
 
 
   





[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #449: HDDS-2891. Apache NiFi PutFile processor is failing with secure Ozone S3G

2020-02-11 Thread GitBox
bharatviswa504 commented on a change in pull request #449: HDDS-2891. Apache 
NiFi PutFile processor is failing with secure Ozone S3G
URL: https://github.com/apache/hadoop-ozone/pull/449#discussion_r377803827
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/OzoneS3Util.java
 ##
 @@ -38,9 +38,9 @@
   private OzoneS3Util() {
   }
 
-  public static String getVolumeName(String userName) {
+  public static String getS3Username(String userName) {
 
 Review comment:
   Agreed. I am fine with the current way.





[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #518: HDDS-2791. concept/OzoneManager.md translation

2020-02-11 Thread GitBox
xiaoyuyao commented on a change in pull request #518: HDDS-2791. 
concept/OzoneManager.md translation
URL: https://github.com/apache/hadoop-ozone/pull/518#discussion_r377802041
 
 

 ##
 File path: hadoop-hdds/docs/content/concept/OzoneManager.zh.md
 ##
 @@ -0,0 +1,64 @@
+---
+title: "Ozone Manager"
+date: "2017-09-14"
+weight: 2
+summary: Ozone Manager 是 Ozone 主要的命名空间服务,它管理了卷、桶和键的生命周期。
+---
+
+
+Ozone Manager(OM)管理 Ozone 的命名空间。
+
+当向 Ozone 写入数据时,你需要向 OM 请求一个块,OM 会返回一个块并记录下相关信息。当你想要读取那个文件时,你也需要先通过 OM 获取那个块的地址。
+
+OM 允许用户在卷和桶下管理键,卷和桶都是命名空间的一部分,也由 OM 管理。
+
+每个卷都是一个独立的命名空间的根元素,这一点和 HDFS 不同,HDFS 提供的是单个根元素的文件系统。
+
+Ozone 的命名空间是卷的集合,与 HDFS 中单根的树状结构相比,可以看作是个森林,因此可以非常容易地部署多个 OM 来进行扩展。
+
+## Ozone Manager 元数据
+
+OM 维护了卷、桶和键的列表。它为每个用户维护卷的列表,为每个卷维护桶的列表,为每个桶维护键的列表。
+
+OM 使用 Apache Ratis(Raft 协议的一种实现)来复制 OM 的状态,这为 Ozone 提供了高可用性保证。
+
+
+## Ozone Manager 和 Storage Container Manager
+
+为了方便理解 OM 和 SCM 之间的关系,我们来看看写入键和读取键的过程。
+
+### 写入键
+
+* 为了向 Ozone 中的某个卷下的某个桶的某个键写入数据,用户需要先向 OM 发起写请求,OM 会判断该用户是否有权限写入该键,如果有权限,OM 
需要分配给用户一个块用于写数据。
+
+* 要获得一个块,OM 需要向 SCM 发送请求(SCM 是数据节点的管理者),SCM 选择三个数据节点,创建新块并向 OM 返回块 ID。
+
+* OM 在自己的元数据中记录下块的信息,然后将块和块 token(带有向该块写数据的权限)返回给用户。
+
+* 用户使用块 token 证明自己有权限向该块写入数据,并向对应的数据节点写入数据。
+
+* 数据写入完成后,用户会更新该块在 OM 中的信息。
+
+
+### 读取键
+
+* 键读取相对比较简单,用户首先向 OM 请求块列表。
 
 Review comment:
   请求块列表=》请求该键的块列表





[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #518: HDDS-2791. concept/OzoneManager.md translation

2020-02-11 Thread GitBox
xiaoyuyao commented on a change in pull request #518: HDDS-2791. 
concept/OzoneManager.md translation
URL: https://github.com/apache/hadoop-ozone/pull/518#discussion_r377801273
 
 

 ##
 File path: hadoop-hdds/docs/content/concept/OzoneManager.zh.md
 ##
 @@ -0,0 +1,64 @@
+---
+title: "Ozone Manager"
+date: "2017-09-14"
+weight: 2
+summary: Ozone Manager 是 Ozone 主要的命名空间服务,它管理了卷、桶和键的生命周期。
+---
+
+
+Ozone Manager(OM)管理 Ozone 的命名空间。
+
+当向 Ozone 写入数据时,你需要向 OM 请求一个块,OM 会返回一个块并记录下相关信息。当你想要读取那个文件时,你也需要先通过 OM 获取那个块的地址。
+
+OM 允许用户在卷和桶下管理键,卷和桶都是命名空间的一部分,也由 OM 管理。
+
+每个卷都是一个独立的命名空间的根元素,这一点和 HDFS 不同,HDFS 提供的是单个根元素的文件系统。
+
+Ozone 的命名空间是卷的集合,与 HDFS 中单根的树状结构相比,可以看作是个森林,因此可以非常容易地部署多个 OM 来进行扩展。
+
+## Ozone Manager 元数据
+
+OM 维护了卷、桶和键的列表。它为每个用户维护卷的列表,为每个卷维护桶的列表,为每个桶维护键的列表。
+
+OM 使用 Apache Ratis(Raft 协议的一种实现)来复制 OM 的状态,这为 Ozone 提供了高可用性保证。
+
+
+## Ozone Manager 和 Storage Container Manager
+
+为了方便理解 OM 和 SCM 之间的关系,我们来看看写入键和读取键的过程。
+
+### 写入键
+
+* 为了向 Ozone 中的某个卷下的某个桶的某个键写入数据,用户需要先向 OM 发起写请求,OM 会判断该用户是否有权限写入该键,如果有权限,OM 
需要分配给用户一个块用于写数据。
+
+* 要获得一个块,OM 需要向 SCM 发送请求(SCM 是数据节点的管理者),SCM 选择三个数据节点,创建新块并向 OM 返回块 ID。
+
+* OM 在自己的元数据中记录下块的信息,然后将块和块 token(带有向该块写数据的权限)返回给用户。
 
 Review comment:
   权限=》授权





[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #518: HDDS-2791. concept/OzoneManager.md translation

2020-02-11 Thread GitBox
xiaoyuyao commented on a change in pull request #518: HDDS-2791. 
concept/OzoneManager.md translation
URL: https://github.com/apache/hadoop-ozone/pull/518#discussion_r377799520
 
 

 ##
 File path: hadoop-hdds/docs/content/concept/OzoneManager.zh.md
 ##
 @@ -0,0 +1,64 @@
+---
+title: "Ozone Manager"
+date: "2017-09-14"
+weight: 2
+summary: Ozone Manager 是 Ozone 主要的命名空间服务,它管理了卷、桶和键的生命周期。
+---
+
+
+Ozone Manager(OM)管理 Ozone 的命名空间。
+
+当向 Ozone 写入数据时,你需要向 OM 请求一个块,OM 会返回一个块并记录下相关信息。当你想要读取那个文件时,你也需要先通过 OM 获取那个块的地址。
+
+OM 允许用户在卷和桶下管理键,卷和桶都是命名空间的一部分,也由 OM 管理。
+
+每个卷都是一个独立的命名空间的根元素,这一点和 HDFS 不同,HDFS 提供的是单个根元素的文件系统。
+
+Ozone 的命名空间是卷的集合,与 HDFS 中单根的树状结构相比,可以看作是个森林,因此可以非常容易地部署多个 OM 来进行扩展。
+
+## Ozone Manager 元数据
+
+OM 维护了卷、桶和键的列表。它为每个用户维护卷的列表,为每个卷维护桶的列表,为每个桶维护键的列表。
+
+OM 使用 Apache Ratis(Raft 协议的一种实现)来复制 OM 的状态,这为 Ozone 提供了高可用性保证。
+
+
+## Ozone Manager 和 Storage Container Manager
+
+为了方便理解 OM 和 SCM 之间的关系,我们来看看写入键和读取键的过程。
+
+### 写入键
+
+* 为了向 Ozone 中的某个卷下的某个桶的某个键写入数据,用户需要先向 OM 发起写请求,OM 会判断该用户是否有权限写入该键,如果有权限,OM 
需要分配给用户一个块用于写数据。
+
+* 要获得一个块,OM 需要向 SCM 发送请求(SCM 是数据节点的管理者),SCM 选择三个数据节点,创建新块并向 OM 返回块 ID。
 
 Review comment:
   创建新块并向 OM 返回块 ID=》分配新块并向 OM 返回块 标识。





[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #518: HDDS-2791. concept/OzoneManager.md translation

2020-02-11 Thread GitBox
xiaoyuyao commented on a change in pull request #518: HDDS-2791. 
concept/OzoneManager.md translation
URL: https://github.com/apache/hadoop-ozone/pull/518#discussion_r377799520
 
 

 ##
 File path: hadoop-hdds/docs/content/concept/OzoneManager.zh.md
 ##
 @@ -0,0 +1,64 @@
+---
+title: "Ozone Manager"
+date: "2017-09-14"
+weight: 2
+summary: Ozone Manager 是 Ozone 主要的命名空间服务,它管理了卷、桶和键的生命周期。
+---
+
+
+Ozone Manager(OM)管理 Ozone 的命名空间。
+
+当向 Ozone 写入数据时,你需要向 OM 请求一个块,OM 会返回一个块并记录下相关信息。当你想要读取那个文件时,你也需要先通过 OM 获取那个块的地址。
+
+OM 允许用户在卷和桶下管理键,卷和桶都是命名空间的一部分,也由 OM 管理。
+
+每个卷都是一个独立的命名空间的根元素,这一点和 HDFS 不同,HDFS 提供的是单个根元素的文件系统。
+
+Ozone 的命名空间是卷的集合,与 HDFS 中单根的树状结构相比,可以看作是个森林,因此可以非常容易地部署多个 OM 来进行扩展。
+
+## Ozone Manager 元数据
+
+OM 维护了卷、桶和键的列表。它为每个用户维护卷的列表,为每个卷维护桶的列表,为每个桶维护键的列表。
+
+OM 使用 Apache Ratis(Raft 协议的一种实现)来复制 OM 的状态,这为 Ozone 提供了高可用性保证。
+
+
+## Ozone Manager 和 Storage Container Manager
+
+为了方便理解 OM 和 SCM 之间的关系,我们来看看写入键和读取键的过程。
+
+### 写入键
+
+* 为了向 Ozone 中的某个卷下的某个桶的某个键写入数据,用户需要先向 OM 发起写请求,OM 会判断该用户是否有权限写入该键,如果有权限,OM 
需要分配给用户一个块用于写数据。
+
+* 要获得一个块,OM 需要向 SCM 发送请求(SCM 是数据节点的管理者),SCM 选择三个数据节点,创建新块并向 OM 返回块 ID。
 
 Review comment:
   创建新块=》分配新块





[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #518: HDDS-2791. concept/OzoneManager.md translation

2020-02-11 Thread GitBox
xiaoyuyao commented on a change in pull request #518: HDDS-2791. 
concept/OzoneManager.md translation
URL: https://github.com/apache/hadoop-ozone/pull/518#discussion_r377798973
 
 

 ##
 File path: hadoop-hdds/docs/content/concept/OzoneManager.zh.md
 ##
 @@ -0,0 +1,64 @@
+---
+title: "Ozone Manager"
+date: "2017-09-14"
+weight: 2
+summary: Ozone Manager 是 Ozone 主要的命名空间服务,它管理了卷、桶和键的生命周期。
+---
+
+
+Ozone Manager(OM)管理 Ozone 的命名空间。
+
+当向 Ozone 写入数据时,你需要向 OM 请求一个块,OM 会返回一个块并记录下相关信息。当你想要读取那个文件时,你也需要先通过 OM 获取那个块的地址。
+
+OM 允许用户在卷和桶下管理键,卷和桶都是命名空间的一部分,也由 OM 管理。
+
+每个卷都是一个独立的命名空间的根元素,这一点和 HDFS 不同,HDFS 提供的是单个根元素的文件系统。
+
+Ozone 的命名空间是卷的集合,与 HDFS 中单根的树状结构相比,可以看作是个森林,因此可以非常容易地部署多个 OM 来进行扩展。
+
+## Ozone Manager 元数据
+
+OM 维护了卷、桶和键的列表。它为每个用户维护卷的列表,为每个卷维护桶的列表,为每个桶维护键的列表。
+
+OM 使用 Apache Ratis(Raft 协议的一种实现)来复制 OM 的状态,这为 Ozone 提供了高可用性保证。
+
+
+## Ozone Manager 和 Storage Container Manager
+
+为了方便理解 OM 和 SCM 之间的关系,我们来看看写入键和读取键的过程。
+
+### 写入键
+
+* 为了向 Ozone 中的某个卷下的某个桶的某个键写入数据,用户需要先向 OM 发起写请求,OM 会判断该用户是否有权限写入该键,如果有权限,OM 
需要分配给用户一个块用于写数据。
+
+* 要获得一个块,OM 需要向 SCM 发送请求(SCM 是数据节点的管理者),SCM 选择三个数据节点,创建新块并向 OM 返回块 ID。
 
 Review comment:
   要获得一个块,OM 需要向 SCM 发送请求=》OM通过SCM请求分配一个块。





[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #518: HDDS-2791. concept/OzoneManager.md translation

2020-02-11 Thread GitBox
xiaoyuyao commented on a change in pull request #518: HDDS-2791. 
concept/OzoneManager.md translation
URL: https://github.com/apache/hadoop-ozone/pull/518#discussion_r377797493
 
 

 ##
 File path: hadoop-hdds/docs/content/concept/OzoneManager.zh.md
 ##
 @@ -0,0 +1,64 @@
+---
+title: "Ozone Manager"
+date: "2017-09-14"
+weight: 2
+summary: Ozone Manager 是 Ozone 主要的命名空间服务,它管理了卷、桶和键的生命周期。
+---
+
+
+Ozone Manager(OM)管理 Ozone 的命名空间。
+
+当向 Ozone 写入数据时,你需要向 OM 请求一个块,OM 会返回一个块并记录下相关信息。当你想要读取那个文件时,你也需要先通过 OM 获取那个块的地址。
+
+OM 允许用户在卷和桶下管理键,卷和桶都是命名空间的一部分,也由 OM 管理。
+
+每个卷都是一个独立的命名空间的根元素,这一点和 HDFS 不同,HDFS 提供的是单个根元素的文件系统。
+
+Ozone 的命名空间是卷的集合,与 HDFS 中单根的树状结构相比,可以看作是个森林,因此可以非常容易地部署多个 OM 来进行扩展。
+
+## Ozone Manager 元数据
+
+OM 维护了卷、桶和键的列表。它为每个用户维护卷的列表,为每个卷维护桶的列表,为每个桶维护键的列表。
+
+OM 使用 Apache Ratis(Raft 协议的一种实现)来复制 OM 的状态,这为 Ozone 提供了高可用性保证。
+
+
+## Ozone Manager 和 Storage Container Manager
+
+为了方便理解 OM 和 SCM 之间的关系,我们来看看写入键和读取键的过程。
+
+### 写入键
+
+* 为了向 Ozone 中的某个卷下的某个桶的某个键写入数据,用户需要先向 OM 发起写请求,OM 会判断该用户是否有权限写入该键,如果有权限,OM 
需要分配给用户一个块用于写数据。
 
 Review comment:
   如果有权限,OM 需要分配给用户一个块用于写数据。=》如果权限许可,OM分配一个块用于ozone客户端数据写入。





[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #518: HDDS-2791. concept/OzoneManager.md translation

2020-02-11 Thread GitBox
xiaoyuyao commented on a change in pull request #518: HDDS-2791. 
concept/OzoneManager.md translation
URL: https://github.com/apache/hadoop-ozone/pull/518#discussion_r377795301
 
 

 ##
 File path: hadoop-hdds/docs/content/concept/OzoneManager.zh.md
 ##
 @@ -0,0 +1,64 @@
+---
+title: "Ozone Manager"
+date: "2017-09-14"
+weight: 2
+summary: Ozone Manager 是 Ozone 主要的命名空间服务,它管理了卷、桶和键的生命周期。
+---
+
+
+Ozone Manager(OM)管理 Ozone 的命名空间。
+
+当向 Ozone 写入数据时,你需要向 OM 请求一个块,OM 会返回一个块并记录下相关信息。当你想要读取那个文件时,你也需要先通过 OM 获取那个块的地址。
+
+OM 允许用户在卷和桶下管理键,卷和桶都是命名空间的一部分,也由 OM 管理。
+
+每个卷都是一个独立的命名空间的根元素,这一点和 HDFS 不同,HDFS 提供的是单个根元素的文件系统。
+
+Ozone 的命名空间是卷的集合,与 HDFS 中单根的树状结构相比,可以看作是个森林,因此可以非常容易地部署多个 OM 来进行扩展。
+
+## Ozone Manager 元数据
+
+OM 维护了卷、桶和键的列表。它为每个用户维护卷的列表,为每个卷维护桶的列表,为每个桶维护键的列表。
+
+OM 使用 Apache Ratis(Raft 协议的一种实现)来复制 OM 的状态,这为 Ozone 提供了高可用性保证。
+
+
+## Ozone Manager 和 Storage Container Manager
+
+为了方便理解 OM 和 SCM 之间的关系,我们来看看写入键和读取键的过程。
+
+### 写入键
+
+* 为了向 Ozone 中的某个卷下的某个桶的某个键写入数据,用户需要先向 OM 发起写请求,OM 会判断该用户是否有权限写入该键,如果有权限,OM 
需要分配给用户一个块用于写数据。
 
 Review comment:
   用户=>Ozone客户端





[GitHub] [hadoop-ozone] bharatviswa504 edited a comment on issue #536: HDDS-2988. Use getPropertiesByPrefix instead of regex in matching ratis client and server properties.

2020-02-11 Thread GitBox
bharatviswa504 edited a comment on issue #536: HDDS-2988. Use 
getPropertiesByPrefix instead of regex in matching ratis client and server 
properties.
URL: https://github.com/apache/hadoop-ozone/pull/536#issuecomment-584742986
 
 
   > > The reason for server, client and grpc prefix is to group the 
configurations by the group
   > 
   > Thanks for the answer. I understand that grouping is useful. I don't 
understand why we need double grouping.
   > 
   > > Do we really need the server and client part here? Why don't we use 
`datanode.ratis.raft.client.*` and `datanode.ratis.raft.server.*` instead of 
`datanode.ratis.server.raft.server`? Is there any use case where we need to 
configure the client configs on the server side? 
(`datanode.ratis.server.raft.client.*` for example?)
   > 
   > If we already have a grouping on the ratis side why do we introduce an 
other one on the ozone side?
   
   The reason for the "datanode" prefix is that if OM also adopts a similar 
approach in the future, we need a way to distinguish which config belongs to 
which component. So I created 3 config classes, one for the ratis server under 
datanode.ratis.server, and similarly for the others. I understand that 
ratis.server is duplicated again with this approach. Any other thoughts on how 
to handle this?
   
   Main reason for doing it this way:
   1. Distinguish these properties for each component. (As OM also uses an OM 
ratis server, sooner or later we might do similar work for OM as well.)
   
   One way I think we can remove this duplication:
   
   Thinking about it more, we can use "datanode.raft.server" for the 
RatisServer config, "datanode.raft.client" for the client, and 
"datanode.raft.grpc" for grpc. That way we can use the ratis grouping directly. 
Is this what you are suggesting here?
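For illustration, the prefix-based grouping under discussion can be sketched with plain java.util.Properties; byPrefix below is a hypothetical stand-in for a getPropertiesByPrefix-style helper, not the actual Ozone or Ratis API:

```java
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;

// Minimal sketch of prefix-based config grouping: collect every property
// under a component prefix (e.g. "datanode.ratis.server.") and strip the
// prefix, the way a getPropertiesByPrefix-style helper would.
public class PrefixProps {
  static Map<String, String> byPrefix(Properties props, String prefix) {
    return props.stringPropertyNames().stream()
        .filter(k -> k.startsWith(prefix))
        .collect(Collectors.toMap(
            k -> k.substring(prefix.length()), props::getProperty));
  }

  public static void main(String[] args) {
    Properties p = new Properties();
    p.setProperty("datanode.ratis.server.raft.server.rpc.timeout", "3s");
    p.setProperty("datanode.ratis.client.raft.client.retry", "5");
    // Only the server-side keys survive, with the component prefix removed.
    System.out.println(byPrefix(p, "datanode.ratis.server."));
  }
}
```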
   





[GitHub] [hadoop-ozone] bharatviswa504 commented on issue #536: HDDS-2988. Use getPropertiesByPrefix instead of regex in matching ratis client and server properties.

2020-02-11 Thread GitBox
bharatviswa504 commented on issue #536: HDDS-2988. Use getPropertiesByPrefix 
instead of regex in matching ratis client and server properties.
URL: https://github.com/apache/hadoop-ozone/pull/536#issuecomment-584742986
 
 
   > > The reason for server, client and grpc prefix is to group the 
configurations by the group
   > 
   > Thanks for the answer. I understand that grouping is useful. I don't 
understand why we need double grouping.
   > 
   > > Do we really need the server and client part here? Why don't we use 
`datanode.ratis.raft.client.*` and `datanode.ratis.raft.server.*` instead of 
`datanode.ratis.server.raft.server`? Is there any use case where we need to 
configure the client configs on the server side? 
(`datanode.ratis.server.raft.client.*` for example?)
   > 
   > If we already have a grouping on the ratis side why do we introduce an 
other one on the ozone side?
   
   The reason for the "datanode" prefix is that if OM also adopts a similar 
approach in the future, we need a way to distinguish which config belongs to 
which component. So I created 3 config classes, one for the ratis server under 
datanode.ratis.server, and similarly for the others. I understand that 
ratis.server is duplicated again with this approach. Any other thoughts on how 
to handle this?
   
   





[GitHub] [hadoop-ozone] elek closed pull request #534: HDDS-2987. Add metrics to OM DoubleBuffer.

2020-02-11 Thread GitBox
elek closed pull request #534: HDDS-2987. Add metrics to OM DoubleBuffer.
URL: https://github.com/apache/hadoop-ozone/pull/534
 
 
   





[jira] [Updated] (HDDS-2987) Add metrics to OM DoubleBuffer

2020-02-11 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-2987:
--
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Add metrics to OM DoubleBuffer
> --
>
> Key: HDDS-2987
> URL: https://issues.apache.org/jira/browse/HDDS-2987
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Add a metric that will help in understanding the Average flush time which 
> will help in understanding how the flush time increases over time.
> Add another metric to show the average number of flush transactions in an 
> iteration. This will show how many transactions are flushed in a single 
> iteration over time.
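The two metrics described can be maintained incrementally; this is a generic sketch with illustrative names, not the actual OM DoubleBuffer metrics class:

```java
// Generic sketch of the two metrics described above: average flush time and
// average number of transactions per flush iteration, both derived from
// running totals updated once per flush.
public class DoubleBufferMetricsSketch {
  private long iterations, totalTransactions, totalFlushNanos;

  void recordFlush(long numTransactions, long flushNanos) {
    iterations++;
    totalTransactions += numTransactions;
    totalFlushNanos += flushNanos;
  }

  double avgFlushTimeNanos() {
    return iterations == 0 ? 0 : (double) totalFlushNanos / iterations;
  }

  double avgTransactionsPerIteration() {
    return iterations == 0 ? 0 : (double) totalTransactions / iterations;
  }

  public static void main(String[] args) {
    DoubleBufferMetricsSketch m = new DoubleBufferMetricsSketch();
    m.recordFlush(100, 2_000_000);  // 100 txns flushed in 2 ms
    m.recordFlush(300, 4_000_000);  // 300 txns flushed in 4 ms
    System.out.println(m.avgTransactionsPerIteration()); // 200.0
    System.out.println(m.avgFlushTimeNanos());           // 3000000.0
  }
}
```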






[GitHub] [hadoop-ozone] elek commented on issue #534: HDDS-2987. Add metrics to OM DoubleBuffer.

2020-02-11 Thread GitBox
elek commented on issue #534: HDDS-2987. Add metrics to OM DoubleBuffer.
URL: https://github.com/apache/hadoop-ozone/pull/534#issuecomment-584673375
 
 
   > let me know your thoughts
   
   I misunderstood the average calculation previously, so I used the wrong 
examples. This is a better one:
   
   Let's say you have 100 days of uptime, and on each day you had 2_000_000 
flush transactions with 1_000_000 flush operations. The 
avgFlushTransactionsInOneIteration is (2_000_000 * 100) / (1_000_000 * 100) = 
2.0
   
   The 101st day is a very bad day: you still have 2_000_000 transactions per 
day, but with 1_500_000 operations. That is 50% more operations than before, 
yet from the average it's very hard to notice: (2_000_000 * 100 + 2_000_000) / 
(1_000_000 * 100 + 1_500_000) ≈ 1.99
   
   I think a generic average (calculated over **all the events since 
start-up**) is not very effective. It is expressive at the beginning, but 
later it cannot surface real problems.
   
   With real moving averages (which can easily be calculated by Prometheus or 
any other monitoring system) you can see the average over the **last day** or 
any other period.
   
   But this is not a blocking problem. I don't think the average in this patch 
is a very useful metric for production, but it might be useful for your 
performance tests (where you are only interested in the few hours after 
startup). (And you can always use Prometheus to calculate moving averages...)
   
   Long story short: I will merge it now... Thanks for the patch!
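The arithmetic in the example above can be checked with a small sketch; cumulativeAvg is just total transactions divided by total flush operations, and none of this is OM code:

```java
// Numeric check of the example above: a lifetime (cumulative) average barely
// moves after one bad day, while the per-day ratio shows the regression.
public class FlushAverage {
  static double cumulativeAvg(double totalTransactions, double totalFlushes) {
    return totalTransactions / totalFlushes;
  }

  public static void main(String[] args) {
    double txPerDay = 2_000_000, flushesPerDay = 1_000_000;
    // 100 good days, then day 101 with 50% more flush operations (1_500_000).
    double lifetime = cumulativeAvg(txPerDay * 101,
        flushesPerDay * 100 + 1_500_000);
    double day101 = cumulativeAvg(txPerDay, 1_500_000);
    System.out.printf(java.util.Locale.ROOT,
        "lifetime=%.3f day101=%.3f%n", lifetime, day101);
    // lifetime=1.990 (barely moved from 2.0), day101=1.333
  }
}
```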





[GitHub] [hadoop-ozone] elek commented on a change in pull request #534: HDDS-2987. Add metrics to OM DoubleBuffer.

2020-02-11 Thread GitBox
elek commented on a change in pull request #534: HDDS-2987. Add metrics to OM 
DoubleBuffer.
URL: https://github.com/apache/hadoop-ozone/pull/534#discussion_r377671317
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -145,7 +146,10 @@ private void flushTransactions() {
   }
 });
 
+long startTime = Time.monotonicNowNanos();
 omMetadataManager.getStore().commitBatchOperation(batchOperation);
+ozoneManagerDoubleBufferMetrics.updateFlushTime(
 
 Review comment:
   > think that is fine right because anyway we got an exception, so missing 
updateFlushTime is fine I believe.
   
   Fair enough. Thanks for helping me understand it.
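   For reference, a rough shell analogue of the pattern in the diff above (sample a clock before and after the operation, then report the delta to the metric). The names and the `sleep` stand-in are made up, and `date +%s%N` is GNU-specific wall-clock time, not a monotonic clock like `Time.monotonicNowNanos()`:

```shell
start=$(date +%s%N)                 # nanoseconds (GNU date)
sleep 0.1                           # stand-in for commitBatchOperation()
end=$(date +%s%N)
elapsed_ms=$(( (end - start) / 1000000 ))
echo "flush took ${elapsed_ms} ms"  # this delta is what updateFlushTime() would record
```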





[GitHub] [hadoop-ozone] elek commented on issue #536: HDDS-2988. Use getPropertiesByPrefix instead of regex in matching ratis client and server properties.

2020-02-11 Thread GitBox
elek commented on issue #536: HDDS-2988. Use getPropertiesByPrefix instead of 
regex in matching ratis client and server properties.
URL: https://github.com/apache/hadoop-ozone/pull/536#issuecomment-584662251
 
 
   > The reason for server, client and grpc prefix is to group the 
configurations by the group
   
   Thanks for the answer. I understand that grouping is useful. I don't understand 
why we need double grouping. 
   
   > Do we really need the server and client part here? Why don't we use 
`datanode.ratis.raft.client.*` and `datanode.ratis.raft.server.*` instead of 
`datanode.ratis.server.raft.server`? Is there any use case where we need to 
configure the client configs on the server side? 
(`datanode.ratis.server.raft.client.*` for example?)
   
   If we already have a grouping on the Ratis side, why do we introduce another 
one on the Ozone side?
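   A small sketch of what single-level prefix grouping would look like (the property names below are illustrative, not the actual Ozone/Ratis keys):

```shell
# With one shared prefix per group, selecting a group is just stripping the
# prefix -- no regex matching and no extra server/client level needed.
cat > props.txt <<'EOF'
datanode.ratis.raft.client.rpc.request.timeout=3s
datanode.ratis.raft.server.rpc.timeout.min=5s
datanode.ratis.raft.grpc.message.size.max=32MB
EOF
sed -n 's/^datanode\.ratis\.raft\.server\.//p' props.txt   # -> rpc.timeout.min=5s
```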





[GitHub] [hadoop-ozone] elek commented on issue #544: HDDS-2999. Move server-related shared utilities from common to framework

2020-02-11 Thread GitBox
elek commented on issue #544: HDDS-2999. Move server-related shared utilities 
from common to framework
URL: https://github.com/apache/hadoop-ozone/pull/544#issuecomment-584660106
 
 
   @anuengineer Another step toward a Hadoop-independent common. Please review 
if you have time (no logic has been changed, just methods and classes moved 
between locations...) 





[GitHub] [hadoop-ozone] elek opened a new pull request #544: HDDS-2999. Move server-related shared utilities from common to framework

2020-02-11 Thread GitBox
elek opened a new pull request #544: HDDS-2999. Move server-related shared 
utilities from common to framework
URL: https://github.com/apache/hadoop-ozone/pull/544
 
 
   ## What changes were proposed in this pull request?
   
   The hdds-common project is shared between all the client and server projects; 
the hdds-server-framework project is shared between all the server-side services.
   
   To reduce unnecessary dependencies (on Hadoop, for example) we can move all 
the server-side classes (e.g. the RocksDB layer and the certificate tools) from 
common to framework.
   
   We don't need the RocksDB utilities and certificate tools on the client side.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2999
   
   ## How was this patch tested?
   
   With the maven build. This is only a compile-level restructuring of the code.





[jira] [Updated] (HDDS-2999) Move server-related shared utilities from common to framework

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2999:
-
Labels: pull-request-available  (was: )

> Move server-related shared utilities from common to framework
> -
>
> Key: HDDS-2999
> URL: https://issues.apache.org/jira/browse/HDDS-2999
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
>
> hdds-common project is shared between all the client and server projects. 
> hdds-server-framework project is shared between all the server side services.
> To reduce unnecessary dependencies (to Hadoop, for example) we can move all 
> the server-side related classes (eg. rocksdb layer, certificate tools) to the 
> framework from the common.
> We don't need the rocksdb utilities and certificate tools on the client side.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2999) Move server-related shared utilities from common to framework

2020-02-11 Thread Marton Elek (Jira)
Marton Elek created HDDS-2999:
-

 Summary: Move server-related shared utilities from common to 
framework
 Key: HDDS-2999
 URL: https://issues.apache.org/jira/browse/HDDS-2999
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Marton Elek


The hdds-common project is shared between all the client and server projects; 
the hdds-server-framework project is shared between all the server-side services.

To reduce unnecessary dependencies (on Hadoop, for example) we can move all the 
server-side classes (e.g. the RocksDB layer and the certificate tools) from the 
common to the framework.

We don't need the RocksDB utilities and certificate tools on the client side.






[jira] [Assigned] (HDDS-2999) Move server-related shared utilities from common to framework

2020-02-11 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek reassigned HDDS-2999:
-

Assignee: Marton Elek

> Move server-related shared utilities from common to framework
> -
>
> Key: HDDS-2999
> URL: https://issues.apache.org/jira/browse/HDDS-2999
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>
> hdds-common project is shared between all the client and server projects. 
> hdds-server-framework project is shared between all the server side services.
> To reduce unnecessary dependencies (to Hadoop, for example) we can move all 
> the server-side related classes (eg. rocksdb layer, certificate tools) to the 
> framework from the common.
> We don't need the rocksdb utilities and certificate tools on the client side.






[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #543: HDDS-2997. Support github comment based commands

2020-02-11 Thread GitBox
adoroszlai commented on a change in pull request #543: HDDS-2997. Support 
github comment based commands
URL: https://github.com/apache/hadoop-ozone/pull/543#discussion_r377630247
 
 

 ##
 File path: .github/comment-commands/pending.sh
 ##
 @@ -0,0 +1,31 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#doc: Add a REQUESTED_CHANGE type review to mark issue non-mergeable: 
`/pending `
+# shellcheck disable=SC2124
+MESSAGE="Marking this issue as un-mergeable as requested.
+
+Please use \`/ready\` comment when it's resolved.
+
+> $@"
+
+URL="$(jq -r '.issue.pull_request.url' "$GITHUB_EVENT_PATH")/reviews"
+set +x #GITHUB_TOKEN
 
 Review comment:
   Is `set +x` needed for safety in the other scripts, or is this one special?
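   For context, a minimal sketch of what `set +x` protects against (the values below are placeholders, not real secrets): with xtrace on, every expanded command -- including assignments -- is echoed to stderr and would land in the CI log.

```shell
demo() {
  set -x
  PUBLIC="ok"                  # traced: appears in the log as '+ PUBLIC=ok'
  set +x                       # stop tracing before touching the secret
  SECRET="placeholder-token"   # not traced, so never written to the log
  echo "done"
}
demo >/dev/null 2>trace.log
grep -q 'PUBLIC=ok' trace.log && echo "public assignment was traced"
grep -q 'SECRET' trace.log || echo "secret never appeared in the trace"
```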





[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #543: HDDS-2997. Support github comment based commands

2020-02-11 Thread GitBox
adoroszlai commented on a change in pull request #543: HDDS-2997. Support 
github comment based commands
URL: https://github.com/apache/hadoop-ozone/pull/543#discussion_r377612873
 
 

 ##
 File path: .github/comment-commands/help.sh
 ##
 @@ -0,0 +1,27 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#doc: Show all the available comment commands
+DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null 2>&1 && pwd)"
+echo "Available commands:"
+DOCTAG="#"
+DOCTAG="${DOCTAG}doc"
+for command in "$DIR"/*.sh; do
+  COMMAND_NAME="$(basename "$command" | sed 's/.sh//g')"
 
 Review comment:
   I think the regex can be a bit more strict (so that names like `flush` or 
`push` are handled correctly, just in case):
   
   ```suggestion
 COMMAND_NAME="$(basename "$command" | sed 's/\.sh$//')"
   ```
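   To illustrate why the anchored form matters: in the unanchored pattern the `.` is a wildcard and `/g` substitutes every match, so names containing `sh` get mangled:

```shell
for name in flush.sh push.sh retest.sh; do
  loose=$(echo "$name" | sed 's/.sh//g')     # '.' matches any char, replaces everywhere
  strict=$(echo "$name" | sed 's/\.sh$//')   # literal ".sh", suffix only
  echo "$name -> loose:$loose strict:$strict"
done
# flush.sh -> loose:fl strict:flush
# push.sh -> loose:p strict:push
# retest.sh -> loose:retest strict:retest
```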





[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #543: HDDS-2997. Support github comment based commands

2020-02-11 Thread GitBox
adoroszlai commented on a change in pull request #543: HDDS-2997. Support 
github comment based commands
URL: https://github.com/apache/hadoop-ozone/pull/543#discussion_r377630026
 
 

 ##
 File path: .github/comment-commands/retest.sh
 ##
 @@ -0,0 +1,30 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#doc: add new empty commit to trigger new CI build
+
+PR_URL=$(jq -r '.issue.pull_request.url' "$GITHUB_EVENT_PATH")
+read -r REPO_URL BRANCH <<<"$(curl "$PR_URL" | jq -r '.head.repo.ssh_url + " " + .head.ref')"
 
 Review comment:
   I tried `/retest` in the PR in your repo, but it 
[failed](https://github.com/elek/hadoop-ozone/pull/9#issuecomment-584615406).  
Just guessing: maybe `-s` parameter for `curl` is missing and the extra output 
trips up `jq`?
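   For reference, the `jq` extraction itself works as intended when it receives clean JSON; a sketch with a made-up event payload (the field names follow the GitHub API shape used in `retest.sh`, the values are invented):

```shell
cat > event.json <<'EOF'
{"head": {"repo": {"ssh_url": "git@github.com:example/repo.git"}, "ref": "feature-x"}}
EOF
read -r REPO_URL BRANCH <<<"$(jq -r '.head.repo.ssh_url + " " + .head.ref' event.json)"
echo "repo=$REPO_URL branch=$BRANCH"
```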





[jira] [Assigned] (HDDS-2997) Support github comment based commands

2020-02-11 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai reassigned HDDS-2997:
--

Assignee: Marton Elek

> Support github comment based commands
> -
>
> Key: HDDS-2997
> URL: https://issues.apache.org/jira/browse/HDDS-2997
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Before we started to use GitHub Actions we had the opportunity to use some 
> "commands" in GitHub comments. For example, when a `/label xxx` comment was 
> added to a PR, a bot added the label (by default only the committers can 
> use labels, but with this approach it was possible for everyone).
>  
> Since the move to GitHub Actions I have got multiple questions about 
> re-triggering the tests. Even if it's possible to do by pushing an empty 
> commit (only by the owner or a committer), I think it would be better to restore 
> the support for comment commands.
>  
> This patch follows a very simple approach: the available commands are stored 
> in a separate subdirectory as shell scripts, and they are called by a 
> lightweight wrapper.






[GitHub] [hadoop-ozone] elek opened a new pull request #543: HDDS-2997. Support github comment based commands

2020-02-11 Thread GitBox
elek opened a new pull request #543: HDDS-2997. Support github comment based 
commands
URL: https://github.com/apache/hadoop-ozone/pull/543
 
 
   ## What changes were proposed in this pull request?
   
   Before we started to use GitHub Actions we had the opportunity to use some 
"commands" in GitHub comments. For example, when a `/label xxx` comment was 
added to a PR, a bot added the label (by default only the committers can use 
labels, but with this approach it was possible for everyone).
   
   Since the move to GitHub Actions I have got multiple questions about 
re-triggering the tests. Even if it's possible to do by pushing an empty 
commit (only by the owner or a committer), I think it would be better to restore 
the support for comment commands.
   
   This patch follows a very simple approach: the available commands are stored 
in a separate subdirectory as shell scripts, and they are called by a 
lightweight wrapper.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2997
   
   ## How was this patch tested?
   
   As it only works once merged to master, I tested it in my own fork:
   
   Feel free to use this example PR to test it:
   
   https://github.com/elek/hadoop-ozone/pull/9





[jira] [Updated] (HDDS-2997) Support github comment based commands

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2997:
-
Labels: pull-request-available  (was: )

> Support github comment based commands
> -
>
> Key: HDDS-2997
> URL: https://issues.apache.org/jira/browse/HDDS-2997
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Priority: Major
>  Labels: pull-request-available
>
> Before we started to use GitHub Actions we had the opportunity to use some 
> "commands" in GitHub comments. For example, when a `/label xxx` comment was 
> added to a PR, a bot added the label (by default only the committers can 
> use labels, but with this approach it was possible for everyone).
>  
> Since the move to GitHub Actions I have got multiple questions about 
> re-triggering the tests. Even if it's possible to do by pushing an empty 
> commit (only by the owner or a committer), I think it would be better to restore 
> the support for comment commands.
>  
> This patch follows a very simple approach: the available commands are stored 
> in a separate subdirectory as shell scripts, and they are called by a 
> lightweight wrapper.






[jira] [Commented] (HDDS-2031) Choose datanode for pipeline creation based on network topology

2020-02-11 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034379#comment-17034379
 ] 

Stephen O'Donnell commented on HDDS-2031:
-

[~Sammi] I have been looking into the Network Topology feature to evaluate 
whether it is complete. Is this change intended to go into the 
RatisPipelineProvider? Currently it does not look as though it considers 
NetworkTopology when assigning new pipelines. The code that assigns a pipeline, 
I think, is:

{code}
List<DatanodeDetails> dns =
    nodeManager.getNodes(NodeState.HEALTHY)
        .parallelStream()
        .filter(dn -> !dnsUsed.contains(dn))
        .limit(factor.getNumber())
        .collect(Collectors.toList());
{code}

As you can see, it simply picks unused nodes at random. Please correct my 
understanding if I have got this wrong.

If this is a key part of the implementation, are you planning to work on it 
soon?

> Choose datanode for pipeline creation based on network topology
> ---
>
> Key: HDDS-2031
> URL: https://issues.apache.org/jira/browse/HDDS-2031
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>
> There are regular heartbeats between datanodes in a pipeline. Choose 
> datanodes based on network topology, to guarantee data reliability and reduce 
> heartbeat network traffic latency.






[GitHub] [hadoop-ozone] elek commented on issue #449: HDDS-2891. Apache NiFi PutFile processor is failing with secure Ozone S3G

2020-02-11 Thread GitBox
elek commented on issue #449: HDDS-2891. Apache NiFi PutFile processor is 
failing with secure Ozone S3G
URL: https://github.com/apache/hadoop-ozone/pull/449#issuecomment-584598595
 
 
   @bharatviswa504 @arp7 Any more comments? Can we commit it, please?





[jira] [Updated] (HDDS-2974) Create Freon tests to test isolated Ratis components

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2974:
-
Labels: pull-request-available  (was: )

> Create Freon tests to test isolated Ratis components
> 
>
> Key: HDDS-2974
> URL: https://issues.apache.org/jira/browse/HDDS-2974
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Priority: Major
>  Labels: pull-request-available
>
> We have a specific Freon (=load generator) test to test one specific, 
> working, Ratis pipeline (3 datanodes): DatanodeChunkGenerator
> It sends Ozone GRPC requests to the datanode with WriteChunk requests.
> But we have no easy way to test just *one* the *running* Ratis component.
> In this Jira I propose to create two tests.
> *FollowerAppendLogEntryGenerator*
> If Datanode can be started without registration it contains an empty Ratis 
> server instance.
> Freon test can initialize a new group and request vote with behaving like a 
> real LEADER. It can send real appendLogEntry requests (the Ratis HB messages) 
> with GRPC
> With this approach we can force the datanode to be a FOLLOWER and analyze and 
> profile the behaviour.
> *LeaderAppendLogEntryGenerator (Will be added in a next JIRA)*
> This is the opposite side, we need fake followers to test one leader. It 
> requires to patch Ratis to immediately return with a fake answer instead of 
> sending it to the followers.
> If this patch is in place, we can start the freon test, which can configure 
> the group in the leader and send ratis client messages to the leader.






[GitHub] [hadoop-ozone] elek opened a new pull request #542: HDDS-2974. Create Freon tests to test isolated Ratis components

2020-02-11 Thread GitBox
elek opened a new pull request #542: HDDS-2974. Create Freon tests to test 
isolated Ratis components
URL: https://github.com/apache/hadoop-ozone/pull/542
 
 
   ## What changes were proposed in this pull request?
   
   We have a specific Freon (= load generator) test for one specific, working 
Ratis pipeline (3 datanodes): `DatanodeChunkGenerator`
   
   It sends Ozone GRPC requests to the datanode with `WriteChunk` requests.
   
   But we have no easy way to test just one running Ratis component.
   
   In this Jira I propose to create tests for isolated Ratis components. There 
are two possible components: follower and leader. In this patch I introduce a 
follower load tester.
   
   FollowerAppendLogEntryGenerator
   
   When the Datanode is started without registration, it contains an empty Ratis 
server instance.
   
   The Freon test can initialize a new group and request votes by behaving like a 
real LEADER. It can send real appendLogEntry requests (the Ratis heartbeat 
messages) over GRPC.
   
   With this approach we can force the datanode to be a FOLLOWER and analyze 
and profile the behavior.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2974
   
   ## How was this patch tested?
   
   Tested in Kubernetes with [multiple, separated load 
tests](https://hackmd.io/fzbXUQ-xTTmnGDL7fufZrA), but it also can be tested 
locally:
   
   **1.** Modify the configuration of the simple ozone cluster definition at 
`compose/ozone`. Add the following line to the `docker-config`:
   
   ```
   OZONE_DATANODE_STANDALONE_TEST=follower
   OZONE-SITE.XML_dfs.ratis.leader.election.minimum.timeout.duration=2d
   ```
   
   **2.** Start one single datanode:
   
   ```
   docker-compose down
   docker-compose up -d datanode
   ```
   **3.** Find the UUID generated for the datanode:
   
   ```
   docker-compose logs datanode | grep "GrpcService started"
   datanode_1  | 2020-02-11 10:32:53,549 [main] INFO server.GrpcService: 
5ddf8cba-618f-4af8-ba3e-8c2c48b01ae7: GrpcService started, listening on 
0.0.0.0/0.0.0.0:9858
   ```
   
   In this case the UUID is: `5ddf8cba-618f-4af8-ba3e-8c2c48b01ae7`
   
   **4.** Start freon test with the UUID
   
   ```
   docker-compose exec datanode ozone freon falg --thread=1 --batching=1 
--size=1024 --number-of-tests=1000  -r 5ddf8cba-618f-4af8-ba3e-8c2c48b01ae7
   ```
   





[jira] [Updated] (HDDS-2974) Create Freon tests to test isolated Ratis components

2020-02-11 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-2974:
--
Description: 
We have a specific Freon (= load generator) test for one specific, working 
Ratis pipeline (3 datanodes): DatanodeChunkGenerator

It sends Ozone GRPC requests to the datanode with WriteChunk requests.

But we have no easy way to test just *one* *running* Ratis component.

In this Jira I propose to create two tests.

*FollowerAppendLogEntryGenerator*

When the Datanode is started without registration, it contains an empty Ratis 
server instance.

The Freon test can initialize a new group and request votes by behaving like a 
real LEADER. It can send real appendLogEntry requests (the Ratis heartbeat 
messages) over GRPC.

With this approach we can force the datanode to be a FOLLOWER and analyze and 
profile the behaviour.

*LeaderAppendLogEntryGenerator (Will be added in a next JIRA)*

This is the opposite side: we need fake followers to test one leader. It 
requires patching Ratis to return immediately with a fake answer instead of 
sending the request to the followers.

Once that patch is in place, we can start the Freon test, which can configure the 
group in the leader and send Ratis client messages to the leader.

  was:
We have a specific Freon (=load generator) test to test one specific, working, 
Ratis pipeline (3 datanodes): DatanodeChunkGenerator

It sends Ozone GRPC requests to the datanode with WriteChunk requests.

But we have no easy way to test just *one* the *running* Ratis component.

In this Jira I propose to create two tests.

*FollowerAppendLogEntryGenerator*

If Datanode can be started without registration it contains an empty Ratis 
server instance.

Freon test can initialize a new group and request vote with behaving like a 
real LEADER. It can send real appendLogEntry requests (the Ratis HB messages) 
with GRPC

With this approach we can force the datanode to be a FOLLOWER and analyze and 
profile the behaviour.

*LeaderAppendLogEntryGenerator*

This is the opposite side, we need fake followers to test one leader. It 
requires to patch Ratis to immediately return with a fake answer instead of 
sending it to the followers.

If this patch is in place, we can start the freon test, which can configure the 
group in the leader and send ratis client messages to the leader.


> Create Freon tests to test isolated Ratis components
> 
>
> Key: HDDS-2974
> URL: https://issues.apache.org/jira/browse/HDDS-2974
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Priority: Major
>
> We have a specific Freon (=load generator) test to test one specific, 
> working, Ratis pipeline (3 datanodes): DatanodeChunkGenerator
> It sends Ozone GRPC requests to the datanode with WriteChunk requests.
> But we have no easy way to test just *one* the *running* Ratis component.
> In this Jira I propose to create two tests.
> *FollowerAppendLogEntryGenerator*
> If Datanode can be started without registration it contains an empty Ratis 
> server instance.
> Freon test can initialize a new group and request vote with behaving like a 
> real LEADER. It can send real appendLogEntry requests (the Ratis HB messages) 
> with GRPC
> With this approach we can force the datanode to be a FOLLOWER and analyze and 
> profile the behaviour.
> *LeaderAppendLogEntryGenerator (Will be added in a next JIRA)*
> This is the opposite side, we need fake followers to test one leader. It 
> requires to patch Ratis to immediately return with a fake answer instead of 
> sending it to the followers.
> If this patch is in place, we can start the freon test, which can configure 
> the group in the leader and send ratis client messages to the leader.






[GitHub] [hadoop-ozone] supratimdeka commented on a change in pull request #498: HDDS-2940. mkdir : create key table entries for intermediate directories in the path

2020-02-11 Thread GitBox
supratimdeka commented on a change in pull request #498: HDDS-2940. mkdir : 
create key table entries for intermediate directories in the path
URL: https://github.com/apache/hadoop-ozone/pull/498#discussion_r377561961
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMDirectoryCreateRequest.java
 ##
 @@ -159,8 +172,24 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 FILE_ALREADY_EXISTS);
   } else if (omDirectoryResult == DIRECTORY_EXISTS_IN_GIVENPATH ||
   omDirectoryResult == NONE) {
-dirKeyInfo = createDirectoryKeyInfo(ozoneManager, omBucketInfo,
-volumeName, bucketName, keyName, keyArgs, transactionLogIndex);
+dirKeyInfo = createDirectoryKeyInfo(ozoneManager, keyName, keyArgs,
+baseObjId, transactionLogIndex);
+
+for (String missingKey : missingParents) {
+  LOG.debug("missing parent {}", missingKey);
+  // what about keyArgs for parent directories? TODO
+  OmKeyInfo parentKeyInfo = createDirectoryKeyInfoNoACL(ozoneManager,
+  missingKey, keyArgs, baseObjId + objectCount,
+  transactionLogIndex);
 
 Review comment:
   sure, will use the same object id scheme for volume and bucket.
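The diff above writes a key-table entry for every missing parent of the new directory, giving each one an object id derived from `baseObjId + objectCount`. The parent enumeration itself is simple; a hedged Python sketch (the real OM code is Java, and `missing_parents` plus the trailing-`/` directory convention here are illustrative):

```python
def missing_parents(key_name, existing_keys):
    """Return the parent directory keys of key_name that are not yet in
    the key table, deepest first. Directory keys carry a trailing '/' in
    this sketch, mirroring (but not reproducing) the OM convention."""
    parents = []
    path = key_name.rstrip("/")
    while "/" in path:
        path = path.rsplit("/", 1)[0]
        dir_key = path + "/"
        if dir_key in existing_keys:
            break  # everything above an existing directory exists too
        parents.append(dir_key)
    return parents


# Only 'a/' exists, so two intermediate directories must be created;
# each would get its own object id, as in the diff above.
assert missing_parents("a/b/c/file", {"a/"}) == ["a/b/c/", "a/b/"]
assert missing_parents("file", set()) == []
```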


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[GitHub] [hadoop-ozone] elek commented on issue #541: Pr test2

2020-02-11 Thread GitBox
elek commented on issue #541: Pr test2
URL: https://github.com/apache/hadoop-ozone/pull/541#issuecomment-584545980
 
 
   wrong repo





[GitHub] [hadoop-ozone] elek closed pull request #541: Pr test2

2020-02-11 Thread GitBox
elek closed pull request #541: Pr test2
URL: https://github.com/apache/hadoop-ozone/pull/541
 
 
   





[GitHub] [hadoop-ozone] elek opened a new pull request #541: Pr test2

2020-02-11 Thread GitBox
elek opened a new pull request #541: Pr test2
URL: https://github.com/apache/hadoop-ozone/pull/541
 
 
   ## What changes were proposed in this pull request?
   
   (Please fill in changes proposed in this fix)
   
   ## What is the link to the Apache JIRA
   
   (Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HDDS-. Fix a typo in YYY.)
   
   Please replace this section with the link to the Apache JIRA)
   
   ## How was this patch tested?
   
   (Please explain how this patch was tested. Ex: unit tests, manual tests)
   (If this patch involves UI changes, please attach a screen-shot; otherwise, 
remove this)
   





[jira] [Created] (HDDS-2998) Improve test coverage of audit logging

2020-02-11 Thread Istvan Fajth (Jira)
Istvan Fajth created HDDS-2998:
--

 Summary: Improve test coverage of audit logging
 Key: HDDS-2998
 URL: https://issues.apache.org/jira/browse/HDDS-2998
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Istvan Fajth


Review the audit logging tests and add assertions about the different audit log 
contents we expect to see in the audit log.
A good place to start is TestOMKeyRequest, where we create an audit logger 
mock; through that mock the assertions can most likely be done for all the 
requests.

This is a follow up on HDDS-2946.
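The Java tests would presumably use a Mockito mock for the audit logger; the same assertion pattern, checking the content of the logged entry rather than the mere fact of logging, looks like this in Python (`create_key` and the entry fields are hypothetical, not the OM API):

```python
from unittest.mock import Mock

def create_key(audit_logger, volume, bucket, key):
    # Hypothetical request handler that emits one audit entry per operation.
    audit_logger.log_write(
        {"op": "CREATE_KEY", "volume": volume, "bucket": bucket, "key": key})

audit = Mock()
create_key(audit, "vol1", "bucket1", "key1")

# The assertion style HDDS-2998 asks for: verify the *content* of the
# audit entry, not merely that something was logged.
audit.log_write.assert_called_once()
(entry,), _ = audit.log_write.call_args
assert entry["op"] == "CREATE_KEY"
assert entry["volume"] == "vol1"
assert entry["key"] == "key1"
```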






[jira] [Assigned] (HDDS-2998) Improve test coverage of audit logging

2020-02-11 Thread Istvan Fajth (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Fajth reassigned HDDS-2998:
--

Assignee: Istvan Fajth

> Improve test coverage of audit logging
> --
>
> Key: HDDS-2998
> URL: https://issues.apache.org/jira/browse/HDDS-2998
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Istvan Fajth
>Assignee: Istvan Fajth
>Priority: Major
>  Labels: newbie++
>
> Review the audit logging tests and add assertions about the different audit 
> log contents we expect to see in the audit log.
> A good place to start is TestOMKeyRequest, where we create an audit logger 
> mock; through that mock the assertions can most likely be done for all the 
> requests.
> This is a follow up on HDDS-2946.






[jira] [Commented] (HDDS-2443) Python client/interface for Ozone

2020-02-11 Thread mingchao zhao (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034270#comment-17034270
 ] 

mingchao zhao commented on HDDS-2443:
-

[^pyarrow_ozone_test.docx]   Here are the results of the pyarrow read-write test

> Python client/interface for Ozone
> -
>
> Key: HDDS-2443
> URL: https://issues.apache.org/jira/browse/HDDS-2443
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Ozone Client
>Reporter: Li Cheng
>Priority: Major
> Attachments: Ozone with pyarrow.html, OzoneS3.py, 
> pyarrow_ozone_test.docx
>
>
> This Jira will be used to track development for python client/interface of 
> Ozone.
> Original ideas: item#25 in 
> [https://cwiki.apache.org/confluence/display/HADOOP/Ozone+project+ideas+for+new+contributors]
> Ozone Client(Python) for Data Science Notebook such as Jupyter.
>  # Size: Large
>  # PyArrow: [https://pypi.org/project/pyarrow/]
>  # Python -> libhdfs HDFS JNI library (HDFS, S3, ...) -> Java client API 
> (Impala uses libhdfs)
> Path to try:
>  # s3 interface: Ozone s3 gateway(already supported) + AWS python client 
> (boto3)
>  # python native RPC
>  # pyarrow + libhdfs, which use the Java client under the hood.
>  # python + C interface of go / rust ozone library. I created POC go / rust 
> clients earlier which can be improved if the libhdfs interface is not good 
> enough. [By [~elek]]






[jira] [Created] (HDDS-2997) Support github comment based commands

2020-02-11 Thread Marton Elek (Jira)
Marton Elek created HDDS-2997:
-

 Summary: Support github comment based commands
 Key: HDDS-2997
 URL: https://issues.apache.org/jira/browse/HDDS-2997
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Marton Elek


Before we started to use github actions we had the opportunity to use some 
"commands" in github comments. For example, when a `/label xxx` comment was 
added to a PR, a bot added the label (by default only the committers can use 
labels, but with this approach it was possible for everyone).

Since the move to github actions I have received multiple questions about 
re-triggering the tests. Even if it's possible to do so by pushing an empty 
commit (only by the owner or a committer), I think it would be better to 
restore the support for comment commands.

This patch follows a very simple approach: the available commands are stored 
in a separate subdirectory as shell scripts, and they are called by a 
lightweight wrapper.
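A minimal sketch of that wrapper idea, mapping the first token of a `/command arg...` PR comment to a script in a commands directory. The dispatcher below is illustrative Python (the actual patch uses shell scripts, and the directory layout and script names here are assumptions):

```python
import os
import shlex
import subprocess
import tempfile

def dispatch(comment, commands_dir):
    """Run the script matching a '/command arg...' PR comment, if any."""
    if not comment.startswith("/"):
        return None                      # a normal comment, not a command
    parts = shlex.split(comment[1:])
    if not parts:
        return None
    script = os.path.join(commands_dir, parts[0] + ".sh")
    if not os.path.isfile(script):
        return None                      # unknown commands are ignored
    result = subprocess.run(["sh", script, *parts[1:]],
                            capture_output=True, text=True)
    return result.stdout

# Demo: a throwaway commands directory with a single 'label' command.
commands = tempfile.mkdtemp()
with open(os.path.join(commands, "label.sh"), "w") as f:
    f.write('echo "adding label: $1"\n')

assert dispatch("/label ozone-recon", commands) == "adding label: ozone-recon\n"
assert dispatch("just a normal comment", commands) is None
```

Keeping each command as a standalone script means adding a new command is just adding a file, with no change to the wrapper itself.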


