[jira] [Commented] (HDDS-2217) Remove log4j and audit configuration from the docker-config files

2019-10-01 Thread Elek, Marton (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16941872#comment-16941872
 ] 

Elek, Marton commented on HDDS-2217:


When I checked last time, I found that the audit log settings in the docker-config 
files are not used at all, as we use the predefined file names from etc/hadoop (I am 
not sure if I remember correctly, but this is my guess...)

> Remove log4j and audit configuration from the docker-config files
> -
>
> Key: HDDS-2217
> URL: https://issues.apache.org/jira/browse/HDDS-2217
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: docker
>Reporter: Elek, Marton
>Priority: Major
>  Labels: newbe
>
> Log4j configuration lines are added to the docker-config under 
> hadoop-ozone/dist/src/main/compose/...
> Mainly to make it easier to reconfigure the log level of any components.
> As we already have a "ozone insight" tool which can help us to modify the log 
> level at runtime we don't need these lines any more.
> {code:java}
> LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout
> LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender
> LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
> LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd 
> HH:mm:ss} %-5p %c{1}:%L - %m%n
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.grpc.client.GrpcClientProtocolClient=WARN
> LOG4J.PROPERTIES_log4j.logger.http.requests.s3gateway=INFO,s3gatewayrequestlog
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.Filename=/tmp/jetty-s3gateway-yyyy_mm_dd.log
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.RetainDays=3 {code}
> We can remove them together with the audit log entries, as we already have a 
> default log4j.properties / audit log4j2 config.
> After the removal, the clusters should be tested: the Ozone CLI should not print 
> any confusing log messages (such as NativeLib is missing or anything else). 
> AFAIK they are already turned off in the etc/hadoop log4j.properties.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2073) Make SCMSecurityProtocol message based

2019-09-25 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2073:
---
Status: Patch Available  (was: Open)

> Make SCMSecurityProtocol message based
> --
>
> Key: HDDS-2073
> URL: https://issues.apache.org/jira/browse/HDDS-2073
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> We started to use a generic pattern where we have only one method in the grpc 
> service and the main message contains all the required common information 
> (e.g. tracing).
> SCMSecurityProtocol.proto is not yet migrated to this approach. To make our 
> generic debug tool more powerful and to unify our protocols, I suggest 
> transforming this protocol as well.
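
For illustration, the pattern reduces each protocol to a single RPC entry point that 
routes on a command-type field. A minimal, self-contained Java sketch; all names here 
(MessageBasedDispatchExample, Request, submitRequest) are illustrative, not the actual 
generated protobuf classes:

{code:java}
public class MessageBasedDispatchExample {

  enum Type { GET_CERTIFICATE, GET_CA_CERTIFICATE }

  static final class Request {
    final Type cmdType; // the "type" field that drives the routing
    Request(Type cmdType) { this.cmdType = cmdType; }
  }

  // Single entry point: common concerns (tracing, logging, metrics) can be
  // handled once here, before routing on the command type.
  static String submitRequest(Request request) {
    switch (request.cmdType) {
      case GET_CERTIFICATE:
        return "certificate-response";
      case GET_CA_CERTIFICATE:
        return "ca-certificate-response";
      default:
        throw new IllegalArgumentException("Unknown type: " + request.cmdType);
    }
  }

  public static void main(String[] args) {
    System.out.println(submitRequest(new Request(Type.GET_CERTIFICATE)));
  }
}
{code}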



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2178) Support Ozone insight tool in secure cluster

2019-09-25 Thread Elek, Marton (Jira)
Elek, Marton created HDDS-2178:
--

 Summary: Support Ozone insight tool in secure cluster
 Key: HDDS-2178
 URL: https://issues.apache.org/jira/browse/HDDS-2178
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Elek, Marton


SPNEGO should be initialized properly for the HTTP requests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2073) Make SCMSecurityProtocol message based

2019-09-25 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-2073:
--

Assignee: Elek, Marton

> Make SCMSecurityProtocol message based
> --
>
> Key: HDDS-2073
> URL: https://issues.apache.org/jira/browse/HDDS-2073
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> We started to use a generic pattern where we have only one method in the grpc 
> service and the main message contains all the required common information 
> (e.g. tracing).
> SCMSecurityProtocol.proto is not yet migrated to this approach. To make our 
> generic debug tool more powerful and to unify our protocols, I suggest 
> transforming this protocol as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2165) Freon fails if bucket does not exist

2019-09-25 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2165:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Freon fails if bucket does not exist
> -
>
> Key: HDDS-2165
> URL: https://issues.apache.org/jira/browse/HDDS-2165
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Tools
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {code:title=ozone freon ockg}
> Bucket not found
> ...
> Failures: 0
> Successful executions: 0
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1738) Add nullable annotation for OMResponse classes

2019-09-25 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1738:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add nullable annotation for OMResponse classes
> --
>
> Key: HDDS-1738
> URL: https://issues.apache.org/jira/browse/HDDS-1738
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This is to address [~arp]'s comment:
> A future improvement unrelated to your patch - replace null with Optional, or 
> at least add a @Nullable annotation on the parameter.
> Add @Nullable for all OMResponse fields where the fields can be null.
>  
>   
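
A minimal sketch of the suggested annotation, assuming the javax.annotation.Nullable 
annotation (JSR-305) is on the classpath; the class and field names below are 
hypothetical, not actual OMResponse fields:

{code:java}
import javax.annotation.Nullable;

public class ExampleOmResponse {

  // May legitimately be absent for some command types, hence @Nullable.
  @Nullable
  private final String keyName;

  public ExampleOmResponse(@Nullable String keyName) {
    this.keyName = keyName;
  }

  @Nullable
  public String getKeyName() {
    return keyName;
  }
}
{code}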



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2171) Dangling links in test report due to incompatible realpath

2019-09-25 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2171:
---
   Fix Version/s: 0.5.0
Target Version/s: 0.5.0
  Resolution: Fixed
  Status: Resolved  (was: Patch Available)

> Dangling links in test report due to incompatible realpath
> --
>
> Key: HDDS-2171
> URL: https://issues.apache.org/jira/browse/HDDS-2171
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Test summaries point to wrong locations, e.g.:
> {code:title=https://raw.githubusercontent.com/elek/ozone-ci/master/trunk/trunk-nightly-20190924-mj2km/integration/summary.md}
>  * 
> [org.apache.hadoop.ozone.scm.node.TestQueryNode](/tmp/log/trunk/trunk-nightly-20190924-mj2km/integration/workdir/hadoop-ozone/integration-test/org.apache.hadoop.ozone.scm.node.TestQueryNode.txt)
>  
> ([output](/tmp/log/trunk/trunk-nightly-20190924-mj2km/integration/workdir/hadoop-ozone/integration-test/org.apache.hadoop.ozone.scm.node.TestQueryNode-output.txt/))
> {code}
> shouldn't include {{/workdir}}, nor {{/tmp/log/}}.
> The root cause is that Busybox {{realpath}} does not accept options; it simply 
> returns the absolute path:
> {code:title=elek/ozone-build:20190825-1}
> $ cd /etc
> $ realpath --relative-to=$(pwd) motd
> realpath: --relative-to=/etc: No such file or directory
> /etc/motd
> {code}
> It worked previously because the docker image 
> [was|https://github.com/elek/argo-ozone/commit/bad4b6747fa06c227dfcbff1f098f8d9c8179b79]
>  based on a more complete Linux.
> {code:title=elek/ozone-build:test}
> $ cd /etc
> $ realpath --relative-to=$(pwd) motd
> motd
> {code}
> CC [~elek]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2072) Make StorageContainerLocationProtocolService message based

2019-09-25 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-2072:
--

Assignee: Elek, Marton

> Make StorageContainerLocationProtocolService message based
> --
>
> Key: HDDS-2072
> URL: https://issues.apache.org/jira/browse/HDDS-2072
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We started to use a generic pattern where we have only one method in the grpc 
> service and the main message contains all the required common information 
> (e.g. tracing).
> StorageContainerLocationProtocolService is not yet migrated to this approach. 
> To make our generic debug tool more powerful and to unify our protocols, I 
> suggest transforming this protocol as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2167) Hadoop31-mr acceptance test is failing due to the shading

2019-09-24 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2167:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Hadoop31-mr acceptance test is failing due to the shading
> -
>
> Key: HDDS-2167
> URL: https://issues.apache.org/jira/browse/HDDS-2167
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> From the daily build:
> {code}
>   Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/hadoop/ozone/shaded/org/apache/http/client/utils/URIBuilder
>   at 
> org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.initialize(BasicOzoneFileSystem.java:138)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination.getRemoteDestination(CommandWithDestination.java:195)
>   at 
> org.apache.hadoop.fs.shell.CopyCommands$Put.processOptions(CopyCommands.java:259)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.ozone.shaded.org.apache.http.client.utils.URIBuilder
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 15 more
> {code}
> It can be reproduced locally by executing the tests:
> {code}
> cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone-mr/hadoop31
> ./test.sh
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2072) Make StorageContainerLocationProtocolService message based

2019-09-24 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2072:
---
Status: Patch Available  (was: Open)

> Make StorageContainerLocationProtocolService message based
> --
>
> Key: HDDS-2072
> URL: https://issues.apache.org/jira/browse/HDDS-2072
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We started to use a generic pattern where we have only one method in the grpc 
> service and the main message contains all the required common information 
> (e.g. tracing).
> StorageContainerLocationProtocolService is not yet migrated to this approach. 
> To make our generic debug tool more powerful and to unify our protocols, I 
> suggest transforming this protocol as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2167) Hadoop31-mr acceptance test is failing due to the shading

2019-09-23 Thread Elek, Marton (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935982#comment-16935982
 ] 

Elek, Marton commented on HDDS-2167:


httpclient was excluded together with the hadoop dependencies, but we explicitly use 
it from BasicOzoneFileSystem. It's better to add it as a direct dependency, because 
in that case it will be properly shaded into the current jar.
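
For reference, relocating a dependency with the maven-shade-plugin typically looks 
like the snippet below; this is an illustrative sketch, not the exact ozonefs 
pom.xml:

{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <!-- Rewrites org.apache.http.* classes (and references to them)
           into the shaded namespace inside the fat jar. -->
      <relocation>
        <pattern>org.apache.http</pattern>
        <shadedPattern>org.apache.hadoop.ozone.shaded.org.apache.http</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
{code}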

 

> Hadoop31-mr acceptance test is failing due to the shading
> -
>
> Key: HDDS-2167
> URL: https://issues.apache.org/jira/browse/HDDS-2167
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> From the daily build:
> {code}
>   Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/hadoop/ozone/shaded/org/apache/http/client/utils/URIBuilder
>   at 
> org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.initialize(BasicOzoneFileSystem.java:138)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination.getRemoteDestination(CommandWithDestination.java:195)
>   at 
> org.apache.hadoop.fs.shell.CopyCommands$Put.processOptions(CopyCommands.java:259)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.ozone.shaded.org.apache.http.client.utils.URIBuilder
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 15 more
> {code}
> It can be reproduced locally by executing the tests:
> {code}
> cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone-mr/hadoop31
> ./test.sh
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2167) Hadoop31-mr acceptance test is failing due to the shading

2019-09-23 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2167:
---
Status: Patch Available  (was: Open)

> Hadoop31-mr acceptance test is failing due to the shading
> -
>
> Key: HDDS-2167
> URL: https://issues.apache.org/jira/browse/HDDS-2167
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> From the daily build:
> {code}
>   Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/hadoop/ozone/shaded/org/apache/http/client/utils/URIBuilder
>   at 
> org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.initialize(BasicOzoneFileSystem.java:138)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination.getRemoteDestination(CommandWithDestination.java:195)
>   at 
> org.apache.hadoop.fs.shell.CopyCommands$Put.processOptions(CopyCommands.java:259)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.ozone.shaded.org.apache.http.client.utils.URIBuilder
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 15 more
> {code}
> It can be reproduced locally by executing the tests:
> {code}
> cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone-mr/hadoop31
> ./test.sh
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2068) Make StorageContainerDatanodeProtocolService message based

2019-09-23 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-2068:
--

Assignee: Elek, Marton

> Make StorageContainerDatanodeProtocolService message based
> --
>
> Key: HDDS-2068
> URL: https://issues.apache.org/jira/browse/HDDS-2068
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We started to use a generic pattern where we have only one method in the grpc 
> service and the main message contains all the required common information 
> (e.g. tracing).
> StorageContainerDatanodeProtocolService is not yet migrated to this approach. 
> To make our generic debug tool more powerful and to unify our protocols, I 
> suggest transforming this protocol as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2167) Hadoop31-mr acceptance test is failing due to the shading

2019-09-23 Thread Elek, Marton (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935950#comment-16935950
 ] 

Elek, Marton commented on HDDS-2167:


/cc [~arp] + [~bharat]

> Hadoop31-mr acceptance test is failing due to the shading
> -
>
> Key: HDDS-2167
> URL: https://issues.apache.org/jira/browse/HDDS-2167
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> From the daily build:
> {code}
>   Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/hadoop/ozone/shaded/org/apache/http/client/utils/URIBuilder
>   at 
> org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.initialize(BasicOzoneFileSystem.java:138)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination.getRemoteDestination(CommandWithDestination.java:195)
>   at 
> org.apache.hadoop.fs.shell.CopyCommands$Put.processOptions(CopyCommands.java:259)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.ozone.shaded.org.apache.http.client.utils.URIBuilder
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 15 more
> {code}
> It can be reproduced locally by executing the tests:
> {code}
> cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone-mr/hadoop31
> ./test.sh
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2167) Hadoop31-mr acceptance test is failing due to the shading

2019-09-23 Thread Elek, Marton (Jira)
Elek, Marton created HDDS-2167:
--

 Summary: Hadoop31-mr acceptance test is failing due to the shading
 Key: HDDS-2167
 URL: https://issues.apache.org/jira/browse/HDDS-2167
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Elek, Marton
Assignee: Elek, Marton


From the daily build:

{code}
Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/hadoop/ozone/shaded/org/apache/http/client/utils/URIBuilder
at 
org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.initialize(BasicOzoneFileSystem.java:138)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.getRemoteDestination(CommandWithDestination.java:195)
at 
org.apache.hadoop.fs.shell.CopyCommands$Put.processOptions(CopyCommands.java:259)
at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
Caused by: java.lang.ClassNotFoundException: 
org.apache.hadoop.ozone.shaded.org.apache.http.client.utils.URIBuilder
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 15 more
{code}

It can be reproduced locally by executing the tests:

{code}
cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone-mr/hadoop31
./test.sh
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2166) Some RPC metrics are missing from SCM prometheus endpoint

2019-09-23 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2166:
---
Status: Patch Available  (was: Open)

> Some RPC metrics are missing from SCM prometheus endpoint
> -
>
> Key: HDDS-2166
> URL: https://issues.apache.org/jira/browse/HDDS-2166
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In Hadoop metrics it's possible to register multiple metrics with the same 
> name but with different tags. For example, each RPC server has its own metrics 
> instance in SCM.
> {code}
> "name" : 
> "Hadoop:service=StorageContainerManager,name=RpcActivityForPort9860",
> "name" : 
> "Hadoop:service=StorageContainerManager,name=RpcActivityForPort9863",
> {code}
> They are converted by PrometheusSink to a prometheus metric line with proper 
> name and tags. For example:
> {code}
> rpc_rpc_queue_time60s_num_ops{port="9860",servername="StorageContainerLocationProtocolService",context="rpc",hostname="72736061cbc5"}
>  0
> {code}
> The PrometheusSink uses a Map to cache all the recent values, but 
> unfortunately the key contains only the name (rpc_rpc_queue_time60s_num_ops 
> in our example) and not the tags (port=...).
> For this reason, if there are multiple metrics with the same name, only the 
> first one will be displayed.
> As a result, in SCM only the metrics of the first RPC server can be exported 
> to the prometheus endpoint. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2068) Make StorageContainerDatanodeProtocolService message based

2019-09-23 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2068:
---
Status: Patch Available  (was: Open)

> Make StorageContainerDatanodeProtocolService message based
> --
>
> Key: HDDS-2068
> URL: https://issues.apache.org/jira/browse/HDDS-2068
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We started to use a generic pattern where we have only one method in the grpc 
> service and the main message contains all the required common information 
> (eg. tracing).
> StorageContainerDatanodeProtocolService is not yet migrated to this approach. 
> To make our generic debug tool more powerful and unify our protocols I 
> suggest to transform this protocol as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2166) Some RPC metrics are missing from SCM prometheus endpoint

2019-09-23 Thread Elek, Marton (Jira)
Elek, Marton created HDDS-2166:
--

 Summary: Some RPC metrics are missing from SCM prometheus endpoint
 Key: HDDS-2166
 URL: https://issues.apache.org/jira/browse/HDDS-2166
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Elek, Marton
Assignee: Elek, Marton


In Hadoop metrics it's possible to register multiple metrics with the same name 
but with different tags. For example, each RPC server has its own metrics 
instance in SCM.

{code}
"name" : 
"Hadoop:service=StorageContainerManager,name=RpcActivityForPort9860",
"name" : 
"Hadoop:service=StorageContainerManager,name=RpcActivityForPort9863",
{code}

They are converted by PrometheusSink to a prometheus metric line with proper 
name and tags. For example:

{code}
rpc_rpc_queue_time60s_num_ops{port="9860",servername="StorageContainerLocationProtocolService",context="rpc",hostname="72736061cbc5"}
 0
{code}

The PrometheusSink uses a Map to cache all the recent values, but unfortunately 
the key contains only the name (rpc_rpc_queue_time60s_num_ops in our example) 
and not the tags (port=...).

For this reason, if there are multiple metrics with the same name, only the 
first one will be displayed.

As a result, in SCM only the metrics of the first RPC server can be exported to 
the prometheus endpoint. 
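
A minimal sketch of the kind of fix, under the assumption that the cache key can be 
rebuilt from the metric name plus its tags (cacheKey and MetricKeyExample are 
hypothetical names, not the real PrometheusSink code):

{code:java}
import java.util.Map;
import java.util.TreeMap;

public class MetricKeyExample {

  // Builds a cache key from the metric name plus its sorted tags, so that
  // rpc_rpc_queue_time60s_num_ops{port="9860"} and {port="9863"} no longer
  // collide on the name alone.
  static String cacheKey(String metricName, Map<String, String> tags) {
    StringBuilder key = new StringBuilder(metricName);
    for (Map.Entry<String, String> tag : new TreeMap<>(tags).entrySet()) {
      key.append(';').append(tag.getKey()).append('=').append(tag.getValue());
    }
    return key.toString();
  }

  public static void main(String[] args) {
    Map<String, String> tags = new TreeMap<>();
    tags.put("port", "9860");
    tags.put("servername", "StorageContainerLocationProtocolService");
    System.out.println(cacheKey("rpc_rpc_queue_time60s_num_ops", tags));
  }
}
{code}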




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2067) Create generic service facade with tracing/metrics/logging support

2019-09-23 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-2067:
--

Assignee: Elek, Marton

> Create generic service facade with tracing/metrics/logging support
> --
>
> Key: HDDS-2067
> URL: https://issues.apache.org/jira/browse/HDDS-2067
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We started to use a message-based gRPC approach. We have only one method, and 
> the requests are routed based on a "type" field in the proto message. 
> For example, in the OM protocol:
> {code}
> /**
>  The OM service that takes care of Ozone namespace.
> */
> service OzoneManagerService {
> // A client-to-OM RPC to send client requests to OM Ratis server
> rpc submitRequest(OMRequest)
>   returns(OMResponse);
> }
> {code}
> And 
> {code}
> message OMRequest {
>   required Type cmdType = 1; // Type of the command
> ...
> {code}
> This approach makes it possible to use the same code to process incoming 
> messages on the server side.
> The ScmBlockLocationProtocolServerSideTranslatorPB.send method contains the 
> logic for:
>  * Logging the request/response message (can be displayed with ozone insight)
>  * Updating metrics
>  * Handling OpenTracing context propagation
> These functions are generic. For example, 
> OzoneManagerProtocolServerSideTranslatorPB uses the same (or similar) code.
> The goal of this jira is to provide a generic utility and move the common 
> code for tracing/request logging/response logging/metrics calculation into a 
> common utility which can be used from all the server-side translators.
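
A hedged sketch of what such a facade could look like; all names here, including 
DispatchFacadeExample and ProtocolMessageMetrics, are illustrative, not the actual 
HDDS utility:

{code:java}
import java.util.function.Function;

public class DispatchFacadeExample {

  // Stand-in for a per-message-type metrics holder.
  interface ProtocolMessageMetrics {
    void increment(String type);
  }

  // Wraps any request handler with the shared logging/metrics/tracing steps,
  // so each ServerSideTranslatorPB no longer duplicates them.
  static <REQUEST, RESPONSE> RESPONSE dispatch(
      REQUEST request, String type, ProtocolMessageMetrics metrics,
      Function<REQUEST, RESPONSE> handler) {
    System.out.println("Request  [" + type + "]: " + request); // request logging
    metrics.increment(type);                                   // metrics update
    // (OpenTracing context propagation would be activated around this call.)
    RESPONSE response = handler.apply(request);
    System.out.println("Response [" + type + "]: " + response); // response logging
    return response;
  }

  public static void main(String[] args) {
    String response = dispatch("ping", "Echo",
        type -> System.out.println("metric++ for " + type),
        request -> request + "-pong");
    System.out.println(response);
  }
}
{code}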



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2067) Create generic service facade with tracing/metrics/logging support

2019-09-23 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2067:
---
Status: Patch Available  (was: Open)

> Create generic service facade with tracing/metrics/logging support
> --
>
> Key: HDDS-2067
> URL: https://issues.apache.org/jira/browse/HDDS-2067
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We started to use a message-based gRPC approach. We have only one method, and 
> the requests are routed based on a "type" field in the proto message. 
> For example, in the OM protocol:
> {code}
> /**
>  The OM service that takes care of Ozone namespace.
> */
> service OzoneManagerService {
> // A client-to-OM RPC to send client requests to OM Ratis server
> rpc submitRequest(OMRequest)
>   returns(OMResponse);
> }
> {code}
> And 
> {code}
> message OMRequest {
>   required Type cmdType = 1; // Type of the command
> ...
> {code}
> This approach makes it possible to use the same code to process incoming 
> messages on the server side.
> The ScmBlockLocationProtocolServerSideTranslatorPB.send method contains the 
> logic for:
>  * Logging the request/response message (can be displayed with ozone insight)
>  * Updating metrics
>  * Handling OpenTracing context propagation
> These functions are generic. For example, 
> OzoneManagerProtocolServerSideTranslatorPB uses the same (or similar) code.
> The goal of this jira is to provide a generic utility and move the common 
> code for tracing/request logging/response logging/metrics calculation into a 
> common utility which can be used from all the server-side translators.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2043) "VOLUME_NOT_FOUND" exception thrown while listing volumes

2019-09-19 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton resolved HDDS-2043.

Resolution: Duplicate

Tested and it worked well. HDDS-1926 fixed the same problem, IMHO.

> "VOLUME_NOT_FOUND" exception thrown while listing volumes
> -
>
> Key: HDDS-2043
> URL: https://issues.apache.org/jira/browse/HDDS-2043
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI, Ozone Manager
>Reporter: Nilotpal Nandi
>Assignee: Bharat Viswanadham
>Priority: Blocker
>
> The ozone list volume command throws an OMException:
> bin/ozone sh volume list --user root
>  VOLUME_NOT_FOUND org.apache.hadoop.ozone.om.exceptions.OMException: Volume 
> info not found for vol-test-putfile-1566902803
>  
> On enabling DEBUG logging, here is the console output:
>  
>  
> {noformat}
> bin/ozone sh volume create /n1 ; echo $?
> 2019-08-27 11:47:16 DEBUG ThriftSenderFactory:33 - Using the UDP Sender to 
> send spans to the agent.
> 2019-08-27 11:47:16 DEBUG SenderResolver:86 - Using sender UdpSender()
> 2019-08-27 11:47:16 DEBUG MutableMetricsFactory:43 - field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, 
> always=false, valueName=Time, about=, interval=10, type=DEFAULT, value=[Rate 
> of successful kerberos logins and latency (milliseconds)])
> 2019-08-27 11:47:16 DEBUG MutableMetricsFactory:43 - field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, 
> always=false, valueName=Time, about=, interval=10, type=DEFAULT, value=[Rate 
> of failed kerberos logins and latency (milliseconds)])
> 2019-08-27 11:47:16 DEBUG MutableMetricsFactory:43 - field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, 
> always=false, valueName=Time, about=, interval=10, type=DEFAULT, 
> value=[GetGroups])
> 2019-08-27 11:47:16 DEBUG MutableMetricsFactory:43 - field private 
> org.apache.hadoop.metrics2.lib.MutableGaugeLong 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal
>  with annotation 
> @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, always=false, 
> valueName=Time, about=, interval=10, type=DEFAULT, value=[Renewal failures 
> since startup])
> 2019-08-27 11:47:16 DEBUG MutableMetricsFactory:43 - field private 
> org.apache.hadoop.metrics2.lib.MutableGaugeInt 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures 
> with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, 
> always=false, valueName=Time, about=, interval=10, type=DEFAULT, 
> value=[Renewal failures since last successful login])
> 2019-08-27 11:47:16 DEBUG MetricsSystemImpl:231 - UgiMetrics, User and group 
> related metrics
> 2019-08-27 11:47:16 DEBUG SecurityUtil:124 - Setting 
> hadoop.security.token.service.use_ip to true
> 2019-08-27 11:47:16 DEBUG Shell:821 - setsid exited with exit code 0
> 2019-08-27 11:47:16 DEBUG Groups:449 - Creating new Groups object
> 2019-08-27 11:47:16 DEBUG Groups:151 - Group mapping 
> impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; 
> cacheTimeout=30; warningDeltaMs=5000
> 2019-08-27 11:47:16 DEBUG UserGroupInformation:254 - hadoop login
> 2019-08-27 11:47:16 DEBUG UserGroupInformation:187 - hadoop login commit
> 2019-08-27 11:47:16 DEBUG UserGroupInformation:215 - using local 
> user:UnixPrincipal: root
> 2019-08-27 11:47:16 DEBUG UserGroupInformation:221 - Using user: 
> "UnixPrincipal: root" with name root
> 2019-08-27 11:47:16 DEBUG UserGroupInformation:235 - User entry: "root"
> 2019-08-27 11:47:16 DEBUG UserGroupInformation:766 - UGI loginUser:root 
> (auth:SIMPLE)
> 2019-08-27 11:47:16 DEBUG OzoneClientFactory:287 - Using 
> org.apache.hadoop.ozone.client.rpc.RpcClient as client protocol.
> 2019-08-27 11:47:16 DEBUG Server:280 - rpcKind=RPC_PROTOCOL_BUFFER, 
> rpcRequestWrapperClass=class 
> org.apache.hadoop.ipc.ProtobufRpcEngine$RpcProtobufRequest, 
> rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@710f4dc7
> 2019-08-27 11:47:16 DEBUG Client:63 - getting client out of cache: 
> org.apache.hadoop.ipc.Client@24313fcc
> 2019-08-27 11:47:16 DEBUG Client:487 - The ping interval is 6 ms.
> 2019-08-27 11:47:16 DEBUG Client:785 - Connecting to 
> nnandi-1.gce.cloudera.com/172.31.117.213:9862
> 2019-08-27 11:47:16 DEBUG Client:1064 - IPC Client (580871917) connection to 
> 

[jira] [Updated] (HDDS-2127) Detailed Tools doc not reachable

2019-09-19 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2127:
---
Status: Patch Available  (was: Open)

> Detailed Tools doc not reachable
> 
>
> Key: HDDS-2127
> URL: https://issues.apache.org/jira/browse/HDDS-2127
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are two doc pages for tools:
>  * docs/beyond/tools.html
>  * docs/tools.html
> The latter is more detailed (has subpages for several tools), but it is not 
> reachable (even indirectly) from the start page.  Not sure if this is 
> intentional.
> On a related note, it has two "Testing tools" sub-pages. One of them is empty 
> and should be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2127) Detailed Tools doc not reachable

2019-09-19 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-2127:
--

Assignee: Elek, Marton

> Detailed Tools doc not reachable
> 
>
> Key: HDDS-2127
> URL: https://issues.apache.org/jira/browse/HDDS-2127
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Elek, Marton
>Priority: Major
>
> There are two doc pages for tools:
>  * docs/beyond/tools.html
>  * docs/tools.html
> The latter is more detailed (has subpages for several tools), but it is not 
> reachable (even indirectly) from the start page.  Not sure if this is 
> intentional.
> On a related note, it has two "Testing tools" sub-pages. One of them is empty 
> and should be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2141) Missing total number of operations

2019-09-19 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2141:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Missing total number of operations
> --
>
> Key: HDDS-2141
> URL: https://issues.apache.org/jira/browse/HDDS-2141
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: missing_total.png, total-new.png
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Total number of operations is missing from some metrics graphs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2148) Remove redundant code in CreateBucketHandler.java

2019-09-19 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2148:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Remove redundant code in CreateBucketHandler.java
> -
>
> Key: HDDS-2148
> URL: https://issues.apache.org/jira/browse/HDDS-2148
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code:java}
> if (isVerbose()) {
>   System.out.printf("Volume Name : %s%n", volumeName);
>   System.out.printf("Bucket Name : %s%n", bucketName);
>   if (bekName != null) {
> bb.setBucketEncryptionKey(bekName);
> System.out.printf("Bucket Encryption enabled with Key Name: %s%n",
> bekName);
>   }
> }
> {code}
> This jira aims to remove the redundant line 
> {{bb.setBucketEncryptionKey(bekName);}}, as the same operation is performed in 
> the preceding code block. This code block only prints additional details if 
> the verbose option was specified.
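
For clarity, a sketch of the intended result after removing the redundant call 
(assuming, per the description, that the encryption key was already set in the 
preceding block):

{code:java}
if (isVerbose()) {
  System.out.printf("Volume Name : %s%n", volumeName);
  System.out.printf("Bucket Name : %s%n", bucketName);
  if (bekName != null) {
    // setBucketEncryptionKey(bekName) already happened earlier;
    // the verbose block only reports it.
    System.out.printf("Bucket Encryption enabled with Key Name: %s%n",
        bekName);
  }
}
{code}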



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2154) Fix Checkstyle issues

2019-09-19 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2154:
---
Status: Patch Available  (was: Open)

> Fix Checkstyle issues
> -
>
> Key: HDDS-2154
> URL: https://issues.apache.org/jira/browse/HDDS-2154
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> Unfortunately, the checkstyle checks didn't work well from HDDS-2106 to 
> HDDS-2119. This patch fixes all the issues which were accidentally merged in 
> the meantime. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2154) Fix Checkstyle issues

2019-09-19 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-2154:
--

Assignee: Elek, Marton

> Fix Checkstyle issues
> -
>
> Key: HDDS-2154
> URL: https://issues.apache.org/jira/browse/HDDS-2154
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> Unfortunately, the checkstyle checks didn't work well from HDDS-2106 to 
> HDDS-2119. This patch fixes all the issues which were accidentally merged in 
> the meantime. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2154) Fix Checkstyle issues

2019-09-19 Thread Elek, Marton (Jira)
Elek, Marton created HDDS-2154:
--

 Summary: Fix Checkstyle issues
 Key: HDDS-2154
 URL: https://issues.apache.org/jira/browse/HDDS-2154
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Elek, Marton


Unfortunately, the checkstyle checks didn't work well from HDDS-2106 to HDDS-2119. 

This patch fixes all the issues which were accidentally merged in the meantime. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2119) Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle validation

2019-09-19 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2119:
---
Fix Version/s: 0.5.0

> Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle 
> validation
> 
>
> Key: HDDS-2119
> URL: https://issues.apache.org/jira/browse/HDDS-2119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> After HDDS-2106, hdds/ozone no longer relies on the hadoop parent pom, so we 
> have to use separate checkstyle.xml and suppressions.xml in the hdds/ozone 
> projects for checkstyle validation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2119) Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle validation

2019-09-19 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2119:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle 
> validation
> 
>
> Key: HDDS-2119
> URL: https://issues.apache.org/jira/browse/HDDS-2119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> After HDDS-2106, hdds/ozone no longer relies on the hadoop parent pom, so we 
> have to use separate checkstyle.xml and suppressions.xml in the hdds/ozone 
> projects for checkstyle validation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2016) Add option to enforce gdpr in Bucket Create command

2019-09-19 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2016:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Add option to enforce gdpr in Bucket Create command
> ---
>
> Key: HDDS-2016
> URL: https://issues.apache.org/jira/browse/HDDS-2016
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> End-to-end flow where the user can enforce GDPR for a bucket during creation 
> only.
> Add/update audit logs, as this will be a useful action for compliance purposes.
> Add docs to show usage.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2147) Include dumpstream in test report

2019-09-19 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2147:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Include dumpstream in test report
> -
>
> Key: HDDS-2147
> URL: https://issues.apache.org/jira/browse/HDDS-2147
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Include {{*.dumpstream}} in the unit test report, which may help in finding 
> the cause of the {{Corrupted STDOUT}} warning from the forked JVM.
> {noformat:title=https://github.com/elek/ozone-ci/blob/5429d0982c3b13d311ec353dba198f2f5253757c/pr/pr-hdds-2141-4zm8s/unit/output.log#L333-L334}
> [INFO] Running org.apache.hadoop.utils.TestMetadataStore
> [WARNING] Corrupted STDOUT by directly writing to native stream in forked JVM 
> 1. See FAQ web page and the dump file 
> /workdir/hadoop-hdds/common/target/surefire-reports/2019-09-18T12-58-05_531-jvmRun1.dumpstream
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-19 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-730:
--
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
> Attachments: fscmd.png, fswith_nonexsitcmd.png, 
> image-2018-10-24-17-15-39-097.png
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The ozone fs CLI help/usage page contains "Usage: hadoop fs [generic options]". 
> I believe the usage text should be updated.
> Check line 3 of the screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2149) Replace findbugs with spotbugs

2019-09-19 Thread Elek, Marton (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16933118#comment-16933118
 ] 

Elek, Marton commented on HDDS-2149:


The non-Jenkins CI scripts are using 
./hadoop-ozone/dev-support/checks/findbugs.sh.

As long as the shell script can still be run, it will work...

> Replace findbugs with spotbugs
> --
>
> Key: HDDS-2149
> URL: https://issues.apache.org/jira/browse/HDDS-2149
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> Findbugs has been marked deprecated, and all future work is now happening 
> under the SpotBugs project.
> This Jira is to investigate and possibly transition to SpotBugs in Ozone.
>  
> Ref1 - 
> [https://mailman.cs.umd.edu/pipermail/findbugs-discuss/2017-September/004383.html]
> Ref2 - [https://spotbugs.github.io/]
>  
> A turn-off for developers is that IntelliJ does not yet have a plugin for 
> SpotBugs - [https://youtrack.jetbrains.com/issue/IDEA-201846]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2065) Implement OMNodeDetails#toString

2019-09-18 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2065:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Implement OMNodeDetails#toString
> 
>
> Key: HDDS-2065
> URL: https://issues.apache.org/jira/browse/HDDS-2065
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Wrote this snippet while debugging OM HA. Might be useful for others when 
> they are debugging as well.
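
Such a toString could look roughly like this; a sketch with guessed field names, as 
the real OMNodeDetails members may differ:

{code:java}
// Hypothetical stand-in for OMNodeDetails; all fields are illustrative guesses.
public class OmNodeDetailsExample {
  private final String omServiceId = "om-service-1";
  private final String omNodeId = "om1";
  private final String hostAddress = "om1.example.com";
  private final int rpcPort = 9862;

  @Override
  public String toString() {
    return "OMNodeDetails{serviceId=" + omServiceId
        + ", nodeId=" + omNodeId
        + ", address=" + hostAddress + ':' + rpcPort
        + '}';
  }

  public static void main(String[] args) {
    System.out.println(new OmNodeDetailsExample());
  }
}
{code}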



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2143) Rename classes under package org.apache.hadoop.utils

2019-09-18 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2143:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Rename classes under package org.apache.hadoop.utils
> 
>
> Key: HDDS-2143
> URL: https://issues.apache.org/jira/browse/HDDS-2143
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Rename classes under package org.apache.hadoop.utils -> 
> org.apache.hadoop.hdds.utils in hadoop-hdds-common
>  
> Now, with the current layout, we might collide with Hadoop classes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2137) HddsClientUtils and OzoneUtils have duplicate verifyResourceName()

2019-09-18 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2137:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> HddsClientUtils and OzoneUtils have duplicate verifyResourceName()
> --
>
> Key: HDDS-2137
> URL: https://issues.apache.org/jira/browse/HDDS-2137
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> HddsClientUtils and OzoneUtils can share a single resource name verification 
> method that checks whether the bucket/volume name is a valid DNS name.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2065) Implement OMNodeDetails#toString

2019-09-18 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2065:
---
Status: Patch Available  (was: Open)

> Implement OMNodeDetails#toString
> 
>
> Key: HDDS-2065
> URL: https://issues.apache.org/jira/browse/HDDS-2065
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Wrote this snippet while debugging OM HA. Might be useful for others when 
> they are debugging as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2138) OM bucket operations do not add up

2019-09-18 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2138:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> OM bucket operations do not add up
> --
>
> Key: HDDS-2138
> URL: https://issues.apache.org/jira/browse/HDDS-2138
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: bucket.png
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Total OM bucket operations may be higher than the sum of counts for individual 
> operation types, because S3 bucket operations are displayed in separate charts.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2021) Upgrade Guava library to v26 in hdds project

2019-09-18 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2021:
---
Status: Open  (was: Patch Available)

Moving back to in-progress state based on the last PR comment:

bq. I will spend some more time on this to see what is the best way to fix it

> Upgrade Guava library to v26 in hdds project
> 
>
> Key: HDDS-2021
> URL: https://issues.apache.org/jira/browse/HDDS-2021
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Upgrade Guava library to v26 in hdds project



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-2021) Upgrade Guava library to v26 in hdds project

2019-09-18 Thread Elek, Marton (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16932360#comment-16932360
 ] 

Elek, Marton edited comment on HDDS-2021 at 9/18/19 12:11 PM:
--

Moving back to Open state based on the last PR comment:

bq. I will spend some more time on this to see what is the best way to fix it


was (Author: elek):
Moving back to in-progress state based on the last PR comment:

bq. I will spend some more time on this to see what is the best way to fix it

> Upgrade Guava library to v26 in hdds project
> 
>
> Key: HDDS-2021
> URL: https://issues.apache.org/jira/browse/HDDS-2021
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Upgrade Guava library to v26 in hdds project



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2134) OM Metrics graphs include empty request type

2019-09-18 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2134:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> OM Metrics graphs include empty request type
> 
>
> Key: HDDS-2134
> URL: https://issues.apache.org/jira/browse/HDDS-2134
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: key_metrics.png, om_metrics.png
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Ozone Manager Metrics seems to include an odd empty request type "s".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2022) Add additional freon tests

2019-09-18 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2022:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Add additional freon tests
> --
>
> Key: HDDS-2022
> URL: https://issues.apache.org/jira/browse/HDDS-2022
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Freon is a generic load generator tool for ozone (ozone freon) which supports 
> multiple generation patterns.
> As of now only the random key generator is implemented, which uses the ozone 
> RPC client.
> It would be great to add additional tests:
>  * Test key generation via s3 interface
>  * Test key generation via the hadoop fs interface
>  * Test key reads (validation)
>  * Test OM with direct RPC calls



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2144) MR job failing on secure Ozone cluster

2019-09-18 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-2144:
--

Assignee: Bharat Viswanadham

> MR job failing on secure Ozone cluster
> --
>
> Key: HDDS-2144
> URL: https://issues.apache.org/jira/browse/HDDS-2144
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Failing with the below error:
> Caused by: Client cannot authenticate via:[TOKEN, KERBEROS]
> org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
> via:[TOKEN, KERBEROS]
> at 
> org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:173)
> at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:390)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:617)
> at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:411)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:804)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:800)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:800)
> at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:411)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1572)
> at org.apache.hadoop.ipc.Client.call(Client.java:1403)
> at org.apache.hadoop.ipc.Client.call(Client.java:1367)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy79.submitRequest(Unknown Source)
> at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy79.submitRequest(Unknown Source)
> at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.submitRequest(OzoneManagerProtocolClientSideTranslatorPB.java:332)
> at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.getServiceList(OzoneManagerProtocolClientSideTranslatorPB.java:1163)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.hdds.tracing.TraceAllMethod.invoke(TraceAllMethod.java:66)
> at com.sun.proxy.$Proxy80.getServiceList(Unknown Source)
> at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.getScmAddressForClient(RpcClient.java:248)
> at org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:167)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:256)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:239)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:203)
> at 
> org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl.<init>(BasicOzoneClientAdapterImpl.java:161)
> at 
> org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl.<init>(OzoneClientAdapterImpl.java:50)
> at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.createAdapter(OzoneFileSystem.java:102)
> at 
> org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.initialize(BasicOzoneFileSystem.java:155)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> at 

[jira] [Commented] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-17 Thread Elek, Marton (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931301#comment-16931301
 ] 

Elek, Marton commented on HDDS-730:
---

Thanks @YiSheng Lien for the patch. It looks good to me, but it will be available 
only from Hadoop 3.3.

What do you think about extending FsShell (OzoneFsShell extends FsShell), 
overriding getUsagePrefix, and changing the main class of ozone sh to the 
new OzoneFsShell?

Would it be possible?
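
For illustration, a minimal sketch of what such a subclass could look like 
(assuming FsShell really exposes getUsagePrefix() as a protected hook, as the 
suggestion implies; class and constant names are illustrative):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FsShell;
import org.apache.hadoop.util.ToolRunner;

// Rough sketch of the suggestion above, not a tested patch.
public class OzoneFsShell extends FsShell {

  private static final String USAGE_PREFIX = "Usage: ozone fs [generic options]";

  public OzoneFsShell(Configuration conf) {
    super(conf);
  }

  @Override
  protected String getUsagePrefix() {
    return USAGE_PREFIX; // printed instead of "Usage: hadoop fs ..."
  }

  public static void main(String[] argv) throws Exception {
    // The ozone launcher script would point at this class instead of FsShell.
    System.exit(ToolRunner.run(new OzoneFsShell(new Configuration()), argv));
  }
}
{code}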



> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: image-2018-10-24-17-15-39-097.png, 
> ozone-cli-fs-withnonexist.png, ozone-cli-fs.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The ozone fs CLI help/usage page contains Usage: hadoop fs [ generic options ]. 
> I believe the usage should be updated.
> Check line 3 of the screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2135) OM Metric mismatch (MultipartUpload failures)

2019-09-17 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2135:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> OM Metric mismatch (MultipartUpload failures)
> -
>
> Key: HDDS-2135
> URL: https://issues.apache.org/jira/browse/HDDS-2135
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {{incNumCommitMultipartUploadPartFails()}} increments 
> {{numInitiateMultipartUploadFails}} instead of the counter for commit 
> failures.
> https://github.com/apache/hadoop/blob/85b1c728e4ed22f03db255f5ef34a2a79eb20d52/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMMetrics.java#L310-L312
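
A minimal illustration of the bug and the likely one-line fix (counters are 
simplified to plain longs here; the real OMMetrics class uses metrics library 
counters, so treat this as a sketch only):

{code:java}
// Illustrative reconstruction, not the actual OMMetrics code.
final class OMMetricsSketch {
  private long numInitiateMultipartUploadFails;
  private long numCommitMultipartUploadPartFails;

  void incNumCommitMultipartUploadPartFails() {
    // Buggy version did: numInitiateMultipartUploadFails++;
    numCommitMultipartUploadPartFails++; // fix: bump the commit-part counter
  }
}
{code}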



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2120) Remove hadoop classes from ozonefs-current jar

2019-09-17 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2120:
---
Fix Version/s: (was: 0.4.1)
   0.5.0

> Remove hadoop classes from ozonefs-current jar
> --
>
> Key: HDDS-2120
> URL: https://issues.apache.org/jira/browse/HDDS-2120
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> We have two kinds of ozone file system jars: current and legacy. current is 
> designed to work only with exactly the same hadoop version which is used for 
> compilation (3.2 as of now).
> But as of now the hadoop classes are included in the current jar, which is not 
> necessary, as the jar is expected to be used in an environment where the 
> hadoop classes (exactly the same hadoop classes) are already there. They can 
> be excluded.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2120) Remove hadoop classes from ozonefs-current jar

2019-09-17 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2120:
---
Fix Version/s: 0.4.1
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Remove hadoop classes from ozonefs-current jar
> --
>
> Key: HDDS-2120
> URL: https://issues.apache.org/jira/browse/HDDS-2120
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> We have two kinds of ozone file system jars: current and legacy. current is 
> designed to work only with exactly the same hadoop version which is used for 
> compilation (3.2 as of now).
> But as of now the hadoop classes are included in the current jar, which is not 
> necessary, as the jar is expected to be used in an environment where the 
> hadoop classes (exactly the same hadoop classes) are already there. They can 
> be excluded.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2137) HddsClientUtils and OzoneUtils have duplicate verifyResourceName()

2019-09-17 Thread Elek, Marton (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931223#comment-16931223
 ] 

Elek, Marton commented on HDDS-2137:


Thanks for working on this issue [~vjasani]. I added you to the HDDS project in 
jira as a contributor. Now you can be the assignee of any HDDS issue.

> HddsClientUtils and OzoneUtils have duplicate verifyResourceName()
> --
>
> Key: HDDS-2137
> URL: https://issues.apache.org/jira/browse/HDDS-2137
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> HddsClientUtils and OzoneUtils can share a single resource name verification 
> method that checks whether the bucket/volume name is a valid DNS name.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2137) HddsClientUtils and OzoneUtils have duplicate verifyResourceName()

2019-09-17 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-2137:
--

Assignee: Viraj Jasani

> HddsClientUtils and OzoneUtils have duplicate verifyResourceName()
> --
>
> Key: HDDS-2137
> URL: https://issues.apache.org/jira/browse/HDDS-2137
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> HddsClientUtils and OzoneUtils can share a single resource name verification 
> method that checks whether the bucket/volume name is a valid DNS name.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2078) Get/Renew DelegationToken NPE after HDDS-1909

2019-09-16 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2078:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Get/Renew DelegationToken NPE after HDDS-1909
> -
>
> Key: HDDS-2078
> URL: https://issues.apache.org/jira/browse/HDDS-2078
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> [https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-1909-plfbr/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.TestSecureOzoneCluster.txt]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2124) Random next links

2019-09-16 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2124:
---
Fix Version/s: 0.4.1
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Random next links 
> --
>
> Key: HDDS-2124
> URL: https://issues.apache.org/jira/browse/HDDS-2124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> _Next>>_ links at the bottom of some documentation pages seem to be out of 
> order.
>  * _Simple Single Ozone_ ("easy start") should link to one of the 
> intermediate level pages, but has no _Next_ link
>  * _Building From Sources_ (ninja) should be the last (no _Next_ link), but 
> points to _Minikube_ (intermediate)
>  * _Pseudo-cluster_ (intermediate) should point to the ninja level, but leads 
> to _Simple Single Ozone_ (easy start)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2109) Refactor scm.container.client config

2019-09-16 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2109:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Refactor scm.container.client config
> 
>
> Key: HDDS-2109
> URL: https://issues.apache.org/jira/browse/HDDS-2109
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM Client
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Extract typesafe config related to HDDS client with prefix 
> {{scm.container.client}}.
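
For context, a schematic of the typed config class this refactor points at. 
The annotations follow the hdds typesafe-config style, but the exact attribute 
names here are an assumption from memory, not copied from the patch:

{code:java}
import org.apache.hadoop.hdds.conf.Config;
import org.apache.hadoop.hdds.conf.ConfigGroup;

// Assumed shape of the typesafe-config annotations; treat attribute names
// and the example key as illustrative rather than authoritative.
@ConfigGroup(prefix = "scm.container.client")
public class ScmClientConfig {

  @Config(key = "max.size", defaultValue = "256",
      description = "Max number of cached client connections.")
  private int maxSize;

  public int getMaxSize() {
    return maxSize;
  }
}
{code}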



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2096) Ozone ACL document missing AddAcl API

2019-09-16 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2096:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Ozone ACL document missing AddAcl API
> -
>
> Key: HDDS-2096
> URL: https://issues.apache.org/jira/browse/HDDS-2096
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The current Ozone Native ACL APIs document looks like below; the AddAcl API is 
> missing.
>  
> h3. Ozone Native ACL APIs
> The ACLs can be manipulated by a set of APIs supported by Ozone. The APIs 
> supported are:
>  # *SetAcl* – This API will take user principal, the name, type of the ozone 
> object and a list of ACLs.
>  # *GetAcl* – This API will take the name and type of the ozone object and 
> will return a list of ACLs.
>  # *RemoveAcl* - This API will take the name, type of the ozone object and 
> the ACL that has to be removed.
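
For completeness, the missing *AddAcl* entry mirrors *RemoveAcl*. A purely 
illustrative sketch of the shape of the call (this interface is invented for 
illustration and is not the actual Ozone client API):

{code:java}
// Illustrative only: AddAcl takes the name and type of the ozone object
// plus the ACL to add, mirroring RemoveAcl from the list above.
interface OzoneNativeAclSketch {
  // e.g. addAcl("volume1", "VOLUME", "user:testuser:rw")
  boolean addAcl(String objectName, String objectType, String acl);
}
{code}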



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2044) Remove 'ozone' from the recon module names.

2019-09-16 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2044:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Remove 'ozone' from the recon module names.
> ---
>
> Key: HDDS-2044
> URL: https://issues.apache.org/jira/browse/HDDS-2044
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently the module names are "ozone-recon" and "ozone-recon-codegen". In order 
> to make them consistent with the other modules, they need to be changed to "recon" 
> and "recon-codegen".



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2121) Create a shaded ozone filesystem (client) jar

2019-09-16 Thread Elek, Marton (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16930483#comment-16930483
 ] 

Elek, Marton commented on HDDS-2121:


After HDDS-2120 we have only a limited number of 3rd party dependencies. I am 
not sure if this shading is required, as we need to exclude many libraries from 
the shading.

I uploaded a patch where only the following libraries are shaded:

 * guava
 * disruptor
 * Codahale metrics
 * picocli
 * netty

apache commons + protobuf are excluded.

The patch requires HDDS-2120. In the PR, both commits are included.

> Create a shaded ozone filesystem (client) jar
> -
>
> Key: HDDS-2121
> URL: https://issues.apache.org/jira/browse/HDDS-2121
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Arpit Agarwal
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We need a shaded Ozonefs jar that does not include Hadoop ecosystem 
> components (Hadoop, HDFS, Ratis, Zookeeper).
> A common expected use case for Ozone is Hadoop clients (3.2.0 and later) 
> wanting to access Ozone via the Ozone Filesystem interface. For these 
> clients, we want to add Ozone file system jar to the classpath, however we 
> want to use Hadoop ecosystem dependencies that are `provided` and already 
> expected to be in the client classpath.
> Note that this is different from the legacy jar which bundles a shaded Hadoop 
> 3.2.0.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2121) Create a shaded ozone filesystem (client) jar

2019-09-16 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-2121:
--

Assignee: Elek, Marton

> Create a shaded ozone filesystem (client) jar
> -
>
> Key: HDDS-2121
> URL: https://issues.apache.org/jira/browse/HDDS-2121
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Arpit Agarwal
>Assignee: Elek, Marton
>Priority: Blocker
>
> We need a shaded Ozonefs jar that does not include Hadoop ecosystem 
> components (Hadoop, HDFS, Ratis, Zookeeper).
> A common expected use case for Ozone is Hadoop clients (3.2.0 and later) 
> wanting to access Ozone via the Ozone Filesystem interface. For these 
> clients, we want to add Ozone file system jar to the classpath, however we 
> want to use Hadoop ecosystem dependencies that are `provided` and already 
> expected to be in the client classpath.
> Note that this is different from the legacy jar which bundles a shaded Hadoop 
> 3.2.0.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2129) Using dist profile fails with pom.ozone.xml as parent pom

2019-09-14 Thread Elek, Marton (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929750#comment-16929750
 ] 

Elek, Marton commented on HDDS-2129:


AFAIK the problem was introduced by switching to the javadoc 3.0 maven plugin. Now 
we need to ignore the warning in a different way.

Patch is uploaded (if you don't mind, I also fixed the assembly plugin version; 
let me know if you prefer to do it in a separate patch...)

> Using dist profile fails with pom.ozone.xml as parent pom
> -
>
> Key: HDDS-2129
> URL: https://issues.apache.org/jira/browse/HDDS-2129
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Elek, Marton
>Priority: Major
>
> The build fails with the {{dist}} profile. Details in a comment below.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2129) Using dist profile fails with pom.ozone.xml as parent pom

2019-09-14 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2129:
---
Status: Patch Available  (was: Open)

> Using dist profile fails with pom.ozone.xml as parent pom
> -
>
> Key: HDDS-2129
> URL: https://issues.apache.org/jira/browse/HDDS-2129
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The build fails with the {{dist}} profile. Details in a comment below.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2129) Using dist profile fails with pom.ozone.xml as parent pom

2019-09-14 Thread Elek, Marton (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929748#comment-16929748
 ] 

Elek, Marton commented on HDDS-2129:


Interesting: I tried it with 56b7571131b, which is the last commit before 
HDDS-2106, and it's failing in the same way. Still investigating...

> Using dist profile fails with pom.ozone.xml as parent pom
> -
>
> Key: HDDS-2129
> URL: https://issues.apache.org/jira/browse/HDDS-2129
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Elek, Marton
>Priority: Major
>
> The build fails with the {{dist}} profile. Details in a comment below.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2129) Using dist profile fails with pom.ozone.xml as parent pom

2019-09-13 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-2129:
--

Assignee: Elek, Marton

> Using dist profile fails with pom.ozone.xml as parent pom
> -
>
> Key: HDDS-2129
> URL: https://issues.apache.org/jira/browse/HDDS-2129
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Elek, Marton
>Priority: Major
>
> The build fails with the {{dist}} profile. Details in a comment below.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2125) maven-javadoc-plugin.version is missing in pom.ozone.xml

2019-09-13 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2125:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> maven-javadoc-plugin.version is missing in pom.ozone.xml
> 
>
> Key: HDDS-2125
> URL: https://issues.apache.org/jira/browse/HDDS-2125
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {{maven-javadoc-plugin.version}} is missing from {{pom.ozone.xml}} which is 
> causing build failure.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2110) Arbitrary file can be downloaded with the help of ProfilerServlet

2019-09-13 Thread Elek, Marton (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929675#comment-16929675
 ] 

Elek, Marton commented on HDDS-2110:


Thank you for reporting this [~adeo]. I am not sure how Major this is, as 
ProfileServlet is a developer-only tool, but we can definitely restrict the 
download to the output directory. 
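
One way to restrict it, sketched below. The output directory path and class 
name are placeholders; the actual patch may validate the name differently:

{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch: resolve the requested name against the profiler output directory
// and reject anything that normalizes to a path outside of it.
final class SafeProfileDownload {
  private static final Path OUTPUT_DIR =
      Paths.get(System.getProperty("java.io.tmpdir"), "prof-output");

  static File resolveInOutputDir(String fileName) throws IOException {
    Path base = OUTPUT_DIR.toAbsolutePath().normalize();
    Path requested = base.resolve(fileName).toAbsolutePath().normalize();
    if (!requested.startsWith(base)) {
      throw new IOException("Invalid file name: " + fileName);
    }
    return requested.toFile();
  }
}
{code}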

> Arbitrary file can be downloaded with the help of ProfilerServlet
> -
>
> Key: HDDS-2110
> URL: https://issues.apache.org/jira/browse/HDDS-2110
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Native
>Reporter: Aayush
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Line 324 in the file 
> [ProfileServlet.java|https://github.com/apache/hadoop/blob/217bdbd940a96986df3b96899b43caae2b5a9ed2/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ProfileServlet.java]
>  is prone to an arbitrary file download:
> {code:java}
> protected void doGetDownload(String fileName, final HttpServletRequest req,   
>final HttpServletResponse resp) throws IOException {
> File requestedFile = 
> ProfileServlet.OUTPUT_DIR.resolve(fileName).toAbsolutePath().toFile();{code}
> The String fileName is used directly as the requested file.
>  
> It is called at line 180 with the HTTP request parameter passed in directly:
> {code:java}
> if (req.getParameter("file") != null) {  
> doGetDownload(req.getParameter("file"), req, resp);  
> return;
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2110) Arbitrary file can be downloaded with the help of ProfilerServlet

2019-09-13 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2110:
---
Assignee: Elek, Marton
  Status: Patch Available  (was: Open)

> Arbitrary file can be downloaded with the help of ProfilerServlet
> -
>
> Key: HDDS-2110
> URL: https://issues.apache.org/jira/browse/HDDS-2110
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Native
>Reporter: Aayush
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Line 324 in the file 
> [ProfileServlet.java|https://github.com/apache/hadoop/blob/217bdbd940a96986df3b96899b43caae2b5a9ed2/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ProfileServlet.java]
>  is prone to an arbitrary file download:
> {code:java}
> protected void doGetDownload(String fileName, final HttpServletRequest req,   
>final HttpServletResponse resp) throws IOException {
> File requestedFile = 
> ProfileServlet.OUTPUT_DIR.resolve(fileName).toAbsolutePath().toFile();{code}
> The String fileName is used directly as the requested file.
>  
> It is called at line 180 with the HTTP request parameter passed in directly:
> {code:java}
> if (req.getParameter("file") != null) {  
> doGetDownload(req.getParameter("file"), req, resp);  
> return;
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2110) Arbitrary file can be downloaded with the help of ProfilerServlet

2019-09-13 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2110:
---
Summary: Arbitrary file can be downloaded with the help of ProfilerServlet  
(was: Arbitrary File Download)

> Arbitrary file can be downloaded with the help of ProfilerServlet
> -
>
> Key: HDDS-2110
> URL: https://issues.apache.org/jira/browse/HDDS-2110
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Native
>Reporter: Aayush
>Priority: Major
>
> Line 324 in the file 
> [ProfileServlet.java|https://github.com/apache/hadoop/blob/217bdbd940a96986df3b96899b43caae2b5a9ed2/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ProfileServlet.java]
>  is prone to an arbitrary file download:
> {code:java}
> protected void doGetDownload(String fileName, final HttpServletRequest req,   
>final HttpServletResponse resp) throws IOException {
> File requestedFile = 
> ProfileServlet.OUTPUT_DIR.resolve(fileName).toAbsolutePath().toFile();{code}
> The String fileName is used directly as the requested file.
>  
> It is called at line 180 with the HTTP request parameter passed in directly:
> {code:java}
> if (req.getParameter("file") != null) {  
> doGetDownload(req.getParameter("file"), req, resp);  
> return;
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2111) XSS fragments can be injected to the S3g landing page

2019-09-13 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2111:
---
Status: Patch Available  (was: Open)

> XSS fragments can be injected to the S3g landing page  
> ---
>
> Key: HDDS-2111
> URL: https://issues.apache.org/jira/browse/HDDS-2111
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Aayush
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> VULNERABILITY DETAILS
> There is a way to bypass anti-XSS filter for DOM XSS exploiting a 
> "window.location.href".
> Considering a typical URL:
> scheme://domain:port/path?query_string#fragment_id
> Browsers encode correctly both "path" and "query_string", but not the 
> "fragment_id". 
> So if the "fragment_id" is used, the vector is also not logged on the web server.
> VERSION
> Chrome Version: 10.0.648.134 (Official Build 77917) beta
> REPRODUCTION CASE
> This is an index.html page:
> {code:java}
> aws s3api --endpoint <script>document.write(window.location.href.replace("static/", ""))</script> create-bucket --bucket=wordcount
> {code}
> The attack vector is:
> index.html?#<script>alert('XSS');</script>
> * PoC:
> For your convenience, a minimalist PoC is located on:
> http://security.onofri.org/xss_location.html?#<script>alert('XSS');</script>
> * References
> - DOM Based Cross-Site Scripting or XSS of the Third Kind - 
> http://www.webappsec.org/projects/articles/071105.shtml
> reference:- 
> https://bugs.chromium.org/p/chromium/issues/detail?id=76796



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2111) XSS fragments can be injected to the S3g landing page

2019-09-13 Thread Elek, Marton (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929660#comment-16929660
 ] 

Elek, Marton commented on HDDS-2111:


Thanks [~adeo] for reporting it. I uploaded a PR. It is fixed in two ways (using just 
window.location.pathname plus setting a safer Content-Security-Policy).
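
A sketch of the second half of that fix: a servlet filter that attaches a 
restrictive Content-Security-Policy header. The exact policy string in the PR 
may differ, so treat this as an illustration:

{code:java}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Illustrative CSP filter: with no 'unsafe-inline' in the policy, an injected
// inline script fragment is not executed by the browser.
public class CspFilterSketch implements Filter {

  @Override
  public void init(FilterConfig filterConfig) {
    // no-op
  }

  @Override
  public void doFilter(ServletRequest req, ServletResponse resp,
      FilterChain chain) throws IOException, ServletException {
    ((HttpServletResponse) resp)
        .setHeader("Content-Security-Policy", "default-src 'self'");
    chain.doFilter(req, resp);
  }

  @Override
  public void destroy() {
    // no-op
  }
}
{code}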

> XSS fragments can be injected to the S3g landing page  
> ---
>
> Key: HDDS-2111
> URL: https://issues.apache.org/jira/browse/HDDS-2111
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Aayush
>Assignee: Elek, Marton
>Priority: Major
>
> VULNERABILITY DETAILS
> There is a way to bypass anti-XSS filter for DOM XSS exploiting a 
> "window.location.href".
> Considering a typical URL:
> scheme://domain:port/path?query_string#fragment_id
> Browsers encode correctly both "path" and "query_string", but not the 
> "fragment_id". 
> So if the "fragment_id" is used, the vector is also not logged on the web server.
> VERSION
> Chrome Version: 10.0.648.134 (Official Build 77917) beta
> REPRODUCTION CASE
> This is an index.html page:
> {code:java}
> aws s3api --endpoint <script>document.write(window.location.href.replace("static/", ""))</script> create-bucket --bucket=wordcount
> {code}
> The attack vector is:
> index.html?#<script>alert('XSS');</script>
> * PoC:
> For your convenience, a minimalist PoC is located on:
> http://security.onofri.org/xss_location.html?#<script>alert('XSS');</script>
> * References
> - DOM Based Cross-Site Scripting or XSS of the Third Kind - 
> http://www.webappsec.org/projects/articles/071105.shtml
> reference:- 
> https://bugs.chromium.org/p/chromium/issues/detail?id=76796



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2111) XSS fragments can be injected to the S3g landing page

2019-09-13 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2111:
---
Summary: XSS fragments can be injected to the S3g landing page  (was: DOM 
XSS)

> XSS fragments can be injected to the S3g landing page  
> ---
>
> Key: HDDS-2111
> URL: https://issues.apache.org/jira/browse/HDDS-2111
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Aayush
>Assignee: Elek, Marton
>Priority: Major
>
> VULNERABILITY DETAILS
> There is a way to bypass anti-XSS filter for DOM XSS exploiting a 
> "window.location.href".
> Considering a typical URL:
> scheme://domain:port/path?query_string#fragment_id
> Browsers encode correctly both "path" and "query_string", but not the 
> "fragment_id". 
> So if the "fragment_id" is used, the vector is also not logged on the web server.
> VERSION
> Chrome Version: 10.0.648.134 (Official Build 77917) beta
> REPRODUCTION CASE
> This is an index.html page:
> {code:java}
> aws s3api --endpoint <script>document.write(window.location.href.replace("static/", ""))</script> create-bucket --bucket=wordcount
> {code}
> The attack vector is:
> index.html?#<script>alert('XSS');</script>
> * PoC:
> For your convenience, a minimalist PoC is located on:
> http://security.onofri.org/xss_location.html?#<script>alert('XSS');</script>
> * References
> - DOM Based Cross-Site Scripting or XSS of the Third Kind - 
> http://www.webappsec.org/projects/articles/071105.shtml
> reference:- 
> https://bugs.chromium.org/p/chromium/issues/detail?id=76796



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2111) DOM XSS

2019-09-13 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-2111:
--

Assignee: Elek, Marton

> DOM XSS
> ---
>
> Key: HDDS-2111
> URL: https://issues.apache.org/jira/browse/HDDS-2111
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Aayush
>Assignee: Elek, Marton
>Priority: Major
>
> VULNERABILITY DETAILS
> There is a way to bypass anti-XSS filter for DOM XSS exploiting a 
> "window.location.href".
> Considering a typical URL:
> scheme://domain:port/path?query_string#fragment_id
> Browsers encode correctly both "path" and "query_string", but not the 
> "fragment_id". 
> So if the "fragment_id" is used, the vector is also not logged on the web server.
> VERSION
> Chrome Version: 10.0.648.134 (Official Build 77917) beta
> REPRODUCTION CASE
> This is an index.html page:
> {code:java}
> aws s3api --endpoint <script>document.write(window.location.href.replace("static/", ""))</script> create-bucket --bucket=wordcount
> {code}
> The attack vector is:
> index.html?#<script>alert('XSS');</script>
> * PoC:
> For your convenience, a minimalist PoC is located on:
> http://security.onofri.org/xss_location.html?#<script>alert('XSS');</script>
> * References
> - DOM Based Cross-Site Scripting or XSS of the Third Kind - 
> http://www.webappsec.org/projects/articles/071105.shtml
> reference:- 
> https://bugs.chromium.org/p/chromium/issues/detail?id=76796



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2131) Optimize replication type and creation time calculation in S3 MPU list call

2019-09-13 Thread Elek, Marton (Jira)
Elek, Marton created HDDS-2131:
--

 Summary: Optimize replication type and creation time calculation 
in S3 MPU list call
 Key: HDDS-2131
 URL: https://issues.apache.org/jira/browse/HDDS-2131
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Elek, Marton


Based on the review from [~bharatviswa]:

{code}
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java

  Table<String, OmKeyInfo> openKeyTable =
      metadataManager.getOpenKeyTable();

  OmKeyInfo omKeyInfo =
      openKeyTable.get(upload.getDbKey());
{code}

{quote}Here we are reading openKeyTable only for getting creation time. If we 
can have this information in omMultipartKeyInfo, we could avoid DB calls for 
openKeyTable.

To do this, We can set creationTime in OmMultipartKeyInfo during 
initiateMultipartUpload . In this way, we can get all the required information 
from the MultipartKeyInfo table.

And also StorageClass is missing from the returned OmMultipartUpload, as 
listMultipartUploads shows StorageClass information. For this, if we can return 
replicationType and depending on this value, we can set StorageClass in the 
listMultipartUploads Response.
{quote}
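
A rough sketch of the data-shape change the review suggests. Field and class 
names here are illustrative, not the real OmMultipartKeyInfo:

{code:java}
// Illustrative only: carry creationTime and replicationType in the multipart
// key record itself, so listMultipartUploads is served from a single table
// and never has to read the open key table.
final class MultipartUploadRecordSketch {
  final String uploadId;
  final long creationTime;       // set during initiateMultipartUpload
  final String replicationType;  // mapped to an S3 StorageClass in the reply

  MultipartUploadRecordSketch(String uploadId, long creationTime,
      String replicationType) {
    this.uploadId = uploadId;
    this.creationTime = creationTime;
    this.replicationType = replicationType;
  }
}
{code}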



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2130) Add pagination support to the S3 ListMPU call

2019-09-13 Thread Elek, Marton (Jira)
Elek, Marton created HDDS-2130:
--

 Summary: Add pagination support to the S3 ListMPU call
 Key: HDDS-2130
 URL: https://issues.apache.org/jira/browse/HDDS-2130
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Elek, Marton


HDDS-1054 introduced a simple implementation for the AWS S3 
ListMultipartUploads REST call.

However, pagination support (key-marker, max-uploads, upload-id-marker...) 
is missing. We should implement it in this jira.
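
For reference, a minimal sketch of the key-marker / max-uploads semantics to be 
implemented (illustrative code, assuming the uploads are already sorted by key):

{code:java}
import java.util.ArrayList;
import java.util.List;

// Illustrative paging over a sorted list of upload keys: start strictly
// after keyMarker and return at most maxUploads entries; the caller derives
// isTruncated and nextKeyMarker from the returned page.
final class MpuPagingSketch {
  static List<String> page(List<String> sortedKeys, String keyMarker,
      int maxUploads) {
    List<String> result = new ArrayList<>();
    for (String key : sortedKeys) {
      if (keyMarker != null && key.compareTo(keyMarker) <= 0) {
        continue;
      }
      if (result.size() >= maxUploads) {
        break;
      }
      result.add(key);
    }
    return result;
  }
}
{code}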



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2111) DOM XSS

2019-09-13 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2111:
---
Description: 
VULNERABILITY DETAILS
There is a way to bypass anti-XSS filter for DOM XSS exploiting a 
"window.location.href".

Considering a typical URL:

scheme://domain:port/path?query_string#fragment_id

Browsers encode correctly both "path" and "query_string", but not the 
"fragment_id". 

So if the "fragment_id" is used, the vector is also not logged on the web server.

VERSION
Chrome Version: 10.0.648.134 (Official Build 77917) beta

REPRODUCTION CASE
This is an index.html page:


{code:java}
aws s3api --endpoint <script>document.write(window.location.href.replace("static/", ""))</script> create-bucket --bucket=wordcount
{code}


The attack vector is:
index.html?#<script>alert('XSS');</script>

* PoC:
For your convenience, a minimalist PoC is located on:
http://security.onofri.org/xss_location.html?#<script>alert('XSS');</script>

* References
- DOM Based Cross-Site Scripting or XSS of the Third Kind - 
http://www.webappsec.org/projects/articles/071105.shtml


reference:- 

https://bugs.chromium.org/p/chromium/issues/detail?id=76796

  was:
VULNERABILITY DETAILS
There is a way to bypass anti-XSS filter for DOM XSS exploiting a 
"window.location.href".

Considering a typical URL:

scheme://domain:port/path?query_string#fragment_id

Browsers encode correctly both "path" and "query_string", but not the 
"fragment_id". 

So if the "fragment_id" is used, the vector is also not logged on the web server.

VERSION
Chrome Version: 10.0.648.134 (Official Build 77917) beta

REPRODUCTION CASE
This is an index.html page:


{code:java}
aws s3api --endpoint <script>document.write(window.location.href.replace("static/", ""))</script> create-bucket --bucket=wordcount
{code}


The attack vector is:
index.html?#<script>alert('XSS');</script>

* PoC:
For your convenience, a minimalist PoC is located on:
http://security.onofri.org/xss_location.html?#<script>alert('XSS');</script>

* References
- DOM Based Cross-Site Scripting or XSS of the Third Kind - 
http://www.webappsec.org/projects/articles/071105.shtml


reference:- 

https://bugs.chromium.org/p/chromium/issues/detail?id=76796


> DOM XSS
> ---
>
> Key: HDDS-2111
> URL: https://issues.apache.org/jira/browse/HDDS-2111
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Aayush
>Priority: Major
>
> VULNERABILITY DETAILS
> There is a way to bypass anti-XSS filter for DOM XSS exploiting a 
> "window.location.href".
> Considering a typical URL:
> scheme://domain:port/path?query_string#fragment_id
> Browsers encode correctly both "path" and "query_string", but not the 
> "fragment_id". 
> So if the "fragment_id" is used, the vector is also not logged on the web server.
> VERSION
> Chrome Version: 10.0.648.134 (Official Build 77917) beta
> REPRODUCTION CASE
> This is an index.html page:
> {code:java}
> aws s3api --endpoint <script>document.write(window.location.href.replace("static/", ""))</script> create-bucket --bucket=wordcount
> {code}
> The attack vector is:
> index.html?#<script>alert('XSS');</script>
> * PoC:
> For your convenience, a minimalist PoC is located on:
> http://security.onofri.org/xss_location.html?#<script>alert('XSS');</script>
> * References
> - DOM Based Cross-Site Scripting or XSS of the Third Kind - 
> http://www.webappsec.org/projects/articles/071105.shtml
> reference:- 
> https://bugs.chromium.org/p/chromium/issues/detail?id=76796



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2121) Create a shaded ozone filesystem (client) jar

2019-09-12 Thread Elek, Marton (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928993#comment-16928993
 ] 

Elek, Marton commented on HDDS-2121:


Thanks for opening this issue [~arp].

1. HDDS-2120 removes the included hadoop classes from the current jar (yes, 
it's very short, so it can also be merged into this one).
2. I agree with the idea: we can shade (package relocate) all the remaining 3rd 
party classes inside current to be sure they are not conflicting.


ps: the legacy jar is still required to support older versions of spark/hadoop (if we 
would like to support them...)



> Create a shaded ozone filesystem (client) jar
> -
>
> Key: HDDS-2121
> URL: https://issues.apache.org/jira/browse/HDDS-2121
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Arpit Agarwal
>Priority: Blocker
>
> We need a shaded Ozonefs jar that does not include Hadoop ecosystem 
> components (Hadoop, HDFS, Ratis, Zookeeper).
> A common expected use case for Ozone is Hadoop clients (3.2.0 and later) 
> wanting to access Ozone via the Ozone Filesystem interface. For these 
> clients, we want to add Ozone file system jar to the classpath, however we 
> want to use Hadoop ecosystem dependencies that are `provided` and already 
> expected to be in the client classpath.
> Note that this is different from the legacy jar which bundles a shaded Hadoop 
> 3.2.0.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2120) Remove hadoop classes from ozonefs-current jar

2019-09-12 Thread Elek, Marton (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928797#comment-16928797
 ] 

Elek, Marton commented on HDDS-2120:


cc [~arpaga]

> Remove hadoop classes from ozonefs-current jar
> --
>
> Key: HDDS-2120
> URL: https://issues.apache.org/jira/browse/HDDS-2120
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We have two kinds of ozone file system jars: current and legacy. The current jar is 
> designed to work only with exactly the same hadoop version which is used for 
> compilation (3.2 as of now).
> But as of now the hadoop classes are included in the current jar, which is not 
> necessary as the jar is expected to be used in an environment where the 
> hadoop classes (exactly the same hadoop classes) are already there. They can 
> be excluded.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2120) Remove hadoop classes from ozonefs-current jar

2019-09-12 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2120:
---
Status: Patch Available  (was: Open)

> Remove hadoop classes from ozonefs-current jar
> --
>
> Key: HDDS-2120
> URL: https://issues.apache.org/jira/browse/HDDS-2120
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We have two kinds of ozone file system jars: current and legacy. The current jar is 
> designed to work only with exactly the same hadoop version which is used for 
> compilation (3.2 as of now).
> But as of now the hadoop classes are included in the current jar, which is not 
> necessary as the jar is expected to be used in an environment where the 
> hadoop classes (exactly the same hadoop classes) are already there. They can 
> be excluded.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2120) Remove hadoop classes from ozonefs-current jar

2019-09-12 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-2120:
--

Assignee: Elek, Marton

> Remove hadoop classes from ozonefs-current jar
> --
>
> Key: HDDS-2120
> URL: https://issues.apache.org/jira/browse/HDDS-2120
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> We have two kinds of ozone file system jars: current and legacy. The current jar is 
> designed to work only with exactly the same hadoop version which is used for 
> compilation (3.2 as of now).
> But as of now the hadoop classes are included in the current jar, which is not 
> necessary as the jar is expected to be used in an environment where the 
> hadoop classes (exactly the same hadoop classes) are already there. They can 
> be excluded.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2120) Remove hadoop classes from ozonefs-current jar

2019-09-12 Thread Elek, Marton (Jira)
Elek, Marton created HDDS-2120:
--

 Summary: Remove hadoop classes from ozonefs-current jar
 Key: HDDS-2120
 URL: https://issues.apache.org/jira/browse/HDDS-2120
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Elek, Marton


We have two kinds of ozone file system jars: current and legacy. The current jar is 
designed to work only with exactly the same hadoop version which is used for 
compilation (3.2 as of now).

But as of now the hadoop classes are included in the current jar, which is not 
necessary as the jar is expected to be used in an environment where the hadoop 
classes (exactly the same hadoop classes) are already there. They can be 
excluded.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2119) Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle validation

2019-09-12 Thread Elek, Marton (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928792#comment-16928792
 ] 

Elek, Marton commented on HDDS-2119:


Good idea. With a separate checkstyle we will have the possibility to adjust the 
checkstyle rules (for example, to use longer lines...).

> Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle 
> validation
> 
>
> Key: HDDS-2119
> URL: https://issues.apache.org/jira/browse/HDDS-2119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>
> After HDDS-2106 hdds/ozone no longer relies on the hadoop parent pom, so we have 
> to use separate checkstyle.xml and suppressions.xml in hdds/ozone projects for 
> checkstyle validation.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2106) Avoid usage of hadoop projects as parent of hdds/ozone

2019-09-11 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2106:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Avoid usage of hadoop projects as parent of hdds/ozone
> --
>
> Key: HDDS-2106
> URL: https://issues.apache.org/jira/browse/HDDS-2106
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Ozone uses hadoop as a dependency. The dependency is defined on multiple levels:
>  1. the hadoop artifacts are defined in the  sections
>  2. both hadoop-ozone and hadoop-hdds projects use "hadoop-project" as the 
> parent
> As we already have a slightly different assembly process it could be more 
> resilient to use a dedicated parent project instead of the hadoop one. With 
> this approach it will be easier to upgrade the versions as we don't need to 
> be careful about the pom contents, only about the used dependencies.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2106) Avoid usage of hadoop projects as parent of hdds/ozone

2019-09-10 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2106:
---
Status: Patch Available  (was: Open)

> Avoid usage of hadoop projects as parent of hdds/ozone
> --
>
> Key: HDDS-2106
> URL: https://issues.apache.org/jira/browse/HDDS-2106
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Ozone uses hadoop as a dependency. The dependency is defined on multiple levels:
>  1. the hadoop artifacts are defined in the  sections
>  2. both hadoop-ozone and hadoop-hdds projects use "hadoop-project" as the 
> parent
> As we already have a slightly different assembly process it could be more 
> resilient to use a dedicated parent project instead of the hadoop one. With 
> this approach it will be easier to upgrade the versions as we don't need to 
> be careful about the pom contents, only about the used dependencies.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2106) Avoid usage of hadoop projects as parent of hdds/ozone

2019-09-09 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2106:
---
Priority: Blocker  (was: Major)

> Avoid usage of hadoop projects as parent of hdds/ozone
> --
>
> Key: HDDS-2106
> URL: https://issues.apache.org/jira/browse/HDDS-2106
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Priority: Blocker
>
> Ozone uses hadoop as a dependency. The dependency is defined on multiple levels:
>  1. the hadoop artifacts are defined in the  sections
>  2. both hadoop-ozone and hadoop-hdds projects use "hadoop-project" as the 
> parent
> As we already have a slightly different assembly process it could be more 
> resilient to use a dedicated parent project instead of the hadoop one. With 
> this approach it will be easier to upgrade the versions as we don't need to 
> be careful about the pom contents, only about the used dependencies.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2106) Avoid usage of hadoop projects as parent of hdds/ozone

2019-09-09 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-2106:
--

Assignee: Elek, Marton

> Avoid usage of hadoop projects as parent of hdds/ozone
> --
>
> Key: HDDS-2106
> URL: https://issues.apache.org/jira/browse/HDDS-2106
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>
> Ozone uses hadoop as a dependency. The dependency is defined on multiple levels:
>  1. the hadoop artifacts are defined in the  sections
>  2. both hadoop-ozone and hadoop-hdds projects use "hadoop-project" as the 
> parent
> As we already have a slightly different assembly process it could be more 
> resilient to use a dedicated parent project instead of the hadoop one. With 
> this approach it will be easier to upgrade the versions as we don't need to 
> be careful about the pom contents, only about the used dependencies.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2106) Avoid usage of hadoop projects as parent of hdds/ozone

2019-09-09 Thread Elek, Marton (Jira)
Elek, Marton created HDDS-2106:
--

 Summary: Avoid usage of hadoop projects as parent of hdds/ozone
 Key: HDDS-2106
 URL: https://issues.apache.org/jira/browse/HDDS-2106
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Elek, Marton


Ozone uses hadoop as a dependency. The dependency is defined on multiple levels:

 1. the hadoop artifacts are defined in the  sections
 2. both hadoop-ozone and hadoop-hdds projects use "hadoop-project" as the 
parent

As we already have a slightly different assembly process it could be more 
resilient to use a dedicated parent project instead of the hadoop one. With 
this approach it will be easier to upgrade the versions as we don't need to be 
careful about the pom contents, only about the used dependencies.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2101) Ozone filesystem provider doesn't exist

2019-09-09 Thread Elek, Marton (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16925893#comment-16925893
 ] 

Elek, Marton commented on HDDS-2101:


The problem is that the exact implementation depends on the current 
environment. In case of a legacy hadoop it should be BasicOzoneFileSystem, for 
hadoop 3.2 it should be OzoneFileSystem...
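
A minimal sketch of why the provider file matters, assuming Hadoop's standard 
java.util.ServiceLoader based discovery (the main class below is illustrative 
only):

{code:java}
import java.util.ServiceLoader;

import org.apache.hadoop.fs.FileSystem;

public class ListFileSystemProviders {
  public static void main(String[] args) {
    // Hadoop discovers FileSystem implementations from
    // META-INF/services/org.apache.hadoop.fs.FileSystem entries on the
    // classpath; without that file in the ozonefs jar the scheme cannot be
    // resolved automatically.
    for (FileSystem fs : ServiceLoader.load(FileSystem.class)) {
      System.out.println(fs.getClass().getName());
    }
  }
}
{code}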

> Ozone filesystem provider doesn't exist
> ---
>
> Key: HDDS-2101
> URL: https://issues.apache.org/jira/browse/HDDS-2101
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Jitendra Nath Pandey
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>
> We don't have a filesystem provider in META-INF. 
> i.e. following file doesn't exist.
> {{hadoop-ozone/ozonefs/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}
> See for example
> {{hadoop-tools/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2090) Failing acceptance test - smoketests.ozonesecure-s3.MultipartUpload

2019-09-05 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-2090:
--

Assignee: Elek, Marton

> Failing acceptance test - smoketests.ozonesecure-s3.MultipartUpload
> ---
>
> Key: HDDS-2090
> URL: https://issues.apache.org/jira/browse/HDDS-2090
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Elek, Marton
>Priority: Major
>
> {{"smoketests.ozonesecure-s3.MultipartUpload.Test Multipart Upload with the 
> simplified aws s3 cp API"}} acceptance test is failing.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2075) Tracing in OzoneManager call is propagated with wrong parent

2019-09-02 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2075:
---
Parent: HDDS-2066
Issue Type: Sub-task  (was: Bug)

> Tracing in OzoneManager call is propagated with wrong parent
> 
>
> Key: HDDS-2075
> URL: https://issues.apache.org/jira/browse/HDDS-2075
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Priority: Major
>
> As you can see in the attached screenshot the OzoneManager.createBucket 
> (server side) tracing information is a child of the freon.createBucket span 
> instead of the freon OzoneManagerProtocolPB.submitRequest span.
> To avoid confusion the hierarchy should be fixed (most probably we generate 
> the child span AFTER we already serialized the parent one to the message).



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2075) Tracing in OzoneManager call is propagated with wrong parent

2019-09-02 Thread Elek, Marton (Jira)
Elek, Marton created HDDS-2075:
--

 Summary: Tracing in OzoneManager call is propagated with wrong 
parent
 Key: HDDS-2075
 URL: https://issues.apache.org/jira/browse/HDDS-2075
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Elek, Marton


As you can see in the attached screenshot the OzoneManager.createBucket (server 
side) tracing information is a child of the freon.createBucket span instead of 
the freon OzoneManagerProtocolPB.submitRequest span.

To avoid confusion the hierarchy should be fixed (most probably we generate the 
child span AFTER we already serialized the parent one to the message).
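
A minimal sketch of the intended ordering, assuming the plain io.opentracing API 
(0.32+); the span names come from the example above, the rest is illustrative:

{code:java}
import io.opentracing.Scope;
import io.opentracing.Span;
import io.opentracing.Tracer;
import io.opentracing.util.GlobalTracer;

public class ClientSideTracingSketch {
  public void submitRequest() {
    Tracer tracer = GlobalTracer.get();
    // Start the client side RPC span FIRST...
    Span rpcSpan = tracer.buildSpan("OzoneManagerProtocolPB.submitRequest")
        .asChildOf(tracer.activeSpan())
        .start();
    try (Scope ignored = tracer.activateSpan(rpcSpan)) {
      // ...and only AFTER that serialize the now-active span context into the
      // protobuf message, so that the server side OzoneManager.createBucket
      // span becomes a child of submitRequest instead of freon.createBucket.
      // sendMessage(tracer.activeSpan().context());  // illustrative call
    } finally {
      rpcSpan.finish();
    }
  }
}
{code}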



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2074) Use annotations to define description/filter/required filters of an InsightPoint

2019-09-02 Thread Elek, Marton (Jira)
Elek, Marton created HDDS-2074:
--

 Summary: Use annotations to define description/filter/required 
filters of an InsightPoint
 Key: HDDS-2074
 URL: https://issues.apache.org/jira/browse/HDDS-2074
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Elek, Marton


The InsightPoint interface defines the getDescription method to provide the human 
readable description of the insight point.

To have better separation between the provided log/metrics/config information 
and the metadata, it would be better to use an annotation for this which can 
also include the filters (HDDS-2071).

Something like this:

{code}
@InsightPoint(description = "Information from the async event queue of the SCM",
    supportedFilters = {"eventType"}, requiredFilters = {})
public class EventQueueInsight extends BaseInsightPoint {

...

}
{code}
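
A possible shape of the annotation itself, as a sketch (the attribute names come 
from the example above; the retention/target choices are assumptions):

{code:java}
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Runtime retention so the insight CLI can read the metadata via reflection.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface InsightPoint {
  String description();
  String[] supportedFilters() default {};
  String[] requiredFilters() default {};
}
{code}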



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2073) Make SCMSecurityProtocol message based

2019-09-02 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2073:
---
Description: 
We started to use a generic pattern where we have only one method in the grpc 
service and the main message contains all the required common information (eg. 
tracing).

SCMSecurityProtocol.proto is not yet migrated to this approach. To make our 
generic debug tool more powerful and unify our protocols I suggest transforming 
this protocol as well.

  was:
We started to use a generic pattern where we have only one method in the grpc 
service and the main message contains all the required common information (eg. 
tracing).

StorageContainerLocationProtocolService is not yet migrated to this approach. 
To make our generic debug tool more powerful and unify our protocols I suggest 
transforming this protocol as well.


> Make SCMSecurityProtocol message based
> --
>
> Key: HDDS-2073
> URL: https://issues.apache.org/jira/browse/HDDS-2073
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Elek, Marton
>Priority: Major
>
> We started to use a generic pattern where we have only one method in the grpc 
> service and the main message contains all the required common information 
> (eg. tracing).
> SCMSecurityProtocol.proto is not yet migrated to this approach. To make our 
> generic debug tool more powerful and unify our protocols I suggest 
> transforming this protocol as well.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2073) Make SCMSecurityProtocol message based

2019-09-02 Thread Elek, Marton (Jira)
Elek, Marton created HDDS-2073:
--

 Summary: Make SCMSecurityProtocol message based
 Key: HDDS-2073
 URL: https://issues.apache.org/jira/browse/HDDS-2073
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: SCM
Reporter: Elek, Marton


We started to use a generic pattern where we have only one method in the grpc 
service and the main message contains all the required common information (eg. 
tracing).

StorageContainerLocationProtocolService is not yet migrated to this approach. 
To make our generic debug tool more powerful and unify our protocols I suggest 
transforming this protocol as well.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2072) Make StorageContainerLocationProtocolService message based

2019-09-02 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2072:
---
Description: 
We started to use a generic pattern where we have only one method in the grpc 
service and the main message contains all the required common information (eg. 
tracing).

StorageContainerLocationProtocolService is not yet migrated to this approach. 
To make our generic debug tool more powerful and unify our protocols I suggest 
transforming this protocol as well.

  was:
We started to use a generic pattern where we have only one method in the grpc 
service and the main message contains all the required common information (eg. 
tracing).

StorageContainerDatanodeProtocolService is not yet migrated to this approach. 
To make our generic debug tool more powerful and unify our protocols I suggest 
transforming this protocol as well.


> Make StorageContainerLocationProtocolService message based
> --
>
> Key: HDDS-2072
> URL: https://issues.apache.org/jira/browse/HDDS-2072
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Elek, Marton
>Priority: Major
>
> We started to use a generic pattern where we have only one method in the grpc 
> service and the main message contains all the required common information 
> (eg. tracing).
> StorageContainerLocationProtocolService is not yet migrated to this approach. 
> To make our generic debug tool more powerful and unify our protocols I 
> suggest transforming this protocol as well.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2072) Make StorageContainerLocationProtocolService message based

2019-09-02 Thread Elek, Marton (Jira)
Elek, Marton created HDDS-2072:
--

 Summary: Make StorageContainerLocationProtocolService message based
 Key: HDDS-2072
 URL: https://issues.apache.org/jira/browse/HDDS-2072
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: SCM
Reporter: Elek, Marton


We started to use a generic pattern where we have only one method in the grpc 
service and the main message contains all the required common information (eg. 
tracing).

StorageContainerDatanodeProtocolService is not yet migrated to this approach. 
To make our generic debug tool more powerful and unify our protocols I suggest 
transforming this protocol as well.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2071) Support filters in ozone insight point

2019-09-02 Thread Elek, Marton (Jira)
Elek, Marton created HDDS-2071:
--

 Summary: Support filters in ozone insight point
 Key: HDDS-2071
 URL: https://issues.apache.org/jira/browse/HDDS-2071
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Elek, Marton


With Ozone insight we can print out all the logs / metrics of one specific 
component (eg. scm.node-manager or scm.event-queue).

It would be great to support additional filtering capabilities where the output 
is filtered based on specific keys.

For example to print out all of the logs related to one datanode or related to 
one type of RPC request.

Filter should be a key value map (eg. --filter 
datanode=sjdhfhf,rpc=createChunk) which can be defined in the ozone insight CLI.

As we have no option to add additional tags to the logs (it may be supported by 
log4j2 but not with slf4j), the first version can be implemented by 
pattern matching.

For example, SCMNodeManager.processNodeReport contains trace/debug logs which 
include the "[datanode={}]" part. This formatting convention can be used to 
print out only the related information, as in the sketch below.
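
A minimal sketch of such a pattern-matching filter (the class name and the 
hard-coded filter map are illustrative only):

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Predicate;
import java.util.stream.Stream;

public class LogLineFilter {
  public static void main(String[] args) {
    // Parsed form of: --filter datanode=ozone_datanode_1.ozone_default
    Map<String, String> filters = new LinkedHashMap<>();
    filters.put("datanode", "ozone_datanode_1.ozone_default");

    // A log line matches if it contains every [key=value] tag of the filters.
    Predicate<String> matchesAll = line -> filters.entrySet().stream()
        .allMatch(e -> line.contains("[" + e.getKey() + "=" + e.getValue() + "]"));

    Stream.of(
        "Processing node report from [datanode=ozone_datanode_1.ozone_default]",
        "Processing node report from [datanode=ozone_datanode_2.ozone_default]")
        .filter(matchesAll)
        .forEach(System.out::println);
  }
}
{code}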



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2070) Create insight point to debug one specific pipeline

2019-09-02 Thread Elek, Marton (Jira)
Elek, Marton created HDDS-2070:
--

 Summary: Create insight point to debug one specific pipeline
 Key: HDDS-2070
 URL: https://issues.apache.org/jira/browse/HDDS-2070
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Elek, Marton


During the first demo of the ozone insight tool we had a demo insight point to 
debug Ratis pipelines. It was not stable enough to be included in the first patch, 
but here we can add it.

The goal is to implement a new insight point (eg. datanode.pipeline) which can 
show information about one pipeline.

It can be done by retrieving the hosts of the pipeline and generating the 
loggers/metrics (InsightPoint.getRelatedLoggers and InsightPoint.getMetrics) 
based on the pipeline information (the same loggers should be displayed from all 
three datanodes); see the sketch below.

The pipeline id can be defined as a filter parameter which (in this case) 
should be required.
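
A rough, self-contained sketch of the idea, assuming only the host list of the 
pipeline is available (LoggerSource here is a placeholder, not the real insight 
class):

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PipelineInsightSketch {

  // Placeholder for the real insight LoggerSource type (illustrative only).
  static class LoggerSource {
    final String component;
    final String loggerName;
    final String level;

    LoggerSource(String component, String loggerName, String level) {
      this.component = component;
      this.loggerName = loggerName;
      this.level = level;
    }
  }

  // Emit the same Ratis logger once per pipeline member host, so the insight
  // output can be collected from all three datanodes of the pipeline.
  static List<LoggerSource> relatedLoggers(List<String> pipelineHosts) {
    List<LoggerSource> loggers = new ArrayList<>();
    for (String host : pipelineHosts) {
      loggers.add(new LoggerSource("datanode@" + host,
          "org.apache.ratis.server.impl", "DEBUG"));
    }
    return loggers;
  }

  public static void main(String[] args) {
    for (LoggerSource source : relatedLoggers(
        Arrays.asList("datanode1", "datanode2", "datanode3"))) {
      System.out.println(source.component + " -> " + source.loggerName);
    }
  }
}
{code}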
 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2068) Make StorageContainerDatanodeProtocolService message based

2019-09-02 Thread Elek, Marton (Jira)
Elek, Marton created HDDS-2068:
--

 Summary: Make StorageContainerDatanodeProtocolService message based
 Key: HDDS-2068
 URL: https://issues.apache.org/jira/browse/HDDS-2068
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: SCM
Reporter: Elek, Marton


We started to use a generic pattern where we have only one method in the grpc 
service and the main message contains all the required common information (eg. 
tracing).

StorageContainerDatanodeProtocolService is not yet migrated to this approach. 
To make our generic debug tool more powerful and unify our protocols I suggest 
transforming this protocol as well.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2067) Create generic service facade with tracing/metrics/logging support

2019-09-02 Thread Elek, Marton (Jira)
Elek, Marton created HDDS-2067:
--

 Summary: Create generic service facade with 
tracing/metrics/logging support
 Key: HDDS-2067
 URL: https://issues.apache.org/jira/browse/HDDS-2067
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Elek, Marton


We started to use a message based GRPC approach. We have only one method and 
the requests are routed based on a "type" field in the proto message.

For example in OM protocol:

{code}
/**
 The OM service that takes care of Ozone namespace.
*/
service OzoneManagerService {
// A client-to-OM RPC to send client requests to OM Ratis server
rpc submitRequest(OMRequest)
  returns(OMResponse);
}
{code}

And 

{code}

message OMRequest {
  required Type cmdType = 1; // Type of the command

...
{code}

This approach makes it possible to use the same code to process incoming 
messages on the server side.

The ScmBlockLocationProtocolServerSideTranslatorPB.send method contains the logic 
of:

 * Logging the request/response message (can be displayed with ozone insight)
 * Updating metrics
 * Handling opentracing context propagation.


These functions are generic. For example 
OzoneManagerProtocolServerSideTranslatorPB uses the same (=similar) code.

The goal in this jira is to move the common code for tracing/request 
logging/response logging/metrics calculation to a generic utility which can be 
used from all the server-side translators. A sketch of such a facade follows.
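
A minimal sketch (the class and method names are illustrative, not the final 
API; only the wrapped concerns come from the list above, and the tracing scope 
would be opened around the same handler call):

{code:java}
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Function;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ProtocolMessageDispatcher<REQUEST, RESPONSE> {

  private static final Logger LOG =
      LoggerFactory.getLogger(ProtocolMessageDispatcher.class);

  private final AtomicLong requestCounter = new AtomicLong();

  // Wraps a single message-based handler call with request/response logging
  // and a per-dispatcher counter.
  public RESPONSE processRequest(String type, REQUEST request,
      Function<REQUEST, RESPONSE> handler) {
    LOG.trace("{} request: {}", type, request);
    RESPONSE response = handler.apply(request);
    LOG.trace("{} response: {}", type, response);
    requestCounter.incrementAndGet();
    return response;
  }
}
{code}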



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1935) Improve the visibility with Ozone Insight tool

2019-09-02 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1935:
---
Parent: HDDS-2066
Issue Type: Sub-task  (was: New Feature)

> Improve the visibility with Ozone Insight tool
> --
>
> Key: HDDS-1935
> URL: https://issues.apache.org/jira/browse/HDDS-1935
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Visibility is a key aspect for the operation of any Ozone cluster. We need 
> better visibility to improve correctness and performance. While 
> distributed tracing is a good tool for improving the visibility of 
> performance, we have no powerful tool which can be used to check the internal 
> state of the Ozone cluster and debug certain correctness issues.
> To improve the visibility of the internal components I propose to introduce a 
> new command line application `ozone insight`.
> The new tool will show the selected metrics / logs / configuration for any of 
> the internal components (like replication-manager, pipeline, etc.).
> For each insight points we can define the required logs and log levels, 
> metrics and configuration and the tool can display only the component 
> specific information during the debug.
> h2. Usage
> First we can check the available insight points:
> {code}
> bash-4.2$ ozone insight list
> Available insight points:
>   scm.node-manager SCM Datanode management related 
> information.
>   scm.replica-manager  SCM closed container replication 
> manager
>   scm.event-queue  Information about the internal async 
> event delivery
>   scm.protocol.block-location  SCM Block location protocol endpoint
>   scm.protocol.container-location  Planned insight point which is not yet 
> implemented.
>   scm.protocol.datanodePlanned insight point which is not yet 
> implemented.
>   scm.protocol.securityPlanned insight point which is not yet 
> implemented.
>   scm.http Planned insight point which is not yet 
> implemented.
>   om.key-manager   OM Key Manager
>   om.protocol.client   Ozone Manager RPC endpoint
>   om.http  Planned insight point which is not yet 
> implemented.
>   datanode.pipeline[id]More information about one ratis 
> datanode ring.
>   datanode.rocksdb More information about one ratis 
> datanode ring.
>   s3g.http Planned insight point which is not yet 
> implemented.
> {code}
> Insight points can define configuration, metrics and/or logs. Configuration 
> can be displayed based on the configuration objects:
> {code}
> ozone insight config scm.protocol.block-location
> Configuration for `scm.protocol.block-location` (SCM Block location protocol 
> endpoint)
> >>> ozone.scm.block.client.bind.host
>default: 0.0.0.0
>current: 0.0.0.0
> The hostname or IP address used by the SCM block client  endpoint to bind
> >>> ozone.scm.block.client.port
>default: 9863
>current: 9863
> The port number of the Ozone SCM block client service.
> >>> ozone.scm.block.client.address
>default: ${ozone.scm.client.address}
>current: scm
> The address of the Ozone SCM block client service. If not defined value of 
> ozone.scm.client.address is used
> {code}
> Metrics can be retrieved from the prometheus entrypoint:
> {code}
> ozone insight metrics scm.protocol.block-location
> Metrics for `scm.protocol.block-location` (SCM Block location protocol 
> endpoint)
> RPC connections
>   Open connections: 0
>   Dropped connections: 0
>   Received bytes: 0
>   Sent bytes: 0
> RPC queue
>   RPC average queue time: 0.0
>   RPC call queue length: 0
> RPC performance
>   RPC processing time average: 0.0
>   Number of slow calls: 0
> Message type counters
>   Number of AllocateScmBlock: 0
>   Number of DeleteScmKeyBlocks: 0
>   Number of GetScmInfo: 2
>   Number of SortDatanodes: 0
> {code}
> Log levels can be adjusted with the existing logLevel servlet and can be 
> collected / streamed via a simple logstream servlet:
> {code}
> ozone insight log scm.node-manager
> [SCM] 2019-08-08 12:42:37,392 
> [DEBUG|org.apache.hadoop.hdds.scm.node.SCMNodeManager|SCMNodeManager] 
> Processing node report from [datanode=ozone_datanode_1.ozone_default]
> [SCM] 2019-08-08 12:43:37,392 
> [DEBUG|org.apache.hadoop.hdds.scm.node.SCMNodeManager|SCMNodeManager] 
> Processing node report from [datanode=ozone_datanode_1.ozone_default]
> [SCM] 2019-08-08 12:44:37,392 
> 

[jira] [Updated] (HDDS-1935) Improve the visibility with Ozone Insight tool

2019-09-02 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1935:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Improve the visibility with Ozone Insight tool
> --
>
> Key: HDDS-1935
> URL: https://issues.apache.org/jira/browse/HDDS-1935
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Visibility is a key aspect for the operation of any Ozone cluster. We need 
> better visibility to improve correctness and performance. While 
> distributed tracing is a good tool for improving the visibility of 
> performance, we have no powerful tool which can be used to check the internal 
> state of the Ozone cluster and debug certain correctness issues.
> To improve the visibility of the internal components I propose to introduce a 
> new command line application `ozone insight`.
> The new tool will show the selected metrics / logs / configuration for any of 
> the internal components (like replication-manager, pipeline, etc.).
> For each insight points we can define the required logs and log levels, 
> metrics and configuration and the tool can display only the component 
> specific information during the debug.
> h2. Usage
> First we can check the available insight points:
> {code}
> bash-4.2$ ozone insight list
> Available insight points:
>   scm.node-manager SCM Datanode management related 
> information.
>   scm.replica-manager  SCM closed container replication 
> manager
>   scm.event-queue  Information about the internal async 
> event delivery
>   scm.protocol.block-location  SCM Block location protocol endpoint
>   scm.protocol.container-location  Planned insight point which is not yet 
> implemented.
>   scm.protocol.datanodePlanned insight point which is not yet 
> implemented.
>   scm.protocol.securityPlanned insight point which is not yet 
> implemented.
>   scm.http Planned insight point which is not yet 
> implemented.
>   om.key-manager   OM Key Manager
>   om.protocol.client   Ozone Manager RPC endpoint
>   om.http  Planned insight point which is not yet 
> implemented.
>   datanode.pipeline[id]More information about one ratis 
> datanode ring.
>   datanode.rocksdb More information about one ratis 
> datanode ring.
>   s3g.http Planned insight point which is not yet 
> implemented.
> {code}
> Insight points can define configuration, metrics and/or logs. Configuration 
> can be displayed based on the configuration objects:
> {code}
> ozone insight config scm.protocol.block-location
> Configuration for `scm.protocol.block-location` (SCM Block location protocol 
> endpoint)
> >>> ozone.scm.block.client.bind.host
>default: 0.0.0.0
>current: 0.0.0.0
> The hostname or IP address used by the SCM block client  endpoint to bind
> >>> ozone.scm.block.client.port
>default: 9863
>current: 9863
> The port number of the Ozone SCM block client service.
> >>> ozone.scm.block.client.address
>default: ${ozone.scm.client.address}
>current: scm
> The address of the Ozone SCM block client service. If not defined value of 
> ozone.scm.client.address is used
> {code}
> Metrics can be retrieved from the prometheus entrypoint:
> {code}
> ozone insight metrics scm.protocol.block-location
> Metrics for `scm.protocol.block-location` (SCM Block location protocol 
> endpoint)
> RPC connections
>   Open connections: 0
>   Dropped connections: 0
>   Received bytes: 0
>   Sent bytes: 0
> RPC queue
>   RPC average queue time: 0.0
>   RPC call queue length: 0
> RPC performance
>   RPC processing time average: 0.0
>   Number of slow calls: 0
> Message type counters
>   Number of AllocateScmBlock: 0
>   Number of DeleteScmKeyBlocks: 0
>   Number of GetScmInfo: 2
>   Number of SortDatanodes: 0
> {code}
> Log levels can be adjusted with the existing logLevel servlet and can be 
> collected / streamed via a simple logstream servlet:
> {code}
> ozone insight log scm.node-manager
> [SCM] 2019-08-08 12:42:37,392 
> [DEBUG|org.apache.hadoop.hdds.scm.node.SCMNodeManager|SCMNodeManager] 
> Processing node report from [datanode=ozone_datanode_1.ozone_default]
> [SCM] 2019-08-08 12:43:37,392 
> [DEBUG|org.apache.hadoop.hdds.scm.node.SCMNodeManager|SCMNodeManager] 
> Processing node report from [datanode=ozone_datanode_1.ozone_default]
> [SCM] 2019-08-08 12:44:37,392 
> 

[jira] [Created] (HDDS-2066) Improve the observability inside Ozone

2019-09-02 Thread Elek, Marton (Jira)
Elek, Marton created HDDS-2066:
--

 Summary: Improve the observability inside Ozone
 Key: HDDS-2066
 URL: https://issues.apache.org/jira/browse/HDDS-2066
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Tools
Reporter: Elek, Marton
Assignee: Elek, Marton


Improving the observability is a key requirement to achieve better correctness 
and performance with Ozone.

This jira collects some of the tasks which can provide better visibility into the 
ozone internals.

We have two main tools:

 * Distributed tracing (opentracing) can help to detect performance 
bottlenecks
 * The Ozone insight tool (a simple cli frontend for Hadoop metrics and log4j 
logging) can help to get a better understanding of the current state/behavior 
of specific components.

Both of them can be improved to make them more powerful.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2054) Bad preamble for HttpChannelOverHttp In the Ozone

2019-09-01 Thread Elek, Marton (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16920620#comment-16920620
 ] 

Elek, Marton commented on HDDS-2054:


Thank you very much for reporting this problem [~Jack-Lee].

I tried to reproduce it but it worked well for me.

I have only two ideas:

 * Ozone may start slowly. The services need 10-30 seconds to start up. Can 
you please confirm that you still get the same error after 1-2 minutes?
 * AWS credentials may be missing, or in the wrong format: I am not sure about 
this, as I have a configured, valid AWS key by default. I tried to delete it but 
didn't get the same error message. Can you please confirm whether you set your aws 
credentials (or you already had them)?

You can also upload a detailed log by using the debug flag:

{code}
aws s3api --debug --endpoint http://192.168.99.100:9878 create-bucket --bucket 
bucket1
{code}

It would help me to reproduce it locally as the whole request is printed out to 
the console with all the headers. (If you have any sensitive data in the log, 
please remove it. The AWS access key id can be there, but the secret is not; 
only the signature which is generated from the secret.)

> Bad preamble for HttpChannelOverHttp In the Ozone
> -
>
> Key: HDDS-2054
> URL: https://issues.apache.org/jira/browse/HDDS-2054
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Ozone Client, Ozone Filesystem, Ozone Manager
>Affects Versions: 0.4.0
> Environment: MacOS
>Reporter: lqjacklee
>Priority: Minor
>
> Follow the guide : 
> https://cwiki.apache.org/confluence/display/HADOOP/Running+via+DockerHub 
> I have deploy the ozone in the docker. then execute the command 
> aws s3api --endpoint http://192.168.99.100:9878 create-bucket --bucket bucket1
> The logs shows :
> 2019-08-29 02:07:13 WARN  HttpParser:1454 - bad HTTP parsed: 400 Bad preamble 
> for HttpChannelOverHttp@49ddb402{r=0,c=false,a=IDLE,uri=null}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


