[jira] [Updated] (HDDS-2452) Wrong condition for re-scheduling in ReportPublisher

2019-11-13 Thread Sandeep Nemuri (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-2452:
-
Status: Patch Available  (was: Open)

> Wrong condition for re-scheduling in ReportPublisher
> 
>
> Key: HDDS-2452
> URL: https://issues.apache.org/jira/browse/HDDS-2452
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Attila Doroszlai
>Assignee: Sandeep Nemuri
>Priority: Trivial
>  Labels: newbie, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> It seems the condition for scheduling the next run of {{ReportPublisher}} is 
> wrong:
> {code:title=https://github.com/apache/hadoop-ozone/blob/master/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisher.java#L74-L76}
> if (!executor.isShutdown() ||
> !(context.getState() == DatanodeStates.SHUTDOWN)) {
>   executor.schedule(this,
> {code}
> Given the condition above, the task may be scheduled again if the executor is 
> shut down but the state machine is not set to shutdown (or vice versa).  I 
> think the condition should have an {{&&}}, not {{||}}.  (Currently it is 
> unlikely to happen, since [context state is set to shutdown before the report 
> executor|https://github.com/apache/hadoop-ozone/blob/f928a0bdb4ea2e5195da39256c6dda9f1c855649/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java#L392-L393].)
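> For illustration, a minimal sketch of the corrected check (the schedule 
> arguments are assumed from the surrounding code, not quoted from it):
> {code:java}
> // Reschedule only while BOTH the executor and the datanode state machine
> // are still running; with || a single "still running" signal is enough.
> if (!executor.isShutdown() &&
>     context.getState() != DatanodeStates.SHUTDOWN) {
>   executor.schedule(this, getReportFrequency(), TimeUnit.MILLISECONDS);
> }
> {code}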
> [~nanda], can you please confirm if this is a typo or intentional?






[jira] [Assigned] (HDDS-2371) Print out the ozone version during the startup instead of hadoop version

2019-11-13 Thread Sandeep Nemuri (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri reassigned HDDS-2371:


Assignee: Sandeep Nemuri

> Print out the ozone version during the startup instead of hadoop version
> 
>
> Key: HDDS-2371
> URL: https://issues.apache.org/jira/browse/HDDS-2371
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
>
> Ozone components print out the current version during startup:
>  
> {code:java}
> STARTUP_MSG: Starting StorageContainerManager
> STARTUP_MSG:   host = om/10.8.0.145
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 3.2.0
> STARTUP_MSG:   build = https://github.com/apache/hadoop.git -r 
> e97acb3bd8f3befd27418996fa5d4b50bf2e17bf; compiled by 'sunilg' on 
> 2019-01-{code}
> But as visible above, the build / compile information is about hadoop, not 
> about hadoop-ozone.
> (And personally I prefer to use a github-compatible url instead of the 
> SVN-style -r. Something like:
> {code:java}
> STARTUP_MSG: build =  
> https://github.com/apache/hadoop-ozone/commit/8541c5694efebb58f53cf4665d3e4e6e4a12845c
>  ; compiled by '' on ...{code}
> )
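> A hedged sketch of such a startup line, assuming a hypothetical 
> {{OzoneVersionInfo}} helper analogous to 
> {{org.apache.hadoop.util.VersionInfo}} but backed by ozone's build info:
> {code:java}
> // getUrl(), getRevision(), getUser() and getDate() mirror the getters of
> // hadoop's VersionInfo; here they would read ozone's build properties.
> LOG.info("STARTUP_MSG: build = {}/commit/{} ; compiled by '{}' on {}",
>     OzoneVersionInfo.getUrl(), OzoneVersionInfo.getRevision(),
>     OzoneVersionInfo.getUser(), OzoneVersionInfo.getDate());
> {code}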






[jira] [Assigned] (HDDS-2452) Wrong condition for re-scheduling in ReportPublisher

2019-11-13 Thread Sandeep Nemuri (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri reassigned HDDS-2452:


Assignee: Sandeep Nemuri

> Wrong condition for re-scheduling in ReportPublisher
> 
>
> Key: HDDS-2452
> URL: https://issues.apache.org/jira/browse/HDDS-2452
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Attila Doroszlai
>Assignee: Sandeep Nemuri
>Priority: Trivial
>  Labels: newbie
>
> It seems the condition for scheduling the next run of {{ReportPublisher}} is 
> wrong:
> {code:title=https://github.com/apache/hadoop-ozone/blob/master/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisher.java#L74-L76}
> if (!executor.isShutdown() ||
> !(context.getState() == DatanodeStates.SHUTDOWN)) {
>   executor.schedule(this,
> {code}
> Given the condition above, the task may be scheduled again if the executor is 
> shut down but the state machine is not set to shutdown (or vice versa).  I 
> think the condition should have an {{&&}}, not {{||}}.  (Currently it is 
> unlikely to happen, since [context state is set to shutdown before the report 
> executor|https://github.com/apache/hadoop-ozone/blob/f928a0bdb4ea2e5195da39256c6dda9f1c855649/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java#L392-L393].)
> [~nanda], can you please confirm if this is a typo or intentional?






[jira] [Updated] (HDDS-2460) Default checksum type is wrong in description

2019-11-13 Thread Sandeep Nemuri (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-2460:
-
Status: Patch Available  (was: Open)

> Default checksum type is wrong in description
> -
>
> Key: HDDS-2460
> URL: https://issues.apache.org/jira/browse/HDDS-2460
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Attila Doroszlai
>Assignee: Sandeep Nemuri
>Priority: Trivial
>  Labels: newbie, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Default client checksum type is CRC32, but the config item's description says 
> it's SHA256 (leftover from HDDS-1149).  The description should be updated to 
> match the actual default value.
> {code:title=https://github.com/apache/hadoop-ozone/blob/a6f80c096b5320f50b6e9e9b4ba5f7c7e3544385/hadoop-hdds/common/src/main/resources/ozone-default.xml#L1489-L1497}
> <property>
>   <name>ozone.client.checksum.type</name>
>   <value>CRC32</value>
>   <tag>OZONE, CLIENT, MANAGEMENT</tag>
>   <description>The checksum type [NONE/ CRC32/ CRC32C/ SHA256/ MD5] determines
>     which algorithm would be used to compute checksum for chunk data.
>     Default checksum type is SHA256.
>   </description>
> </property>
> {code}






[jira] [Assigned] (HDDS-2460) Default checksum type is wrong in description

2019-11-13 Thread Sandeep Nemuri (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri reassigned HDDS-2460:


Assignee: Sandeep Nemuri

> Default checksum type is wrong in description
> -
>
> Key: HDDS-2460
> URL: https://issues.apache.org/jira/browse/HDDS-2460
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Attila Doroszlai
>Assignee: Sandeep Nemuri
>Priority: Trivial
>  Labels: newbie
>
> Default client checksum type is CRC32, but the config item's description says 
> it's SHA256 (leftover from HDDS-1149).  The description should be updated to 
> match the actual default value.
> {code:title=https://github.com/apache/hadoop-ozone/blob/a6f80c096b5320f50b6e9e9b4ba5f7c7e3544385/hadoop-hdds/common/src/main/resources/ozone-default.xml#L1489-L1497}
> <property>
>   <name>ozone.client.checksum.type</name>
>   <value>CRC32</value>
>   <tag>OZONE, CLIENT, MANAGEMENT</tag>
>   <description>The checksum type [NONE/ CRC32/ CRC32C/ SHA256/ MD5] determines
>     which algorithm would be used to compute checksum for chunk data.
>     Default checksum type is SHA256.
>   </description>
> </property>
> {code}






[jira] [Updated] (HDDS-2403) Remove leftover reference to OUTPUT_FILE from shellcheck.sh

2019-11-13 Thread Sandeep Nemuri (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-2403:
-
Status: Patch Available  (was: Open)

> Remove leftover reference to OUTPUT_FILE from shellcheck.sh
> ---
>
> Key: HDDS-2403
> URL: https://issues.apache.org/jira/browse/HDDS-2403
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Sandeep Nemuri
>Priority: Trivial
>  Labels: newbie, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{shellcheck.sh}} gives the following error (but works fine otherwise):
> {noformat}
> $ hadoop-ozone/dev-support/checks/shellcheck.sh
> hadoop-ozone/dev-support/checks/shellcheck.sh: line 23: : No such file or 
> directory
> ...
> {noformat}
> This happens because the {{OUTPUT_FILE}} variable is undefined:
> {code:title=https://github.com/apache/hadoop-ozone/blob/6b2cda125b3647870ef5b01cf64e3b3e4cdc55db/hadoop-ozone/dev-support/checks/shellcheck.sh#L23}
> echo "" > "$OUTPUT_FILE"
> {code}
> The command can be removed.






[jira] [Assigned] (HDDS-2403) Remove leftover reference to OUTPUT_FILE from shellcheck.sh

2019-11-11 Thread Sandeep Nemuri (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri reassigned HDDS-2403:


Assignee: Sandeep Nemuri

> Remove leftover reference to OUTPUT_FILE from shellcheck.sh
> ---
>
> Key: HDDS-2403
> URL: https://issues.apache.org/jira/browse/HDDS-2403
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Sandeep Nemuri
>Priority: Trivial
>  Labels: newbie
>
> {{shellcheck.sh}} gives the following error (but works fine otherwise):
> {noformat}
> $ hadoop-ozone/dev-support/checks/shellcheck.sh
> hadoop-ozone/dev-support/checks/shellcheck.sh: line 23: : No such file or 
> directory
> ...
> {noformat}
> This happens because the {{OUTPUT_FILE}} variable is undefined:
> {code:title=https://github.com/apache/hadoop-ozone/blob/6b2cda125b3647870ef5b01cf64e3b3e4cdc55db/hadoop-ozone/dev-support/checks/shellcheck.sh#L23}
> echo "" > "$OUTPUT_FILE"
> {code}
> The command can be removed.






[jira] [Assigned] (HDDS-2235) Ozone Datanode web page doesn't exist

2019-10-16 Thread Sandeep Nemuri (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri reassigned HDDS-2235:


Assignee: Sandeep Nemuri

> Ozone Datanode web page doesn't exist
> -
>
> Key: HDDS-2235
> URL: https://issues.apache.org/jira/browse/HDDS-2235
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Sandeep Nemuri
>Priority: Major
>
> On trying to access the dn UI, the following error is seen.
> http://dn_ip:9882/
> {code}
> HTTP ERROR 403
> Problem accessing /. Reason:
> Forbidden
> {code}






[jira] [Assigned] (HDDS-2218) Use OZONE_CLASSPATH instead of HADOOP_CLASSPATH

2019-10-16 Thread Sandeep Nemuri (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri reassigned HDDS-2218:


Assignee: Sandeep Nemuri

> Use OZONE_CLASSPATH instead of HADOOP_CLASSPATH
> ---
>
> Key: HDDS-2218
> URL: https://issues.apache.org/jira/browse/HDDS-2218
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: docker
>Reporter: Marton Elek
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
>
> HADOOP_CLASSPATH is the standard way to add additional jar files to the 
> classpath of the mapreduce/spark/... jobs. If something is added to the 
> HADOOP_CLASSPATH, then it should be on the classpath of the classic hadoop 
> daemons.
> But for the Ozone components we don't need any new jar files (cloud 
> connectors, libraries). I think it's safer to separate HADOOP_CLASSPATH 
> from OZONE_CLASSPATH. If something is really needed on the classpath for Ozone 
> daemons, the dedicated environment variable should be used.
>  
> Most probably it can be fixed in
> hadoop-hdds/common/src/main/bin/hadoop-functions.sh.
> And the hadoop-ozone/dev/src/main/compose files should also be checked (some 
> of them contain HADOOP_CLASSPATH).






[jira] [Commented] (HDDS-1717) MR Job fails with exception

2019-06-24 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16871063#comment-16871063
 ] 

Sandeep Nemuri commented on HDDS-1717:
--

I guess this is similar to HDDS-1305?

> MR Job fails with exception
> ---
>
> Key: HDDS-1717
> URL: https://issues.apache.org/jira/browse/HDDS-1717
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.4.0
> Environment: Ozone : 10 Node (1 SCM, 1 OM, 10 DN)
> HDP : 5 Node
> Both clusters are on separate nodes and hosted on HDP Ycloud.
>Reporter: Soumitra Sulav
>Priority: Blocker
> Attachments: syslog_mapred.err
>
>
> Mapreduce Jobs are failing with exception ??Couldn't create protocol class 
> org.apache.hadoop.ozone.client.rpc.RpcClient exception??
> The Ozone hadoop-ozone-filesystem-lib-current.jar was copied to the HDP 
> cluster's hadoop and mapreduce classpath under:
> {code:java}
> /usr/hdp/3.1.0.0-78/hadoop/lib/hadoop-ozone-filesystem-lib-current-0.5.0-SNAPSHOT.jar
> /usr/hdp/3.1.0.0-78/hadoop-mapreduce/hadoop-ozone-filesystem-lib-current-0.5.0-SNAPSHOT.jar
> {code}
> Excerpt from exception :
> {code:java}
> 2019-06-21 10:07:57,982 ERROR [main] 
> org.apache.hadoop.ozone.client.OzoneClientFactory: Couldn't create protocol 
> class org.apache.hadoop.ozone.client.rpc.RpcClient exception:
> java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
>   at 
> org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl.<init>(BasicOzoneClientAdapterImpl.java:134)
>   at 
> org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl.<init>(OzoneClientAdapterImpl.java:50)
>   at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.createAdapter(OzoneFileSystem.java:103)
>   at 
> org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.initialize(BasicOzoneFileSystem.java:143)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.<init>(FileOutputCommitter.java:160)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.<init>(FileOutputCommitter.java:116)
>   at 
> org.apache.hadoop.mapreduce.lib.output.PathOutputCommitterFactory.createFileOutputCommitter(PathOutputCommitterFactory.java:134)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitterFactory.createOutputCommitter(FileOutputCommitterFactory.java:35)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.getOutputCommitter(FileOutputFormat.java:338)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$3.call(MRAppMaster.java:552)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$3.call(MRAppMaster.java:534)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1802)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:534)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:311)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$6.run(MRAppMaster.java:1760)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1757)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1691)
> Caused by: java.lang.VerifyError: Cannot inherit from final class
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at 
> 

[jira] [Commented] (HDDS-1305) Robot test containers: hadoop client can't access o3fs

2019-04-22 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16823021#comment-16823021
 ] 

Sandeep Nemuri commented on HDDS-1305:
--

[~jnp], it's Hadoop 3.1.0 (the apache docker image is being used here).
{code:java}
docker exec -it ozonefs_hadoop3_1 /bin/bash
bash-4.4$ hadoop version
Hadoop 3.1.0
Source code repository https://github.com/apache/hadoop -r 
16b70619a24cdcf5d3b0fcf4b58ca77238ccbe6d
Compiled by centos on 2018-03-30T00:00Z
Compiled with protoc 2.5.0
From source with checksum 14182d20c972b3e2105580a1ad6990
This command was run using 
/opt/hadoop/share/hadoop/common/hadoop-common-3.1.0.jar
{code}
{code:java}
bash-4.4$ hadoop classpath
/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/*:/opt/hadoop/share/hadoop/common/*:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/*:/opt/hadoop/share/hadoop/hdfs/*:/opt/hadoop/share/hadoop/mapreduce/*:/opt/hadoop/share/hadoop/yarn:/opt/hadoop/share/hadoop/yarn/lib/*:/opt/hadoop/share/hadoop/yarn/*:/opt/ozone/share/ozone/lib/hadoop-ozone-filesystem-lib-current-0.5.0-SNAPSHOT.jar
bash-4.4$

{code}

> Robot test containers: hadoop client can't access o3fs
> --
>
> Key: HDDS-1305
> URL: https://issues.apache.org/jira/browse/HDDS-1305
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Sandeep Nemuri
>Assignee: Anu Engineer
>Priority: Major
> Attachments: run.log
>
>
> Run the robot test using:
> {code:java}
> ./test.sh --keep --env ozonefs
> {code}
> Log in to the OM container and check if the desired volume/bucket/key got 
> created by the robot tests.
> {code:java}
> [root@o3new ~]$ docker exec -it ozonefs_om_1 /bin/bash
> bash-4.2$ ozone fs -ls o3fs://bucket1.fstest/
> Found 3 items
> -rw-rw-rw-   1 hadoop hadoop  22990 2019-03-15 17:28 
> o3fs://bucket1.fstest/KEY.txt
> drwxrwxrwx   - hadoop hadoop  0 1970-01-01 00:00 
> o3fs://bucket1.fstest/testdir
> drwxrwxrwx   - hadoop hadoop  0 2019-03-15 17:27 
> o3fs://bucket1.fstest/testdir1
> {code}
> {code:java}
> [root@o3new ~]$ docker exec -it ozonefs_hadoop3_1 /bin/bash
> bash-4.4$ hadoop classpath
> /opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/*:/opt/hadoop/share/hadoop/common/*:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/*:/opt/hadoop/share/hadoop/hdfs/*:/opt/hadoop/share/hadoop/mapreduce/*:/opt/hadoop/share/hadoop/yarn:/opt/hadoop/share/hadoop/yarn/lib/*:/opt/hadoop/share/hadoop/yarn/*:/opt/ozone/share/ozone/lib/hadoop-ozone-filesystem-lib-current-0.5.0-SNAPSHOT.jar
> bash-4.4$ hadoop fs -ls o3fs://bucket1.fstest/
> 2019-03-18 19:12:42 INFO  Configuration:3204 - Removed undeclared tags:
> 2019-03-18 19:12:42 ERROR OzoneClientFactory:294 - Couldn't create protocol 
> class org.apache.hadoop.ozone.client.rpc.RpcClient exception:
> java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
>   at 
> org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl.<init>(OzoneClientAdapterImpl.java:127)
>   at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:189)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:249)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:232)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:104)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
> 

[jira] [Updated] (HDDS-1417) After successfully importing a container, datanode should delete the container tar.gz file from working directory

2019-04-10 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-1417:
-
Priority: Blocker  (was: Major)

> After successfully importing a container, datanode should delete the 
> container tar.gz file from working directory
> -
>
> Key: HDDS-1417
> URL: https://issues.apache.org/jira/browse/HDDS-1417
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Blocker
>
> Whenever we want to replicate or copy a container from one datanode to 
> another, we compress the container data and create a tar.gz file. This tar 
> file is then copied from the source datanode to the destination datanode. On 
> the destination, we use a temporary working directory where this tar file is 
> copied. Once the copying is complete, we import the container. After importing 
> the container we no longer need the tar file in the working directory of the 
> destination datanode; it has to be deleted.
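> A minimal sketch of the missing cleanup step (the directory layout and the 
> import hook below are assumptions for illustration):
> {code:java}
> import java.io.IOException;
> import java.nio.file.Files;
> import java.nio.file.Path;
> 
> class ImportCleanup {
>   // Delete the downloaded tarball once the container import succeeds.
>   static void importAndCleanUp(Path workDir, long containerId)
>       throws IOException {
>     Path tarball = workDir.resolve("container-" + containerId + ".tar.gz");
>     // ... import the container from the tarball (datanode-specific) ...
>     Files.deleteIfExists(tarball); // the cleanup this issue asks for
>   }
> }
> {code}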






[jira] [Updated] (HDDS-1315) datanode process dies if it runs out of disk space

2019-04-05 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-1315:
-
Description: 
As of now the datanode process dies if it runs out of disk space, which makes 
the data present in that DN inaccessible.

datanode logs: 

{code:java}
2019-03-11 04:01:27,141 ERROR org.apache.ratis.server.storage.RaftLogWorker: 
Terminating with exit status 1: 
fb635e52-e2eb-46b1-b109-a831c10d3bf8-RaftLogWorker failed.
java.io.FileNotFoundException: 
/opt/data/meta/ratis/68e315f3-312c-4c9f-a7bd-590194deb5e7/current/log_inprogress_8705582
 (No space left on device)
at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
at 
org.apache.ratis.server.storage.LogOutputStream.<init>(LogOutputStream.java:66)
at 
org.apache.ratis.server.storage.RaftLogWorker$StartLogSegment.execute(RaftLogWorker.java:436)
at 
org.apache.ratis.server.storage.RaftLogWorker.run(RaftLogWorker.java:219)
at java.lang.Thread.run(Thread.java:745)

{code}


{code:java}
2019-03-11 04:01:25,531 [grpc-default-executor-9192] INFO   - Operation: 
WriteChunk : Trace ID:  : Message: java.nio.file.FileSystemException: 
/opt/data/hdds/a83a7108-91c7-4357-9f68-46753641d429/current/containerDir0/88/chunks/ba29bb91559179cbf7ab5d86cac47ba1_stream_9fb1e802-dca6-46e0-be12-5ac743d8563d_chunk_1.tmp.11076.8705539:
 No space left on device : Result: IO_EXCEPTION
2019-03-11 04:01:25,543 [grpc-default-executor-9192] INFO   - Operation: 
WriteChunk : Trace ID:  : Message: java.nio.file.FileSystemException: 
/opt/data/hdds/a83a7108-91c7-4357-9f68-46753641d429/current/containerDir0/86/chunks/19ef3c1d36eadbc9538116c68c6e494f_stream_c58e8b91-dc18-4b61-918f-ab1eeda41c02_chunk_1.tmp.11076.8705540:
 No space left on device : Result: IO_EXCEPTION
2019-03-11 04:01:25,546 [grpc-default-executor-9192] INFO   - Operation: 
WriteChunk : Trace ID:  : Message: java.nio.file.FileSystemException: 
/opt/data/hdds/a83a7108-91c7-4357-9f68-46753641d429/current/containerDir0/87/chunks/83a6a81f2f703f49a7e0a1413eebfc4c_stream_cae1ed30-c613-4278-8404-c9e37d0b690f_chunk_1.tmp.11076.8705541:
 No space left on device : Result: IO_EXCEPTION

{code}


  was:
As of now the datanode process dies if it runs out of disk space which makes 
the data present in that DN is inaccessible.

datanode logs: 

{code:java}
2019-03-11 04:01:27,141 ERROR org.apache.ratis.server.storage.RaftLogWorker: 
Terminating with exit status 1: 
fb635e52-e2eb-46b1-b109-a831c10d3bf8-RaftLogWorker failed.
java.io.FileNotFoundException: 
/opt/data/meta/ratis/68e315f3-312c-4c9f-a7bd-590194deb5e7/current/log_inprogress_8705582
 (No space left on device)
at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
at 
org.apache.ratis.server.storage.LogOutputStream.<init>(LogOutputStream.java:66)
at 
org.apache.ratis.server.storage.RaftLogWorker$StartLogSegment.execute(RaftLogWorker.java:436)
at 
org.apache.ratis.server.storage.RaftLogWorker.run(RaftLogWorker.java:219)
at java.lang.Thread.run(Thread.java:745)

{code}


{code:java}
2019-03-11 04:01:25,531 [grpc-default-executor-9192] INFO   - Operation: 
WriteChunk : Trace ID:  : Message: java.nio.file.FileSystemException: 
/opt/data/hdds/a83a7108-91c7-4357-9f68-46753641d429/current/containerDir0/88/chunks/ba29bb91559179cbf7ab5d86cac47ba1_stream_9fb1e802-dca6-46e0-be12-5ac743d8563d_chunk_1.tmp.11076.8705539:
 No space left on device : Result: IO_EXCEPTION
2019-03-11 04:01:25,543 [grpc-default-executor-9192] INFO   - Operation: 
WriteChunk : Trace ID:  : Message: java.nio.file.FileSystemException: 
/opt/data/hdds/a83a7108-91c7-4357-9f68-46753641d429/current/containerDir0/86/chunks/19ef3c1d36eadbc9538116c68c6e494f_stream_c58e8b91-dc18-4b61-918f-ab1eeda41c02_chunk_1.tmp.11076.8705540:
 No space left on device : Result: IO_EXCEPTION
2019-03-11 04:01:25,546 [grpc-default-executor-9192] INFO   - Operation: 
WriteChunk : Trace ID:  : Message: java.nio.file.FileSystemException: 
/opt/data/hdds/a83a7108-91c7-4357-9f68-46753641d429/current/containerDir0/87/chunks/83a6a81f2f703f49a7e0a1413eebfc4c_stream_cae1ed30-c613-4278-8404-c9e37d0b690f_chunk_1.tmp.11076.8705541:
 No space left on device : Result: IO_EXCEPTION

{code}



> datanode process dies if it runs out of disk space
> --
>
> Key: HDDS-1315
> URL: https://issues.apache.org/jira/browse/HDDS-1315
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Sandeep Nemuri
>Assignee: Supratim Deka
>Priority: 

[jira] [Updated] (HDDS-1363) ozone.metadata.dirs doesn't pick multiple dirs

2019-04-01 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-1363:
-
Summary: ozone.metadata.dirs doesn't pick multiple dirs  (was: 
ozone.metadata.dirs doesn't pick comma(,) separated paths)

> ozone.metadata.dirs doesn't pick multiple dirs
> --
>
> Key: HDDS-1363
> URL: https://issues.apache.org/jira/browse/HDDS-1363
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Sandeep Nemuri
>Priority: Major
>
> {{ozone.metadata.dirs}} doesn't pick comma(,)-separated paths.
> It only picks one path, despite the plural property name 
> _ozone.metadata.dir{color:#FF0000}s{color}_.
> {code:java}
> <property>
>   <name>ozone.metadata.dirs</name>
>   <value>/data/data1/meta,/home/hdfs/data/meta</value>
> </property>
> {code}
> {code:java}
> 2019-03-31 18:44:54,824 WARN server.ServerUtils: ozone.scm.db.dirs is not 
> configured. We recommend adding this setting. Falling back to 
> ozone.metadata.dirs instead.
> SCM initialization succeeded.Current cluster id for 
> sd=/data/data1/meta,/home/hdfs/data/meta/scm;cid=CID-1ad502d1-0104-4055-838b-1208ab78f35c
> 2019-03-31 18:44:55,079 INFO server.StorageContainerManager: SHUTDOWN_MSG:
> {code}
> {code:java}
> [hdfs@localhost ozone-0.5.0-SNAPSHOT]$ ls 
> //data/data1/meta,/home/hdfs/data/meta/scm/current/VERSION
> VERSION
> {code}
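> A minimal sketch of how the value could be consumed as multiple directories 
> via the standard Hadoop {{Configuration}} API (the setup around it is 
> illustrative, not the actual SCM code):
> {code:java}
> // getTrimmedStrings splits the value on commas, so each entry becomes one
> // metadata directory instead of the whole value acting as a single path.
> OzoneConfiguration conf = new OzoneConfiguration();
> for (String dir : conf.getTrimmedStrings("ozone.metadata.dirs")) {
>   Files.createDirectories(Paths.get(dir));
> }
> {code}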






[jira] [Created] (HDDS-1363) ozone.metadata.dirs doesn't pick comma(,) separated paths

2019-04-01 Thread Sandeep Nemuri (JIRA)
Sandeep Nemuri created HDDS-1363:


 Summary: ozone.metadata.dirs doesn't pick comma(,) separated paths
 Key: HDDS-1363
 URL: https://issues.apache.org/jira/browse/HDDS-1363
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Reporter: Sandeep Nemuri


{{ozone.metadata.dirs}} doesn't pick comma(,)-separated paths.
It only picks one path, despite the plural property name 
_ozone.metadata.dir{color:#FF0000}s{color}_.
{code:java}
<property>
  <name>ozone.metadata.dirs</name>
  <value>/data/data1/meta,/home/hdfs/data/meta</value>
</property>
{code}
{code:java}
2019-03-31 18:44:54,824 WARN server.ServerUtils: ozone.scm.db.dirs is not 
configured. We recommend adding this setting. Falling back to 
ozone.metadata.dirs instead.
SCM initialization succeeded.Current cluster id for 
sd=/data/data1/meta,/home/hdfs/data/meta/scm;cid=CID-1ad502d1-0104-4055-838b-1208ab78f35c
2019-03-31 18:44:55,079 INFO server.StorageContainerManager: SHUTDOWN_MSG:
{code}
{code:java}
[hdfs@localhost ozone-0.5.0-SNAPSHOT]$ ls 
//data/data1/meta,/home/hdfs/data/meta/scm/current/VERSION
VERSION
{code}






[jira] [Commented] (HDDS-1310) In datanode once a container becomes unhealthy, datanode restart fails.

2019-03-26 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16801569#comment-16801569
 ] 

Sandeep Nemuri commented on HDDS-1310:
--

Thank you [~nandakumar131] and [~ajayydv]. 

> In datanode once a container becomes unhealthy, datanode restart fails.
> ---
>
> Key: HDDS-1310
> URL: https://issues.apache.org/jira/browse/HDDS-1310
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Blocker
> Fix For: 0.4.0, 0.5.0
>
> Attachments: HDDS-1310.001.patch, HDDS-1310.002.patch
>
>
> When a container is marked as {{UNHEALTHY}} in a datanode, subsequent restart 
> of that datanode fails as it cannot generate ContainerReports anymore. 
> Unhealthy state of a container is not handled in ContainerReport generation 
> inside a datanode.
> We get the below exception when a datanode tries to generate the 
> ContainerReport which contains unhealthy container(s)
> {noformat}
> 2019-03-19 13:51:13,646 [Datanode State Machine Thread - 0] ERROR  - 
> Unable to communicate to SCM server at x.x.xxx:9861 for past 3300 
> seconds.
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Invalid Container state found: 86
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getHddsState(KeyValueContainer.java:623)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getContainerReport(KeyValueContainer.java:593)
> at 
> org.apache.hadoop.ozone.container.common.impl.ContainerSet.getContainerReport(ContainerSet.java:204)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerController.getContainerReport(ContainerController.java:82)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:114)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:47)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
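> For illustration, a minimal sketch of the mapping gap (the enum and method 
> names are assumptions modeled on the stack trace above, not the actual 
> datanode code):
> {code:java}
> enum ContainerState { OPEN, CLOSING, CLOSED, UNHEALTHY }
> 
> static String toReportState(ContainerState state) {
>   switch (state) {
>     case OPEN:      return "OPEN";
>     case CLOSING:   return "CLOSING";
>     case CLOSED:    return "CLOSED";
>     case UNHEALTHY: return "UNHEALTHY"; // the branch missing in this bug
>     default: // an unhandled state surfaces as the exception shown above
>       throw new IllegalStateException("Invalid Container state found: " + state);
>   }
> }
> {code}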






[jira] [Commented] (HDDS-1310) In datanode once a container becomes unhealthy, datanode restart fails.

2019-03-25 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16800977#comment-16800977
 ] 

Sandeep Nemuri commented on HDDS-1310:
--

[~ajayydv], only {{TestFailureHandlingByClient}} is failing when tested locally 
and even that failure is not related to this patch.

 

> In datanode once a container becomes unhealthy, datanode restart fails.
> ---
>
> Key: HDDS-1310
> URL: https://issues.apache.org/jira/browse/HDDS-1310
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Blocker
> Attachments: HDDS-1310.001.patch, HDDS-1310.002.patch
>
>
> When a container is marked as {{UNHEALTHY}} in a datanode, subsequent restart 
> of that datanode fails as it cannot generate ContainerReports anymore. 
> Unhealthy state of a container is not handled in ContainerReport generation 
> inside a datanode.
> We get the below exception when a datanode tries to generate the 
> ContainerReport which contains unhealthy container(s)
> {noformat}
> 2019-03-19 13:51:13,646 [Datanode State Machine Thread - 0] ERROR  - 
> Unable to communicate to SCM server at x.x.xxx:9861 for past 3300 
> seconds.
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Invalid Container state found: 86
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getHddsState(KeyValueContainer.java:623)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getContainerReport(KeyValueContainer.java:593)
> at 
> org.apache.hadoop.ozone.container.common.impl.ContainerSet.getContainerReport(ContainerSet.java:204)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerController.getContainerReport(ContainerController.java:82)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:114)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:47)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}






[jira] [Commented] (HDDS-1310) In datanode once a container becomes unhealthy, datanode restart fails.

2019-03-20 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797487#comment-16797487
 ] 

Sandeep Nemuri commented on HDDS-1310:
--

Failed junit tests seem unrelated to this patch. 

Attaching v2 with improved test: [^HDDS-1310.002.patch] 

> In datanode once a container becomes unhealthy, datanode restart fails.
> ---
>
> Key: HDDS-1310
> URL: https://issues.apache.org/jira/browse/HDDS-1310
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Blocker
> Attachments: HDDS-1310.001.patch, HDDS-1310.002.patch
>
>
> When a container is marked as {{UNHEALTHY}} in a datanode, subsequent restart 
> of that datanode fails as it cannot generate ContainerReports anymore. 
> Unhealthy state of a container is not handled in ContainerReport generation 
> inside a datanode.
> We get the below exception when a datanode tries to generate the 
> ContainerReport which contains unhealthy container(s)
> {noformat}
> 2019-03-19 13:51:13,646 [Datanode State Machine Thread - 0] ERROR  - 
> Unable to communicate to SCM server at x.x.xxx:9861 for past 3300 
> seconds.
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Invalid Container state found: 86
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getHddsState(KeyValueContainer.java:623)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getContainerReport(KeyValueContainer.java:593)
> at 
> org.apache.hadoop.ozone.container.common.impl.ContainerSet.getContainerReport(ContainerSet.java:204)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerController.getContainerReport(ContainerController.java:82)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:114)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:47)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}






[jira] [Updated] (HDDS-1310) In datanode once a container becomes unhealthy, datanode restart fails.

2019-03-20 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-1310:
-
Attachment: (was: HDDS-1310.002.patch)

> In datanode once a container becomes unhealthy, datanode restart fails.
> ---
>
> Key: HDDS-1310
> URL: https://issues.apache.org/jira/browse/HDDS-1310
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Blocker
> Attachments: HDDS-1310.001.patch, HDDS-1310.002.patch
>
>
> When a container is marked as {{UNHEALTHY}} in a datanode, subsequent restart 
> of that datanode fails as it cannot generate ContainerReports anymore. 
> Unhealthy state of a container is not handled in ContainerReport generation 
> inside a datanode.
> We get the below exception when a datanode tries to generate the 
> ContainerReport which contains unhealthy container(s)
> {noformat}
> 2019-03-19 13:51:13,646 [Datanode State Machine Thread - 0] ERROR  - 
> Unable to communicate to SCM server at x.x.xxx:9861 for past 3300 
> seconds.
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Invalid Container state found: 86
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getHddsState(KeyValueContainer.java:623)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getContainerReport(KeyValueContainer.java:593)
> at 
> org.apache.hadoop.ozone.container.common.impl.ContainerSet.getContainerReport(ContainerSet.java:204)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerController.getContainerReport(ContainerController.java:82)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:114)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:47)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}






[jira] [Updated] (HDDS-1310) In datanode once a container becomes unhealthy, datanode restart fails.

2019-03-20 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-1310:
-
Attachment: HDDS-1310.002.patch

> In datanode once a container becomes unhealthy, datanode restart fails.
> ---
>
> Key: HDDS-1310
> URL: https://issues.apache.org/jira/browse/HDDS-1310
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Blocker
> Attachments: HDDS-1310.001.patch, HDDS-1310.002.patch
>
>
> When a container is marked as {{UNHEALTHY}} in a datanode, subsequent restart 
> of that datanode fails as it cannot generate ContainerReports anymore. 
> Unhealthy state of a container is not handled in ContainerReport generation 
> inside a datanode.
> We get the below exception when a datanode tries to generate the 
> ContainerReport which contains unhealthy container(s)
> {noformat}
> 2019-03-19 13:51:13,646 [Datanode State Machine Thread - 0] ERROR  - 
> Unable to communicate to SCM server at x.x.xxx:9861 for past 3300 
> seconds.
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Invalid Container state found: 86
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getHddsState(KeyValueContainer.java:623)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getContainerReport(KeyValueContainer.java:593)
> at 
> org.apache.hadoop.ozone.container.common.impl.ContainerSet.getContainerReport(ContainerSet.java:204)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerController.getContainerReport(ContainerController.java:82)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:114)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:47)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}






[jira] [Updated] (HDDS-1310) In datanode once a container becomes unhealthy, datanode restart fails.

2019-03-20 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-1310:
-
Attachment: HDDS-1310.002.patch

> In datanode once a container becomes unhealthy, datanode restart fails.
> ---
>
> Key: HDDS-1310
> URL: https://issues.apache.org/jira/browse/HDDS-1310
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Blocker
> Attachments: HDDS-1310.001.patch, HDDS-1310.002.patch
>
>
> When a container is marked as {{UNHEALTHY}} in a datanode, subsequent restart 
> of that datanode fails as it cannot generate ContainerReports anymore. 
> Unhealthy state of a container is not handled in ContainerReport generation 
> inside a datanode.
> We get the below exception when a datanode tries to generate the 
> ContainerReport which contains unhealthy container(s)
> {noformat}
> 2019-03-19 13:51:13,646 [Datanode State Machine Thread - 0] ERROR  - 
> Unable to communicate to SCM server at x.x.xxx:9861 for past 3300 
> seconds.
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Invalid Container state found: 86
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getHddsState(KeyValueContainer.java:623)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getContainerReport(KeyValueContainer.java:593)
> at 
> org.apache.hadoop.ozone.container.common.impl.ContainerSet.getContainerReport(ContainerSet.java:204)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerController.getContainerReport(ContainerController.java:82)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:114)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:47)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}






[jira] [Updated] (HDDS-1310) In datanode once a container becomes unhealthy, datanode restart fails.

2019-03-20 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-1310:
-
Attachment: HDDS-1310.001.patch

> In datanode once a container becomes unhealthy, datanode restart fails.
> ---
>
> Key: HDDS-1310
> URL: https://issues.apache.org/jira/browse/HDDS-1310
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Blocker
> Attachments: HDDS-1310.001.patch
>
>
> When a container is marked as {{UNHEALTHY}} in a datanode, subsequent restart 
> of that datanode fails as it cannot generate ContainerReports anymore. 
> Unhealthy state of a container is not handled in ContainerReport generation 
> inside a datanode.
> We get the below exception when a datanode tries to generate the 
> ContainerReport which contains unhealthy container(s)
> {noformat}
> 2019-03-19 13:51:13,646 [Datanode State Machine Thread - 0] ERROR  - 
> Unable to communicate to SCM server at x.x.xxx:9861 for past 3300 
> seconds.
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Invalid Container state found: 86
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getHddsState(KeyValueContainer.java:623)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getContainerReport(KeyValueContainer.java:593)
> at 
> org.apache.hadoop.ozone.container.common.impl.ContainerSet.getContainerReport(ContainerSet.java:204)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerController.getContainerReport(ContainerController.java:82)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:114)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:47)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}






[jira] [Created] (HDDS-1315) datanode process dies if it runs out of disk space

2019-03-20 Thread Sandeep Nemuri (JIRA)
Sandeep Nemuri created HDDS-1315:


 Summary: datanode process dies if it runs out of disk space
 Key: HDDS-1315
 URL: https://issues.apache.org/jira/browse/HDDS-1315
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Reporter: Sandeep Nemuri


As of now the datanode process dies if it runs out of disk space which makes 
the data present in that DN is inaccessible.

datanode logs: 

{code:java}
2019-03-11 04:01:27,141 ERROR org.apache.ratis.server.storage.RaftLogWorker: 
Terminating with exit status 1: 
fb635e52-e2eb-46b1-b109-a831c10d3bf8-RaftLogWorker failed.
java.io.FileNotFoundException: 
/opt/data/meta/ratis/68e315f3-312c-4c9f-a7bd-590194deb5e7/current/log_inprogress_8705582
 (No space left on device)
at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
at 
org.apache.ratis.server.storage.LogOutputStream.<init>(LogOutputStream.java:66)
at 
org.apache.ratis.server.storage.RaftLogWorker$StartLogSegment.execute(RaftLogWorker.java:436)
at 
org.apache.ratis.server.storage.RaftLogWorker.run(RaftLogWorker.java:219)
at java.lang.Thread.run(Thread.java:745)

{code}


{code:java}
2019-03-11 04:01:25,531 [grpc-default-executor-9192] INFO   - Operation: 
WriteChunk : Trace ID:  : Message: java.nio.file.FileSystemException: 
/opt/data/hdds/a83a7108-91c7-4357-9f68-46753641d429/current/containerDir0/88/chunks/ba29bb91559179cbf7ab5d86cac47ba1_stream_9fb1e802-dca6-46e0-be12-5ac743d8563d_chunk_1.tmp.11076.8705539:
 No space left on device : Result: IO_EXCEPTION
2019-03-11 04:01:25,543 [grpc-default-executor-9192] INFO   - Operation: 
WriteChunk : Trace ID:  : Message: java.nio.file.FileSystemException: 
/opt/data/hdds/a83a7108-91c7-4357-9f68-46753641d429/current/containerDir0/86/chunks/19ef3c1d36eadbc9538116c68c6e494f_stream_c58e8b91-dc18-4b61-918f-ab1eeda41c02_chunk_1.tmp.11076.8705540:
 No space left on device : Result: IO_EXCEPTION
2019-03-11 04:01:25,546 [grpc-default-executor-9192] INFO   - Operation: 
WriteChunk : Trace ID:  : Message: java.nio.file.FileSystemException: 
/opt/data/hdds/a83a7108-91c7-4357-9f68-46753641d429/current/containerDir0/87/chunks/83a6a81f2f703f49a7e0a1413eebfc4c_stream_cae1ed30-c613-4278-8404-c9e37d0b690f_chunk_1.tmp.11076.8705541:
 No space left on device : Result: IO_EXCEPTION

{code}







[jira] [Updated] (HDDS-1310) In datanode once a container becomes unhealthy, datanode restart fails.

2019-03-20 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-1310:
-
Attachment: (was: HDDS-1310.001.patch)

> In datanode once a container becomes unhealthy, datanode restart fails.
> ---
>
> Key: HDDS-1310
> URL: https://issues.apache.org/jira/browse/HDDS-1310
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Blocker
> Attachments: HDDS-1310.001.patch
>
>
> When a container is marked as {{UNHEALTHY}} in a datanode, subsequent restart 
> of that datanode fails as it cannot generate ContainerReports anymore. 
> Unhealthy state of a container is not handled in ContainerReport generation 
> inside a datanode.
> We get the below exception when a datanode tries to generate the 
> ContainerReport which contains unhealthy container(s)
> {noformat}
> 2019-03-19 13:51:13,646 [Datanode State Machine Thread - 0] ERROR  - 
> Unable to communicate to SCM server at x.x.xxx:9861 for past 3300 
> seconds.
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Invalid Container state found: 86
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getHddsState(KeyValueContainer.java:623)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getContainerReport(KeyValueContainer.java:593)
> at 
> org.apache.hadoop.ozone.container.common.impl.ContainerSet.getContainerReport(ContainerSet.java:204)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerController.getContainerReport(ContainerController.java:82)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:114)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:47)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1310) In datanode once a container becomes unhealthy, datanode restart fails.

2019-03-20 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797390#comment-16797390
 ] 

Sandeep Nemuri commented on HDDS-1310:
--

Attaching the patch with the necessary changes: [^HDDS-1310.001.patch]. 
Kindly review.

> In datanode once a container becomes unhealthy, datanode restart fails.
> ---
>
> Key: HDDS-1310
> URL: https://issues.apache.org/jira/browse/HDDS-1310
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Blocker
> Attachments: HDDS-1310.001.patch
>
>
> When a container is marked as {{UNHEALTHY}} in a datanode, any subsequent restart 
> of that datanode fails because it can no longer generate ContainerReports: the 
> unhealthy state of a container is not handled in ContainerReport generation 
> inside a datanode.
> We get the below exception when a datanode tries to generate a 
> ContainerReport that contains unhealthy container(s):
> {noformat}
> 2019-03-19 13:51:13,646 [Datanode State Machine Thread - 0] ERROR  - 
> Unable to communicate to SCM server at x.x.xxx:9861 for past 3300 
> seconds.
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Invalid Container state found: 86
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getHddsState(KeyValueContainer.java:623)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getContainerReport(KeyValueContainer.java:593)
> at 
> org.apache.hadoop.ozone.container.common.impl.ContainerSet.getContainerReport(ContainerSet.java:204)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerController.getContainerReport(ContainerController.java:82)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:114)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:47)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1310) In datanode once a container becomes unhealthy, datanode restart fails.

2019-03-20 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-1310:
-
Status: Patch Available  (was: Open)

> In datanode once a container becomes unhealthy, datanode restart fails.
> ---
>
> Key: HDDS-1310
> URL: https://issues.apache.org/jira/browse/HDDS-1310
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Blocker
> Attachments: HDDS-1310.001.patch
>
>
> When a container is marked as {{UNHEALTHY}} in a datanode, any subsequent restart 
> of that datanode fails because it can no longer generate ContainerReports: the 
> unhealthy state of a container is not handled in ContainerReport generation 
> inside a datanode.
> We get the below exception when a datanode tries to generate a 
> ContainerReport that contains unhealthy container(s):
> {noformat}
> 2019-03-19 13:51:13,646 [Datanode State Machine Thread - 0] ERROR  - 
> Unable to communicate to SCM server at x.x.xxx:9861 for past 3300 
> seconds.
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Invalid Container state found: 86
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getHddsState(KeyValueContainer.java:623)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getContainerReport(KeyValueContainer.java:593)
> at 
> org.apache.hadoop.ozone.container.common.impl.ContainerSet.getContainerReport(ContainerSet.java:204)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerController.getContainerReport(ContainerController.java:82)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:114)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:47)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1310) In datanode once a container becomes unhealthy, datanode restart fails.

2019-03-20 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-1310:
-
Attachment: HDDS-1310.001.patch

> In datanode once a container becomes unhealthy, datanode restart fails.
> ---
>
> Key: HDDS-1310
> URL: https://issues.apache.org/jira/browse/HDDS-1310
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Blocker
> Attachments: HDDS-1310.001.patch
>
>
> When a container is marked as {{UNHEALTHY}} in a datanode, any subsequent restart 
> of that datanode fails because it can no longer generate ContainerReports: the 
> unhealthy state of a container is not handled in ContainerReport generation 
> inside a datanode.
> We get the below exception when a datanode tries to generate a 
> ContainerReport that contains unhealthy container(s):
> {noformat}
> 2019-03-19 13:51:13,646 [Datanode State Machine Thread - 0] ERROR  - 
> Unable to communicate to SCM server at x.x.xxx:9861 for past 3300 
> seconds.
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Invalid Container state found: 86
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getHddsState(KeyValueContainer.java:623)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getContainerReport(KeyValueContainer.java:593)
> at 
> org.apache.hadoop.ozone.container.common.impl.ContainerSet.getContainerReport(ContainerSet.java:204)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerController.getContainerReport(ContainerController.java:82)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:114)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:47)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1310) In datanode once a container becomes unhealthy, datanode restart fails.

2019-03-19 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri reassigned HDDS-1310:


Assignee: Sandeep Nemuri

> In datanode once a container becomes unhealthy, datanode restart fails.
> ---
>
> Key: HDDS-1310
> URL: https://issues.apache.org/jira/browse/HDDS-1310
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Major
>
> When a container is marked as {{UNHEALTHY}} in a datanode, any subsequent restart 
> of that datanode fails because it can no longer generate ContainerReports: the 
> unhealthy state of a container is not handled in ContainerReport generation 
> inside a datanode.
> We get the below exception when a datanode tries to generate a 
> ContainerReport that contains unhealthy container(s):
> {noformat}
> 2019-03-19 13:51:13,646 [Datanode State Machine Thread - 0] ERROR  - 
> Unable to communicate to SCM server at x.x.xxx:9861 for past 3300 
> seconds.
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Invalid Container state found: 86
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getHddsState(KeyValueContainer.java:623)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getContainerReport(KeyValueContainer.java:593)
> at 
> org.apache.hadoop.ozone.container.common.impl.ContainerSet.getContainerReport(ContainerSet.java:204)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerController.getContainerReport(ContainerController.java:82)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:114)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:47)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1310) In datanode once a container becomes unhealthy, datanode restart fails.

2019-03-19 Thread Sandeep Nemuri (JIRA)
Sandeep Nemuri created HDDS-1310:


 Summary: In datanode once a container becomes unhealthy, datanode 
restart fails.
 Key: HDDS-1310
 URL: https://issues.apache.org/jira/browse/HDDS-1310
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Affects Versions: 0.3.0
Reporter: Sandeep Nemuri


When a container is marked as {{UNHEALTHY}} in a datanode, any subsequent restart 
of that datanode fails because it can no longer generate ContainerReports: the 
unhealthy state of a container is not handled in ContainerReport generation 
inside a datanode.

We get the below exception when a datanode tries to generate a 
ContainerReport that contains unhealthy container(s):
{noformat}
2019-03-19 13:51:13,646 [Datanode State Machine Thread - 0] ERROR  - Unable 
to communicate to SCM server at x.x.xxx:9861 for past 3300 seconds.
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: 
Invalid Container state found: 86
at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getHddsState(KeyValueContainer.java:623)
at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getContainerReport(KeyValueContainer.java:593)
at 
org.apache.hadoop.ozone.container.common.impl.ContainerSet.getContainerReport(ContainerSet.java:204)
at 
org.apache.hadoop.ozone.container.ozoneimpl.ContainerController.getContainerReport(ContainerController.java:82)
at 
org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:114)
at 
org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:47)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

{noformat}
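
A hedged aside for readers following the trace: the fix needs a mapping from 
every datanode lifecycle state, including UNHEALTHY, to a report state. Below is 
a minimal self-contained sketch, with toy enums standing in for the real HDDS 
container-state and report-state types (whose exact names are not shown in this 
thread):

{code:java}
public class ContainerStateMapping {
  // Toy stand-ins for the real datanode lifecycle and report states.
  enum LifeCycleState { OPEN, CLOSING, CLOSED, UNHEALTHY }
  enum ReportState { OPEN, CLOSING, CLOSED, UNHEALTHY }

  static ReportState toReportState(LifeCycleState s) {
    switch (s) {
    case OPEN:      return ReportState.OPEN;
    case CLOSING:   return ReportState.CLOSING;
    case CLOSED:    return ReportState.CLOSED;
    case UNHEALTHY: return ReportState.UNHEALTHY; // the missing branch
    default:
      // Without the UNHEALTHY branch, this is the path the trace shows.
      throw new IllegalStateException("Invalid Container state found: " + s);
    }
  }

  public static void main(String[] args) {
    for (LifeCycleState s : LifeCycleState.values()) {
      System.out.println(s + " -> " + toReportState(s));
    }
  }
}
{code}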



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1305) Robot test containers: hadoop client can't access o3fs

2019-03-19 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-1305:
-
Summary: Robot test containers: hadoop client can't access o3fs  (was: 
Robot test containers hadoop client can't access o3fs)

> Robot test containers: hadoop client can't access o3fs
> --
>
> Key: HDDS-1305
> URL: https://issues.apache.org/jira/browse/HDDS-1305
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Sandeep Nemuri
>Priority: Major
> Attachments: run.log
>
>
> Run the robot test using:
> {code:java}
> ./test.sh --keep --env ozonefs
> {code}
> Log in to the OM container and check whether the expected volume/bucket/key got 
> created by the robot tests.
> {code:java}
> [root@o3new ~]$ docker exec -it ozonefs_om_1 /bin/bash
> bash-4.2$ ozone fs -ls o3fs://bucket1.fstest/
> Found 3 items
> -rw-rw-rw-   1 hadoop hadoop  22990 2019-03-15 17:28 
> o3fs://bucket1.fstest/KEY.txt
> drwxrwxrwx   - hadoop hadoop  0 1970-01-01 00:00 
> o3fs://bucket1.fstest/testdir
> drwxrwxrwx   - hadoop hadoop  0 2019-03-15 17:27 
> o3fs://bucket1.fstest/testdir1
> {code}
> {code:java}
> [root@o3new ~]$ docker exec -it ozonefs_hadoop3_1 /bin/bash
> bash-4.4$ hadoop classpath
> /opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/*:/opt/hadoop/share/hadoop/common/*:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/*:/opt/hadoop/share/hadoop/hdfs/*:/opt/hadoop/share/hadoop/mapreduce/*:/opt/hadoop/share/hadoop/yarn:/opt/hadoop/share/hadoop/yarn/lib/*:/opt/hadoop/share/hadoop/yarn/*:/opt/ozone/share/ozone/lib/hadoop-ozone-filesystem-lib-current-0.5.0-SNAPSHOT.jar
> bash-4.4$ hadoop fs -ls o3fs://bucket1.fstest/
> 2019-03-18 19:12:42 INFO  Configuration:3204 - Removed undeclared tags:
> 2019-03-18 19:12:42 ERROR OzoneClientFactory:294 - Couldn't create protocol 
> class org.apache.hadoop.ozone.client.rpc.RpcClient exception:
> java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
>   at 
> org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl.<init>(OzoneClientAdapterImpl.java:127)
>   at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:189)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:249)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:232)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:104)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
> Caused by: java.lang.VerifyError: Cannot inherit from final class
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> {code}

[jira] [Commented] (HDDS-875) Use apache hadoop docker image for the ozonefs cluster definition

2019-03-18 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795332#comment-16795332
 ] 

Sandeep Nemuri commented on HDDS-875:
-

Looks like the ozonefs+hdfs integration is broken with the current setup; created 
HDDS-1305.
Once it is fixed, I will update the image and add robot tests. 

> Use apache hadoop docker image for the ozonefs cluster definition
> -
>
> Key: HDDS-875
> URL: https://issues.apache.org/jira/browse/HDDS-875
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>
> In HDDS-223 we switched from the external flokkr/hadoop image to use the 
> apache/hadoop images for the acceptance test of ozone.
> As [~msingh] pointed out to me, the compose/ozonefs folder still uses the 
> flokkr/hadoop image.
> It should be easy to switch to the latest apache hadoop image.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1305) Robot test containers hadoop client can't access o3fs

2019-03-18 Thread Sandeep Nemuri (JIRA)
Sandeep Nemuri created HDDS-1305:


 Summary: Robot test containers hadoop client can't access o3fs
 Key: HDDS-1305
 URL: https://issues.apache.org/jira/browse/HDDS-1305
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Affects Versions: 0.5.0
Reporter: Sandeep Nemuri
 Attachments: run.log

Run the robot test using:
{code:java}
./test.sh --keep --env ozonefs
{code}

Log in to the OM container and check whether the expected volume/bucket/key got 
created by the robot tests.
{code:java}
[root@o3new ~]$ docker exec -it ozonefs_om_1 /bin/bash
bash-4.2$ ozone fs -ls o3fs://bucket1.fstest/
Found 3 items
-rw-rw-rw-   1 hadoop hadoop  22990 2019-03-15 17:28 
o3fs://bucket1.fstest/KEY.txt
drwxrwxrwx   - hadoop hadoop  0 1970-01-01 00:00 
o3fs://bucket1.fstest/testdir
drwxrwxrwx   - hadoop hadoop  0 2019-03-15 17:27 
o3fs://bucket1.fstest/testdir1
{code}
{code:java}
[root@o3new ~]$ docker exec -it ozonefs_hadoop3_1 /bin/bash
bash-4.4$ hadoop classpath
/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/*:/opt/hadoop/share/hadoop/common/*:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/*:/opt/hadoop/share/hadoop/hdfs/*:/opt/hadoop/share/hadoop/mapreduce/*:/opt/hadoop/share/hadoop/yarn:/opt/hadoop/share/hadoop/yarn/lib/*:/opt/hadoop/share/hadoop/yarn/*:/opt/ozone/share/ozone/lib/hadoop-ozone-filesystem-lib-current-0.5.0-SNAPSHOT.jar
bash-4.4$ hadoop fs -ls o3fs://bucket1.fstest/
2019-03-18 19:12:42 INFO  Configuration:3204 - Removed undeclared tags:
2019-03-18 19:12:42 ERROR OzoneClientFactory:294 - Couldn't create protocol 
class org.apache.hadoop.ozone.client.rpc.RpcClient exception:
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
at 
org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl.<init>(OzoneClientAdapterImpl.java:127)
at 
org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:189)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:249)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:232)
at 
org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:104)
at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
Caused by: java.lang.VerifyError: Cannot inherit from final class
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at 
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.<init>(OzoneManagerProtocolClientSideTranslatorPB.java:169)
at 
org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:142)
... 23 more
ls: Couldn't create protocol class org.apache.hadoop.ozone.client.rpc.RpcClient
2019-03-18 19:12:42 INFO  Configuration:3204 - Removed undeclared tags:
bash-4.4$
{code}
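
A hedged aside: {{java.lang.VerifyError: Cannot inherit from final class}} 
usually means two different builds of the same class are on the classpath at 
once (here, presumably one bundled inside the 
hadoop-ozone-filesystem-lib-current uber-jar and one shipped with the hadoop 
image). A small diagnostic, assuming nothing beyond what the log shows, is to 
list every location that provides a suspect class:

{code:java}
import java.net.URL;
import java.util.Collections;

public class FindDuplicateClass {
  public static void main(String[] args) throws Exception {
    // Pass the suspect class name; RpcClient is the default here only
    // because it appears in the error message above.
    String className = args.length > 0 ? args[0]
        : "org.apache.hadoop.ozone.client.rpc.RpcClient";
    String resource = className.replace('.', '/') + ".class";
    for (URL url : Collections.list(
        FindDuplicateClass.class.getClassLoader().getResources(resource))) {
      System.out.println(url); // more than one line means a conflict
    }
  }
}
{code}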

[jira] [Commented] (HDDS-216) hadoop-hdds unit tests should use randomized ports

2019-02-27 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779169#comment-16779169
 ] 

Sandeep Nemuri commented on HDDS-216:
-

[~ajayydv], moving the version to 0.5 for now; I will sync up with Mukul and try 
to finish this before the 0.4 release. Apologies for the delay.

> hadoop-hdds unit tests should use randomized ports
> --
>
> Key: HDDS-216
> URL: https://issues.apache.org/jira/browse/HDDS-216
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: alpha2, newbie, test
> Attachments: HDDS-216.001.patch, HDDS-216.002.patch
>
>
> MiniOzoneCluster should use randomized ports by default, so individual tests 
> don't have to do anything to avoid port conflicts at runtime. e.g. 
> TestStorageContainerManagerHttpServer fails if port 9876 is in use.
> {code}
> [INFO] Running 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.084 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] 
> testHttpPolicy[0](org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer)
>   Time elapsed: 0.401 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
> {code}
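
The usual fix, sketched below with plain JDK calls (the Ozone configuration key 
a test would set is only assumed, so it appears solely in a comment): bind to 
port 0 so the OS picks a currently free port, then hand that port to the server 
under test.

{code:java}
import java.io.IOException;
import java.net.ServerSocket;

public class FreePort {
  // Ask the OS for a currently free port by binding to port 0.
  static int freePort() throws IOException {
    try (ServerSocket socket = new ServerSocket(0)) {
      return socket.getLocalPort();
    }
  }

  public static void main(String[] args) throws IOException {
    // e.g. conf.set("ozone.scm.http-address", "0.0.0.0:" + freePort());
    // (key name illustrative, not a verified constant)
    System.out.println("picked free port: " + freePort());
  }
}
{code}

There is a small inherent race between releasing and re-binding the chosen port, 
which is why cluster test helpers typically retry on BindException.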



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-216) hadoop-hdds unit tests should use randomized ports

2019-02-27 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-216:

Target Version/s: 0.5.0  (was: 0.4.0, 0.5.0)

> hadoop-hdds unit tests should use randomized ports
> --
>
> Key: HDDS-216
> URL: https://issues.apache.org/jira/browse/HDDS-216
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: alpha2, newbie, test
> Attachments: HDDS-216.001.patch, HDDS-216.002.patch
>
>
> MiniOzoneCluster should use randomized ports by default, so individual tests 
> don't have to do anything to avoid port conflicts at runtime. e.g. 
> TestStorageContainerManagerHttpServer fails if port 9876 is in use.
> {code}
> [INFO] Running 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.084 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] 
> testHttpPolicy[0](org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer)
>   Time elapsed: 0.401 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-216) hadoop-hdds unit tests should use randomized ports

2019-02-27 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-216:

Target Version/s: 0.4.0, 0.5.0  (was: 0.4.0)

> hadoop-hdds unit tests should use randomized ports
> --
>
> Key: HDDS-216
> URL: https://issues.apache.org/jira/browse/HDDS-216
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: alpha2, newbie, test
> Attachments: HDDS-216.001.patch, HDDS-216.002.patch
>
>
> MiniOzoneCluster should use randomized ports by default, so individual tests 
> don't have to do anything to avoid port conflicts at runtime. e.g. 
> TestStorageContainerManagerHttpServer fails if port 9876 is in use.
> {code}
> [INFO] Running 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.084 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] 
> testHttpPolicy[0](org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer)
>   Time elapsed: 0.401 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1093) Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-12 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-1093:
-
Component/s: SCM

> Configuration tab in OM/SCM ui is not displaying the correct values
> ---
>
> Key: HDDS-1093
> URL: https://issues.apache.org/jira/browse/HDDS-1093
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM, SCM
>Reporter: Sandeep Nemuri
>Priority: Major
> Attachments: image-2019-02-12-19-47-18-332.png
>
>
> The Configuration tab in the OM/SCM UI is not displaying the correct/configured 
> values; it is displaying the default values instead.
> !image-2019-02-12-19-47-18-332.png!
> {code:java}
> [hdfs@freonnode10 hadoop]$ curl -s http://freonnode10:9874/conf | grep 
> ozone.om.handler.count.key
> <property><name>ozone.om.handler.count.key</name><value>40</value><final>false</final><source>ozone-site.xml</source></property>
> {code}
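
One way to narrow this down is Hadoop's {{Configuration#getPropertySources}}, 
which OM and SCM inherit via hadoop-common. A hedged sketch (it assumes 
{{ozone-site.xml}} is on the classpath of the JVM running it):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ConfSourceCheck {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.addResource("ozone-site.xml"); // assumed to be on the classpath
    String key = "ozone.om.handler.count.key";
    System.out.println(key + " = " + conf.get(key));
    String[] sources = conf.getPropertySources(key);
    System.out.println("sources: " + (sources == null
        ? "unset/default" : String.join(", ", sources)));
  }
}
{code}

If this prints ozone-site.xml as the source while the UI still shows defaults, 
the bug is in how the servlet builds its Configuration rather than in the site 
file.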



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1093) Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-12 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-1093:
-
Description: 
The Configuration tab in the OM/SCM UI is not displaying the correct/configured 
values; it is displaying the default values instead.

!image-2019-02-12-19-47-18-332.png!
{code:java}
[hdfs@freonnode10 hadoop]$ curl -s http://freonnode10:9874/conf | grep 
ozone.om.handler.count.key
<property><name>ozone.om.handler.count.key</name><value>40</value><final>false</final><source>ozone-site.xml</source></property>
{code}

  was:
The Configuration tab in the OM UI is not displaying the correct/configured 
values; it is displaying the default values instead.

!image-2019-02-12-19-47-18-332.png!


{code:java}
[hdfs@freonnode10 hadoop]$ curl -s http://freonnode10:9874/conf | grep 
ozone.om.handler.count.key
<property><name>ozone.om.handler.count.key</name><value>40</value><final>false</final><source>ozone-site.xml</source></property>
{code}



> Configuration tab in OM/SCM ui is not displaying the correct values
> ---
>
> Key: HDDS-1093
> URL: https://issues.apache.org/jira/browse/HDDS-1093
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM
>Reporter: Sandeep Nemuri
>Priority: Major
> Attachments: image-2019-02-12-19-47-18-332.png
>
>
> The Configuration tab in the OM/SCM UI is not displaying the correct/configured 
> values; it is displaying the default values instead.
> !image-2019-02-12-19-47-18-332.png!
> {code:java}
> [hdfs@freonnode10 hadoop]$ curl -s http://freonnode10:9874/conf | grep 
> ozone.om.handler.count.key
> <property><name>ozone.om.handler.count.key</name><value>40</value><final>false</final><source>ozone-site.xml</source></property>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1093) Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-12 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-1093:
-
Summary: Configuration tab in OM/SCM ui is not displaying the correct 
values  (was: Configuration tab in OM ui is not displaying the correct values)

> Configuration tab in OM/SCM ui is not displaying the correct values
> ---
>
> Key: HDDS-1093
> URL: https://issues.apache.org/jira/browse/HDDS-1093
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM
>Reporter: Sandeep Nemuri
>Priority: Major
> Attachments: image-2019-02-12-19-47-18-332.png
>
>
> The Configuration tab in the OM UI is not displaying the correct/configured 
> values; it is displaying the default values instead.
> !image-2019-02-12-19-47-18-332.png!
> {code:java}
> [hdfs@freonnode10 hadoop]$ curl -s http://freonnode10:9874/conf | grep 
> ozone.om.handler.count.key
> <property><name>ozone.om.handler.count.key</name><value>40</value><final>false</final><source>ozone-site.xml</source></property>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1093) Configuration tab in OM ui is not displaying the correct values

2019-02-12 Thread Sandeep Nemuri (JIRA)
Sandeep Nemuri created HDDS-1093:


 Summary: Configuration tab in OM ui is not displaying the correct 
values
 Key: HDDS-1093
 URL: https://issues.apache.org/jira/browse/HDDS-1093
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: OM
Reporter: Sandeep Nemuri
 Attachments: image-2019-02-12-19-47-18-332.png

The Configuration tab in the OM UI is not displaying the correct/configured 
values; it is displaying the default values instead.

!image-2019-02-12-19-47-18-332.png!


{code:java}
[hdfs@freonnode10 hadoop]$ curl -s http://freonnode10:9874/conf | grep 
ozone.om.handler.count.key
<property><name>ozone.om.handler.count.key</name><value>40</value><final>false</final><source>ozone-site.xml</source></property>
{code}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-631) Ozone classpath shell command is not working

2019-01-30 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755812#comment-16755812
 ] 

Sandeep Nemuri edited comment on HDDS-631 at 1/30/19 9:15 AM:
--

+1 for {{ozone classpath client}}. Without this we have to manually find the 
lib location and add it to the client (hadoop) classpath.

When hdfs tries to access o3fs and we only add 
{{hadoop-ozone-filesystem.jar}} to HADOOP_CLASSPATH, the client fails with
{noformat}
Caused by: java.lang.ClassNotFoundException: 
org.apache.ratis.thirdparty.com.google.protobuf.ByteString
{noformat}
Apart from {{hadoop-ozone-filesystem.jar}}, we had to add the ozone lib 
directory to access o3fs from hdfs.


was (Author: sandeep nemuri):
+1 for {{ozone classpath clients}}. Without this we have to manually find the 
lib location and add it to the client (hadoop) classpath.

When hdfs tries to access o3fs and we only add 
{{hadoop-ozone-filesystem.jar}} to HADOOP_CLASSPATH, the client fails with
{noformat}
Caused by: java.lang.ClassNotFoundException: 
org.apache.ratis.thirdparty.com.google.protobuf.ByteString
{noformat}
Apart from {{hadoop-ozone-filesystem.jar}}, we had to add the ozone lib 
directory to access o3fs from hdfs.

> Ozone classpath shell command is not working
> 
>
> Key: HDDS-631
> URL: https://issues.apache.org/jira/browse/HDDS-631
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Priority: Blocker
> Attachments: HDDS-631.00.patch
>
>
> In the ozone package (tar) the ozone and its dependency jars are copied to an 
> incorrect location. We used to have the jars in {{share/hadoop/}} for 
> each module; those directories are empty now. All the jars are placed in 
> the {{share/ozone/lib}} directory.
> With this structure, when we run the {{ozone classpath}} command, we get 
> incorrect output.
> {code}
> $ bin/ozone classpath
> /Users/nvadivelu/apache/ozone-0.4.0-SNAPSHOT/etc/hadoop:/Users/nvadivelu/apache/ozone-0.4.0-SNAPSHOT/share/hadoop/common/*
> {code}
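
To illustrate the comment above: until {{ozone classpath}} emits something 
usable, a client has to assemble the equivalent of the lib directory itself. A 
hedged helper sketch (the {{share/ozone/lib}} path comes from this ticket; 
adjust to the actual install layout):

{code:java}
import java.io.File;

public class OzoneClientClasspath {
  public static void main(String[] args) {
    File libDir = new File(args.length > 0 ? args[0] : "share/ozone/lib");
    File[] jars = libDir.listFiles((dir, name) -> name.endsWith(".jar"));
    if (jars == null) {
      System.err.println("no such directory: " + libDir);
      return;
    }
    StringBuilder cp = new StringBuilder();
    for (File jar : jars) {
      if (cp.length() > 0) {
        cp.append(File.pathSeparatorChar);
      }
      cp.append(jar.getAbsolutePath());
    }
    // Append the result to HADOOP_CLASSPATH before running `hadoop fs`.
    System.out.println(cp);
  }
}
{code}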



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-631) Ozone classpath shell command is not working

2019-01-30 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755812#comment-16755812
 ] 

Sandeep Nemuri commented on HDDS-631:
-

+1 for {{ozone classpath clients}}. Without this we have to manually find the 
lib location and add it to the client (hadoop) classpath.

When hdfs tries to access o3fs and we only add 
{{hadoop-ozone-filesystem.jar}} to HADOOP_CLASSPATH, the client fails with
{noformat}
Caused by: java.lang.ClassNotFoundException: 
org.apache.ratis.thirdparty.com.google.protobuf.ByteString
{noformat}
Apart from {{hadoop-ozone-filesystem.jar}}, we had to add the ozone lib 
directory to access o3fs from hdfs.

> Ozone classpath shell command is not working
> 
>
> Key: HDDS-631
> URL: https://issues.apache.org/jira/browse/HDDS-631
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Priority: Blocker
> Attachments: HDDS-631.00.patch
>
>
> In the ozone package (tar) the ozone and its dependency jars are copied to an 
> incorrect location. We used to have the jars in {{share/hadoop/}} for 
> each module; those directories are empty now. All the jars are placed in 
> the {{share/ozone/lib}} directory.
> With this structure, when we run the {{ozone classpath}} command, we get 
> incorrect output.
> {code}
> $ bin/ozone classpath
> /Users/nvadivelu/apache/ozone-0.4.0-SNAPSHOT/etc/hadoop:/Users/nvadivelu/apache/ozone-0.4.0-SNAPSHOT/share/hadoop/common/*
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-875) Use apache hadoop docker image for the ozonefs cluster definition

2018-11-27 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16700578#comment-16700578
 ] 

Sandeep Nemuri commented on HDDS-875:
-

Thanks [~elek], I would be interested in working on this.

Below is the instance where we still use flokkr:
{code:java}
hadooplast:
  image: flokkr/hadoop:3.1.0
  volumes:
    - ../..:/opt/ozone
  env_file:
    - ./docker-config
  environment:
    HADOOP_CLASSPATH: /opt/ozone/share/hadoop/ozonefs/*.jar
  command: ["watch","-n","10","ls"]
{code}
Looking at the ozonefs.robot file, I don't think we are using this `hadooplast` 
container anywhere in the tests. Do let me know if we need it (and if so, 
whether it should be a hadoop image), or whether it is OK to remove the 
container definition from the docker-compose file.

 

> Use apache hadoop docker image for the ozonefs cluster definition
> -
>
> Key: HDDS-875
> URL: https://issues.apache.org/jira/browse/HDDS-875
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>
> In HDDS-223 we switched from the external flokkr/hadoop image to use the 
> apache/hadoop images for the acceptance test of ozone.
> As [~msingh] pointed out to me, the compose/ozonefs folder still uses the 
> flokkr/hadoop image.
> It should be easy to switch to the latest apache hadoop image.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-875) Use apache hadoop docker image for the ozonefs cluster definition

2018-11-27 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri reassigned HDDS-875:
---

Assignee: Sandeep Nemuri

> Use apache hadoop docker image for the ozonefs cluster definition
> -
>
> Key: HDDS-875
> URL: https://issues.apache.org/jira/browse/HDDS-875
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>
> In HDDS-223 we switched from the external flokkr/hadoop image to use the 
> apache/hadoop images for the acceptance test of ozone.
> As [~msingh] pointed out to me, the compose/ozonefs folder still uses the 
> flokkr/hadoop image.
> It should be easy to switch to the latest apache hadoop image.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-223) Create acceptance test for using datanode plugin

2018-11-16 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16689265#comment-16689265
 ] 

Sandeep Nemuri commented on HDDS-223:
-

Thanks for the review and commit [~elek]. 

> Create acceptance test for using datanode plugin
> 
>
> Key: HDDS-223
> URL: https://issues.apache.org/jira/browse/HDDS-223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: alpha2, newbie
> Fix For: 0.4.0
>
> Attachments: HDDS-223.001.patch, HDDS-223.002.patch
>
>
> In the current docker-compose files (both in the hadoop-dist and 
> acceptance-test) we use simplified ozone clusters: there is no namenode and 
> we use standalone hdds datanode processes.
> To test ozone/hdds as a datanode plugin we need to create separate 
> acceptance tests which use hadoop:3.1 and hadoop:3.0 plus the ozone hdds 
> datanode plugin artifact.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-223) Create acceptance test for using datanode plugin

2018-11-15 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16688059#comment-16688059
 ] 

Sandeep Nemuri commented on HDDS-223:
-

Thanks for reviewing the patch [~elek],


We do run the basic, ozonefs, and s3 tests when executing the command below:
{code:java}
./test.sh --env ozone-hdfs
{code}
And only the s3 tests with the command below:
{code:java}
./test.sh --env ozone-hdfs s3
{code}
However, since we already run the s3 tests as part of ozone/hdds, it seems 
redundant to run them again in the HDFS+HDDS cluster. Let me know if you want 
me to remove the s3 definition.

> Create acceptance test for using datanode plugin
> 
>
> Key: HDDS-223
> URL: https://issues.apache.org/jira/browse/HDDS-223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: alpha2, newbie
> Attachments: HDDS-223.001.patch, HDDS-223.002.patch
>
>
> In the current docker-compose files (both in the hadoop-dist and 
> acceptance-test) we use simplified ozone clusters: there is no namenode and 
> we use standalone hdds datanode processes.
> To test ozone/hdds as a datanode plugin we need to create separate 
> acceptance tests which use hadoop:3.1 and hadoop:3.0 plus the ozone hdds 
> datanode plugin artifact.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-827) TestStorageContainerManagerHttpServer should use dynamic port

2018-11-13 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-827:

Attachment: HDDS-827.001.patch
Status: Patch Available  (was: Open)

[~nandakumar131], attaching the patch with the changes; please review. 

> TestStorageContainerManagerHttpServer should use dynamic port
> -
>
> Key: HDDS-827
> URL: https://issues.apache.org/jira/browse/HDDS-827
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-827.001.patch
>
>
> Most of the time {{TestStorageContainerManagerHttpServer}} is failing with 
> {code}
> java.net.BindException: Port in use: 0.0.0.0:9876
> ...
> Caused by: java.net.BindException: Address already in use
> {code}
> TestStorageContainerManagerHttpServer should use a port which is free 
> (dynamic), instead of trying to bind to the default port 9876.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-827) TestStorageContainerManagerHttpServer should use dynamic port

2018-11-13 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri reassigned HDDS-827:
---

Assignee: Sandeep Nemuri

> TestStorageContainerManagerHttpServer should use dynamic port
> -
>
> Key: HDDS-827
> URL: https://issues.apache.org/jira/browse/HDDS-827
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
>
> Most of the time {{TestStorageContainerManagerHttpServer}} is failing with 
> {code}
> java.net.BindException: Port in use: 0.0.0.0:9876
> ...
> Caused by: java.net.BindException: Address already in use
> {code}
> TestStorageContainerManagerHttpServer should use a port which is free 
> (dynamic), instead of trying to bind to the default port 9876.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-223) Create acceptance test for using datanode plugin

2018-11-05 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-223:

Attachment: HDDS-223.002.patch

> Create acceptance test for using datanode plugin
> 
>
> Key: HDDS-223
> URL: https://issues.apache.org/jira/browse/HDDS-223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: alpha2, newbie
> Attachments: HDDS-223.001.patch, HDDS-223.002.patch
>
>
> In the current docker-compose files (both in the hadoop-dist and 
> acceptance-test) we use simplified ozone clusters: there is no namenode and 
> we use standalone hdds datanode processes.
> To test ozone/hdds as a datanode plugin we need to create separate 
> acceptance tests which use hadoop:3.1 and hadoop:3.0 plus the ozone hdds 
> datanode plugin artifact.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-223) Create acceptance test for using datanode plugin

2018-11-05 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676312#comment-16676312
 ] 

Sandeep Nemuri commented on HDDS-223:
-

Attaching the v2 patch, adding s3g to run the s3-related tests with the 
datanode plugin: [^HDDS-223.002.patch]

> Create acceptance test for using datanode plugin
> 
>
> Key: HDDS-223
> URL: https://issues.apache.org/jira/browse/HDDS-223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: alpha2, newbie
> Attachments: HDDS-223.001.patch, HDDS-223.002.patch
>
>
> In the current docker-compose files (both in the hadoop-dist and 
> acceptance-test) we use simplified ozone clusters: there is no namenode and 
> we use standalone hdds datanode processes.
> To test ozone/hdds as a datanode plugin we need to create separate 
> acceptance tests which use hadoop:3.1 and hadoop:3.0 plus the ozone hdds 
> datanode plugin artifact.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-223) Create acceptance test for using datanode plugin

2018-11-05 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-223:

Attachment: HDDS-223.001.patch
Status: Patch Available  (was: Open)

[~elek] Thanks for guiding me through addressing this jira.

Attaching the v1 patch; kindly review. 

> Create acceptance test for using datanode plugin
> 
>
> Key: HDDS-223
> URL: https://issues.apache.org/jira/browse/HDDS-223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: alpha2, newbie
> Attachments: HDDS-223.001.patch
>
>
> In the current docker-compose files (both in the hadoop-dist and 
> acceptance-test) we use simplified ozone clusters: there is no namenode and 
> we use standalone hdds datanode processes.
> To test ozone/hdds as a datanode plugin we need to create separate 
> acceptance tests which use hadoop:3.1 and hadoop:3.0 plus the ozone hdds 
> datanode plugin artifact.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-165) Add unit test for OzoneHddsDatanodeService

2018-10-09 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643816#comment-16643816
 ] 

Sandeep Nemuri commented on HDDS-165:
-

Hi [~arpitagarwal], yes, I will post a patch by the end of this week. 

> Add unit test for OzoneHddsDatanodeService
> --
>
> Key: HDDS-165
> URL: https://issues.apache.org/jira/browse/HDDS-165
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie, test
> Fix For: 0.3.0
>
>
> We have to add a unit test for the {{OzoneHddsDatanodeService}} class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-599) Fix TestOzoneConfiguration TestOzoneConfigurationFields

2018-10-09 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-599:

Attachment: HDDS-599.001.patch
Status: Patch Available  (was: Open)

Attaching the patch with the required changes. Please review. 

> Fix TestOzoneConfiguration TestOzoneConfigurationFields
> ---
>
> Key: HDDS-599
> URL: https://issues.apache.org/jira/browse/HDDS-599
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-599.001.patch
>
>
> java.lang.AssertionError: ozone-default.xml has 2 properties missing in class 
> org.apache.hadoop.ozone.OzoneConfigKeys class 
> org.apache.hadoop.hdds.scm.ScmConfigKeys class 
> org.apache.hadoop.ozone.om.OMConfigKeys class 
> org.apache.hadoop.hdds.HddsConfigKeys class 
> org.apache.hadoop.ozone.s3.S3GatewayConfigKeys Entries: 
> hdds.lock.suppress.warning.interval.ms hdds.write.lock.reporting.threshold.ms 
> expected:<0> but was:<2>
>  
> hdds.lock.suppress.warning.interval.ms and 
> hdds.write.lock.reporting.threshold.ms should be removed from 
> ozone-default.xml 
> This is caused by HDDS-354, which missed removing these properties from 
> ozone-default.xml
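
For context, the failing check cross-references the property names in 
{{ozone-default.xml}} against the String constants of the ConfigKeys classes. 
Below is a self-contained hedged sketch of that mechanism (the real test 
derives from Hadoop's {{TestConfigurationFieldsBase}}; the sample keys class is 
invented for illustration):

{code:java}
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.HashSet;
import java.util.Set;

public class ConfigKeysCrossCheck {
  static class SampleConfigKeys { // invented stand-in for e.g. HddsConfigKeys
    public static final String HDDS_SAMPLE_KEY = "hdds.sample.key";
  }

  // Collect the public static final String constants of a keys class.
  static Set<String> constantsOf(Class<?> clazz) throws IllegalAccessException {
    Set<String> keys = new HashSet<>();
    for (Field f : clazz.getFields()) {
      int m = f.getModifiers();
      if (Modifier.isStatic(m) && Modifier.isFinal(m)
          && f.getType() == String.class) {
        keys.add((String) f.get(null));
      }
    }
    return keys;
  }

  public static void main(String[] args) throws Exception {
    Set<String> declared = constantsOf(SampleConfigKeys.class);
    Set<String> inXml = new HashSet<>(Set.of(
        "hdds.sample.key", "hdds.lock.suppress.warning.interval.ms"));
    inXml.removeAll(declared);
    // Anything left over fails the test as "properties missing in class",
    // exactly like the two hdds.lock.* entries in this ticket.
    System.out.println("missing in ConfigKeys classes: " + inXml);
  }
}
{code}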



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-599) Fix TestOzoneConfiguration TestOzoneConfigurationFields

2018-10-09 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-599:

Description: 
java.lang.AssertionError: ozone-default.xml has 2 properties missing in class 
org.apache.hadoop.ozone.OzoneConfigKeys class 
org.apache.hadoop.hdds.scm.ScmConfigKeys class 
org.apache.hadoop.ozone.om.OMConfigKeys class 
org.apache.hadoop.hdds.HddsConfigKeys class 
org.apache.hadoop.ozone.s3.S3GatewayConfigKeys Entries: 
hdds.lock.suppress.warning.interval.ms hdds.write.lock.reporting.threshold.ms 
expected:<0> but was:<2>

 

hdds.lock.suppress.warning.interval.ms and 
hdds.write.lock.reporting.threshold.ms should be removed from ozone-default.xml 

This is caused by HDDS-354, which missed removing these properties from 
ozone-default.xml

  was:
java.lang.AssertionError: ozone-default.xml has 2 properties missing in class 
org.apache.hadoop.ozone.OzoneConfigKeys class 
org.apache.hadoop.hdds.scm.ScmConfigKeys class 
org.apache.hadoop.ozone.om.OMConfigKeys class 
org.apache.hadoop.hdds.HddsConfigKeys class 
org.apache.hadoop.ozone.s3.S3GatewayConfigKeys Entries: 
hdds.lock.suppress.warning.interval.ms hdds.write.lock.reporting.threshold.ms 
expected:<0> but was:<2>

 

hdds.lock.suppress.warning.interval.ms and 
hdds.write.lock.reporting.threshold.ms should be removed from ozone-default.xml 

This is caused by HDDS-354, which missed removing these properties from 
ozone-default.xml

 

 

 

 

 


> Fix TestOzoneConfiguration TestOzoneConfigurationFields
> ---
>
> Key: HDDS-599
> URL: https://issues.apache.org/jira/browse/HDDS-599
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
>
> java.lang.AssertionError: ozone-default.xml has 2 properties missing in class 
> org.apache.hadoop.ozone.OzoneConfigKeys class 
> org.apache.hadoop.hdds.scm.ScmConfigKeys class 
> org.apache.hadoop.ozone.om.OMConfigKeys class 
> org.apache.hadoop.hdds.HddsConfigKeys class 
> org.apache.hadoop.ozone.s3.S3GatewayConfigKeys Entries: 
> hdds.lock.suppress.warning.interval.ms hdds.write.lock.reporting.threshold.ms 
> expected:<0> but was:<2>
>  
> hdds.lock.suppress.warning.interval.ms and 
> hdds.write.lock.reporting.threshold.ms should be removed from 
> ozone-default.xml 
> This is caused by HDDS-354, which missed removing these properties from 
> ozone-default.xml



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-599) Fix TestOzoneConfiguration TestOzoneConfigurationFields

2018-10-09 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri reassigned HDDS-599:
---

Assignee: Sandeep Nemuri

> Fix TestOzoneConfiguration TestOzoneConfigurationFields
> ---
>
> Key: HDDS-599
> URL: https://issues.apache.org/jira/browse/HDDS-599
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
>
> java.lang.AssertionError: ozone-default.xml has 2 properties missing in class 
> org.apache.hadoop.ozone.OzoneConfigKeys class 
> org.apache.hadoop.hdds.scm.ScmConfigKeys class 
> org.apache.hadoop.ozone.om.OMConfigKeys class 
> org.apache.hadoop.hdds.HddsConfigKeys class 
> org.apache.hadoop.ozone.s3.S3GatewayConfigKeys Entries: 
> hdds.lock.suppress.warning.interval.ms hdds.write.lock.reporting.threshold.ms 
> expected:<0> but was:<2>
>  
> hdds.lock.suppress.warning.interval.ms and 
> hdds.write.lock.reporting.threshold.ms should be removed from 
> ozone-default.xml 
> This is caused by HDDS-354, which missed removing these properties from 
> ozone-default.xml.
>  
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-586) Incorrect url's in Ozone website

2018-10-08 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642307#comment-16642307
 ] 

Sandeep Nemuri commented on HDDS-586:
-

Attaching the patch with necessary changes. Kindly review. 

> Incorrect url's in Ozone website
> 
>
> Key: HDDS-586
> URL: https://issues.apache.org/jira/browse/HDDS-586
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: website
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Major
> Attachments: HDDS-586.001.patch, image-2018-10-08-22-21-14-025.png
>
>
> The below section in the website 
>  !image-2018-10-08-22-21-14-025.png! 
> is pointing to incorrect URLs: 
> https://hadoop.apache.org/downloads/ and 
> https://hadoop.apache.org/docs/latest/runningviadocker.html 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-586) Incorrect url's in Ozone website

2018-10-08 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-586:

Status: Patch Available  (was: Open)

> Incorrect url's in Ozone website
> 
>
> Key: HDDS-586
> URL: https://issues.apache.org/jira/browse/HDDS-586
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: website
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Major
> Attachments: HDDS-586.001.patch, image-2018-10-08-22-21-14-025.png
>
>
> The below section in the website 
>  !image-2018-10-08-22-21-14-025.png! 
> is pointing to incorrect URLs: 
> https://hadoop.apache.org/downloads/ and 
> https://hadoop.apache.org/docs/latest/runningviadocker.html 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-586) Incorrect url's in Ozone website

2018-10-08 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-586:

Attachment: HDDS-586.001.patch

> Incorrect url's in Ozone website
> 
>
> Key: HDDS-586
> URL: https://issues.apache.org/jira/browse/HDDS-586
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: website
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Major
> Attachments: HDDS-586.001.patch, image-2018-10-08-22-21-14-025.png
>
>
> The below section in the website 
>  !image-2018-10-08-22-21-14-025.png! 
> is pointing to incorrect URLs: 
> https://hadoop.apache.org/downloads/ and 
> https://hadoop.apache.org/docs/latest/runningviadocker.html 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-586) Incorrect url's in Ozone website

2018-10-08 Thread Sandeep Nemuri (JIRA)
Sandeep Nemuri created HDDS-586:
---

 Summary: Incorrect url's in Ozone website
 Key: HDDS-586
 URL: https://issues.apache.org/jira/browse/HDDS-586
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: website
Reporter: Sandeep Nemuri
Assignee: Sandeep Nemuri
 Attachments: image-2018-10-08-22-21-14-025.png

The below section in the website 

 !image-2018-10-08-22-21-14-025.png! 

is pointing to incorrect URLs: 
https://hadoop.apache.org/downloads/ and 
https://hadoop.apache.org/docs/latest/runningviadocker.html 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-443) Create reusable ProgressBar utility for freon tests

2018-10-04 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri reassigned HDDS-443:
---

Assignee: (was: Sandeep Nemuri)

> Create reusable ProgressBar utility for freon tests
> ---
>
> Key: HDDS-443
> URL: https://issues.apache.org/jira/browse/HDDS-443
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Elek, Marton
>Priority: Major
>  Labels: newbie
>
> Since HDDS-398 we can support multiple types of freon tests. But to add more 
> tests we need common utilities for generic tasks.
> One of the most important is to provide a reusable ProgressBar utility.
> Currently the ProgressBar class is part of the RandomKeyGenerator. It should be 
> moved out from the class and all the thread start/stop logic should be moved 
> to the ProgressBar.
> {{ProgressBar bar = new ProgressBar(System.out, () ->  ... , 200);}}
> {{bar.start(); // thread should be started here}}{{bar.stop(); // thread 
> should be stopped.}}
>  
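
A minimal sketch of the API proposed above. The Supplier-based progress source and the one-second refresh interval are assumptions; a real utility would also handle terminal width, completion rendering, and so on:

{code:java}
import java.io.PrintStream;
import java.util.function.Supplier;

public class ProgressBar {
  private final PrintStream out;
  private final Supplier<Long> currentValue;  // how far we are
  private final long maxValue;                // where we stop
  private volatile boolean running;
  private Thread progressThread;

  public ProgressBar(PrintStream out, Supplier<Long> currentValue,
      long maxValue) {
    this.out = out;
    this.currentValue = currentValue;
    this.maxValue = maxValue;
  }

  /** The thread is started here, as the example in the description suggests. */
  public synchronized void start() {
    running = true;
    progressThread = new Thread(() -> {
      while (running && currentValue.get() < maxValue) {
        out.printf("\rProgress: %d/%d", currentValue.get(), maxValue);
        try {
          Thread.sleep(1000);
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
          return;
        }
      }
      out.printf("\rProgress: %d/%d%n", currentValue.get(), maxValue);
    }, "ProgressBar");
    progressThread.start();
  }

  /** ...and stopped here. */
  public synchronized void stop() {
    running = false;
    if (progressThread != null) {
      progressThread.interrupt();
    }
  }
}
{code}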



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-443) Create reusable ProgressBar utility for freon tests

2018-09-30 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16633368#comment-16633368
 ] 

Sandeep Nemuri commented on HDDS-443:
-

[~horzsolt2006], Sure please attach the patch to jira. 

> Create reusable ProgressBar utility for freon tests
> ---
>
> Key: HDDS-443
> URL: https://issues.apache.org/jira/browse/HDDS-443
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
>
> Since HDDS-398 we can support multiple types of freon tests. But to add more 
> tests we need common utilities for generic tasks.
> One of the most important is to provide a reusable ProgressBar utility.
> Currently the ProgressBar class is part of the RandomKeyGenerator. It should be 
> moved out from the class and all the thread start/stop logic should be moved 
> to the ProgressBar.
> {{ProgressBar bar = new ProgressBar(System.out, () ->  ... , 200);}}
> {{bar.start(); // thread should be started here}}{{bar.stop(); // thread 
> should be stopped.}}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-443) Create reusable ProgressBar utility for freon tests

2018-09-28 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri reassigned HDDS-443:
---

Assignee: Sandeep Nemuri

> Create reusable ProgressBar utility for freon tests
> ---
>
> Key: HDDS-443
> URL: https://issues.apache.org/jira/browse/HDDS-443
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
>
> Since HDDS-398 we can support multiple types of freon tests. But to add more 
> tests we need common utilities for generic tasks.
> One of the most important is to provide a reusable ProgressBar utility.
> Currently the ProgressBar class is part of the RandomKeyGenerator. It should be 
> moved out from the class and all the thread start/stop logic should be moved 
> to the ProgressBar.
> {{ProgressBar bar = new ProgressBar(System.out, () ->  ... , 200);}}
> {{bar.start(); // thread should be started here}}{{bar.stop(); // thread 
> should be stopped.}}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-445) Create a logger to print out all of the incoming requests

2018-09-28 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-445:

Environment: (was: For the HTTP server of HDDS-444 we need an option 
to print out all the HttpRequests (header + body).

To create a 100% s3 compatible interface, we need to test it with multiple 
external tools (such as s3cli). While mitmproxy is always our best friend, to 
make it easier to identify the problems we need a method to log all the 
incoming requests with a logger which could be turned on.

Most probably we already have such a filter in hadoop/jetty; the only 
thing we need is to configure it.)

> Create a logger to print out all of the incoming requests
> -
>
> Key: HDDS-445
> URL: https://issues.apache.org/jira/browse/HDDS-445
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-445) Create a logger to print out all of the incoming requests

2018-09-28 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-445:

Description: 
For the HTTP server of HDDS-444 we need an option to print out all the 
HttpRequests (header + body).

To create a 100% s3 compatible interface, we need to test it with multiple 
external tools (such as s3cli). While mitmproxy is always our best friend, to 
make it easier to identify the problems we need a method to log all the 
incoming requests with a logger which could be turned on.

Most probably we already have such a filter in hadoop/jetty; the only thing 
we need is to configure it.
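
A minimal sketch of such a filter, assuming a plain javax.servlet Filter is acceptable; the real change may just configure an existing Hadoop/Jetty logging filter instead. Logging the body would additionally require wrapping the request, since the input stream can only be consumed once:

{code:java}
import java.io.IOException;
import java.util.Enumeration;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class RequestLoggingFilter implements Filter {
  private static final Logger LOG =
      LoggerFactory.getLogger(RequestLoggingFilter.class);

  @Override
  public void init(FilterConfig filterConfig) {
  }

  @Override
  public void doFilter(ServletRequest request, ServletResponse response,
      FilterChain chain) throws IOException, ServletException {
    // Only pay the cost when the logger is explicitly turned on.
    if (LOG.isTraceEnabled() && request instanceof HttpServletRequest) {
      HttpServletRequest http = (HttpServletRequest) request;
      LOG.trace("{} {}", http.getMethod(), http.getRequestURI());
      Enumeration<String> names = http.getHeaderNames();
      while (names.hasMoreElements()) {
        String name = names.nextElement();
        LOG.trace("  {}: {}", name, http.getHeader(name));
      }
    }
    chain.doFilter(request, response);
  }

  @Override
  public void destroy() {
  }
}
{code}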

> Create a logger to print out all of the incoming requests
> -
>
> Key: HDDS-445
> URL: https://issues.apache.org/jira/browse/HDDS-445
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Priority: Major
>
> For the Http servier of HDDS-444 we need an option to print out all the 
> HttpRequests (header + body).
> To create a 100% s3 compatible interface, we need to test it with multiple 
> external tools (such as s3cli). While mitmproxy is always our best friend, to 
> make it more easier to identify the problems we need a method to log all the 
> incoming requests with a logger which could be turned on.
> Most probably we already have such kind of filter in hadoop/jetty the only 
> thing what we need is to configure it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-223) Create acceptance test for using datanode plugin

2018-09-23 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16625140#comment-16625140
 ] 

Sandeep Nemuri commented on HDDS-223:
-

[~elek], This requirement seems to be addressed by HDDS-446. Do let me know if 
we can mark this Jira as a duplicate.

> Create acceptance test for using datanode plugin
> 
>
> Key: HDDS-223
> URL: https://issues.apache.org/jira/browse/HDDS-223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: alpha2, newbie
> Fix For: 0.3.0
>
>
> In the current docker-compose files (both in the hadoop-dist and 
> acceptance-test) we use simplified ozone clusters: there is no namenode and 
> we use standalone hdds datanode processes.
> To test ozone/hdds as a datanode plugin we need to create separate 
> acceptance tests which use hadoop:3.1 and hadoop:3.0 + the ozone hdds datanode 
> plugin artifact.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-221) Create acceptance test to test ./start-ozone.sh and ./stop-ozone.sh for ozone/hdds

2018-09-23 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16625119#comment-16625119
 ] 

Sandeep Nemuri commented on HDDS-221:
-

[~anu], even the new tests are starting the daemons individually.

Working on this Jira. Will update the patch soon.

> Create acceptance test to test ./start-ozone.sh and ./stop-ozone.sh for 
> ozone/hdds
> --
>
> Key: HDDS-221
> URL: https://issues.apache.org/jira/browse/HDDS-221
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Fix For: 0.3.0
>
>
> Usually we use the 'ozone' shell command to test our ozone/hdds cluster.
> We need to create different acceptance test compose files to test the 
> ./start-all.sh and ./hadoop-daemon.sh functionality.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-190) Improve shell error message for unrecognized option

2018-09-08 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607930#comment-16607930
 ] 

Sandeep Nemuri commented on HDDS-190:
-

Thanks [~elek] and [~dineshchitlangia] for jumping in and providing the fix :)

> Improve shell error message for unrecognized option
> ---
>
> Key: HDDS-190
> URL: https://issues.apache.org/jira/browse/HDDS-190
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-190.001.patch, HDDS-190.002.patch
>
>
> The error message with an unrecognized option is unfriendly. E.g.
> {code}
> $ ozone oz -badOption
> Unrecognized option: -badOptionERROR: null
> {code}
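
Illustrative only: one way to produce a friendlier message, sketched here with commons-cli (print the parser's own message on its own line, then the usage text, instead of "ERROR: null"). The committed fix may take a different route entirely:

{code:java}
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.HelpFormatter;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;

public final class Shell {
  public static void main(String[] args) {
    Options options = new Options();  // the real options are registered elsewhere
    try {
      CommandLine cmd = new DefaultParser().parse(options, args);
      // ... dispatch to the requested subcommand ...
    } catch (ParseException e) {
      // e.g. "Unrecognized option: -badOption" on its own line, then usage.
      System.err.println(e.getMessage());
      new HelpFormatter().printHelp("ozone oz [OPTIONS]", options);
      System.exit(1);
    }
  }
}
{code}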



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-190) Improve shell error message for unrecognized option

2018-09-07 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606840#comment-16606840
 ] 

Sandeep Nemuri commented on HDDS-190:
-

[~jnp], Working on this... Will provide a patch shortly.

> Improve shell error message for unrecognized option
> ---
>
> Key: HDDS-190
> URL: https://issues.apache.org/jira/browse/HDDS-190
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Sandeep Nemuri
>Priority: Blocker
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-190.001.patch
>
>
> The error message with an unrecognized option is unfriendly. E.g.
> {code}
> $ ozone oz -badOption
> Unrecognized option: -badOptionERROR: null
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-190) Improve shell error message for unrecognized option

2018-08-27 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16593594#comment-16593594
 ] 

Sandeep Nemuri commented on HDDS-190:
-

Hi [~elek], Haven't started working on this. Feel free to grab.

> Improve shell error message for unrecognized option
> ---
>
> Key: HDDS-190
> URL: https://issues.apache.org/jira/browse/HDDS-190
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
>
> The error message with an unrecognized option is unfriendly. E.g.
> {code}
> $ ozone oz -badOption
> Unrecognized option: -badOptionERROR: null
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-243) Update docs to reflect the new name for Ozone Manager

2018-08-10 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16576054#comment-16576054
 ] 

Sandeep Nemuri commented on HDDS-243:
-

[~dpapp], Thanks for reporting. However, this Jira is to track the Apache docs. 

[~elek] will be updating the tutorial accordingly. 

> Update docs to reflect the new name for Ozone Manager
> -
>
> Key: HDDS-243
> URL: https://issues.apache.org/jira/browse/HDDS-243
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
>
> KSM has been renamed to Ozone Manager. The docs still refer to KSM. This JIRA 
> is to track that docs issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-219) Genearate version-info.properties for hadoop and ozone

2018-08-08 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16573207#comment-16573207
 ] 

Sandeep Nemuri commented on HDDS-219:
-

[^HDDS-219.002.patch] includes:
 * Ozone logo in the version output.
 * Checksum for all the sources in the hdds and ozone projects.
 * Release name

Output is as below:
{code:java}
~/g/hadoop> ./hadoop-dist/target/ozone/bin/ozone version
  [Ozone ASCII-art logo]
  0.2.1-SNAPSHOT(Acadia)

Source code repository https://github.com/apache/hadoop.git -r 
6ed8593d180fe653f78f0a210478555338c4685a
Compiled by snemuri on 2018-08-08T10:45Z
Compiled with protoc 2.5.0
From source with checksum 5e911cd6412a9c54b463d01211cbae6d

Using HDDS 0.2.1-SNAPSHOT
Source code repository https://github.com/apache/hadoop.git -r 
6ed8593d180fe653f78f0a210478555338c4685a
Compiled by snemuri on 2018-08-08T10:41Z
Compiled with protoc 2.5.0
From source with checksum 6cedbc75c9cd6791685b499cf62a2d

{code}

[~elek], Kindly review.

> Genearate version-info.properties for hadoop and ozone
> --
>
> Key: HDDS-219
> URL: https://issues.apache.org/jira/browse/HDDS-219
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-219.001.patch, HDDS-219.002.patch
>
>
> org.apache.hadoop.util.VersionInfo provides an api to show the actual version 
> information.
> We need to generate hdds-version-info.properties and 
> ozone-version-info.properties as part of the build process(most probably in 
> hdds/common, ozone/common projects)  and print out the available versions in 
> case of 'ozone version' command



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-219) Genearate version-info.properties for hadoop and ozone

2018-08-08 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-219:

Status: Patch Available  (was: In Progress)

> Genearate version-info.properties for hadoop and ozone
> --
>
> Key: HDDS-219
> URL: https://issues.apache.org/jira/browse/HDDS-219
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-219.001.patch, HDDS-219.002.patch
>
>
> org.apache.hadoop.util.VersionInfo provides an api to show the actual version 
> information.
> We need to generate hdds-version-info.properties and 
> ozone-version-info.properties as part of the build process(most probably in 
> hdds/common, ozone/common projects)  and print out the available versions in 
> case of 'ozone version' command



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-219) Genearate version-info.properties for hadoop and ozone

2018-08-08 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-219:

Attachment: HDDS-219.002.patch

> Genearate version-info.properties for hadoop and ozone
> --
>
> Key: HDDS-219
> URL: https://issues.apache.org/jira/browse/HDDS-219
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-219.001.patch, HDDS-219.002.patch
>
>
> org.apache.hadoop.util.VersionInfo provides an api to show the actual version 
> information.
> We need to generate hdds-version-info.properties and 
> ozone-version-info.properties as part of the build process(most probably in 
> hdds/common, ozone/common projects)  and print out the available versions in 
> case of 'ozone version' command



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-219) Genearate version-info.properties for hadoop and ozone

2018-07-27 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-219:

Status: In Progress  (was: Patch Available)

> Genearate version-info.properties for hadoop and ozone
> --
>
> Key: HDDS-219
> URL: https://issues.apache.org/jira/browse/HDDS-219
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-219.001.patch
>
>
> org.apache.hadoop.util.VersionInfo provides an api to show the actual version 
> information.
> We need to generate hdds-version-info.properties and 
> ozone-version-info.properties as part of the build process(most probably in 
> hdds/common, ozone/common projects)  and print out the available versions in 
> case of 'ozone version' command



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-219) Genearate version-info.properties for hadoop and ozone

2018-07-27 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16559725#comment-16559725
 ] 

Sandeep Nemuri commented on HDDS-219:
-

[~anu], I will be updating the v2 patch soon. 

 
 

> Genearate version-info.properties for hadoop and ozone
> --
>
> Key: HDDS-219
> URL: https://issues.apache.org/jira/browse/HDDS-219
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-219.001.patch
>
>
> org.apache.hadoop.util.VersionInfo provides an api to show the actual version 
> information.
> We need to generate hdds-version-info.properties and 
> ozone-version-info.properties as part of the build process(most probably in 
> hdds/common, ozone/common projects)  and print out the available versions in 
> case of 'ozone version' command



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-201) Add name for LeaseManager

2018-07-26 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-201:

Attachment: HDDS-201.002.patch

> Add name for LeaseManager
> -
>
> Key: HDDS-201
> URL: https://issues.apache.org/jira/browse/HDDS-201
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Minor
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-201.001.patch, HDDS-201.002.patch
>
>
> During the review of HDDS-195 we realised that one server could have multiple 
> LeaseManagers (for example one for the watchers, one for the container 
> creation).
> To make it easier to monitor, it would be good to use some specific names for 
> the lease manager.
> This jira is about adding a new field (name) to the lease manager which 
> should be defined by a constructor parameter and should be required.
> It should be used in the name of the Threads and all the log messages 
> (Something like "Starting CommandWatcher LeaseManager")
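
A sketch of the requested change, showing only the name-related parts; the method names and the thread-name format are illustrative, not the committed patch:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LeaseManager<T> {
  private static final Logger LOG =
      LoggerFactory.getLogger(LeaseManager.class);

  private final String name;          // required, e.g. "CommandWatcher"
  private final long defaultTimeout;
  private Thread monitorThread;

  public LeaseManager(String name, long defaultTimeout) {
    this.name = name;
    this.defaultTimeout = defaultTimeout;
  }

  public void start() {
    LOG.info("Starting {} LeaseManager service", name);
    monitorThread = new Thread(this::monitorLeases,
        name + "-LeaseManager#LeaseMonitor");
    monitorThread.start();
  }

  private void monitorLeases() {
    // lease bookkeeping elided; every log line here should carry the name too
  }
}
{code}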



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-201) Add name for LeaseManager

2018-07-26 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16558111#comment-16558111
 ] 

Sandeep Nemuri commented on HDDS-201:
-

Thanks for reviewing the changes [~elek]. 

Attaching [^HDDS-201.002.patch] addressing your comments.

> Add name for LeaseManager
> -
>
> Key: HDDS-201
> URL: https://issues.apache.org/jira/browse/HDDS-201
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Minor
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-201.001.patch, HDDS-201.002.patch
>
>
> During the review of HDDS-195 we realised that one server could have multiple 
> LeaseManagers (for example one for the watchers, one for the container 
> creation).
> To make it easier to monitor, it would be good to use some specific names for 
> the lease manager.
> This jira is about adding a new field (name) to the lease manager which 
> should be defined by a constructor parameter and should be required.
> It should be used in the name of the Threads and all the log messages 
> (Something like "Starting CommandWatcher LeaseManager")



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-165) Add unit test for OzoneHddsDatanodeService

2018-07-26 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri reassigned HDDS-165:
---

Assignee: Sandeep Nemuri

> Add unit test for OzoneHddsDatanodeService
> --
>
> Key: HDDS-165
> URL: https://issues.apache.org/jira/browse/HDDS-165
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie, test
> Fix For: 0.2.1
>
>
> We have to add unit-test for {{OzoneHddsDatanodeService}} class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-201) Add name for LeaseManager

2018-07-25 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-201:

Attachment: HDDS-201.001.patch

> Add name for LeaseManager
> -
>
> Key: HDDS-201
> URL: https://issues.apache.org/jira/browse/HDDS-201
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Minor
>  Labels: newbie
> Attachments: HDDS-201.001.patch
>
>
> During the review of HDDS-195 we realised that one server could have multiple 
> LeaseManagers (for example one for the watchers, one for the container 
> creation).
> To make it easier to monitor, it would be good to use some specific names for 
> the lease manager.
> This jira is about adding a new field (name) to the lease manager which 
> should be defined by a constructor parameter and should be required.
> It should be used in the name of the Threads and all the log messages 
> (Something like "Starting CommandWatcher LeaseManager")



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-201) Add name for LeaseManager

2018-07-25 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16555660#comment-16555660
 ] 

Sandeep Nemuri commented on HDDS-201:
-

[~elek], Attaching the v1 patch. Kindly review.

> Add name for LeaseManager
> -
>
> Key: HDDS-201
> URL: https://issues.apache.org/jira/browse/HDDS-201
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Minor
>  Labels: newbie
> Attachments: HDDS-201.001.patch
>
>
> During the review of HDDS-195 we realised that one server could have multiple 
> LeaseManagers (for example one for the watchers, one for the container 
> creation).
> To make it easier to monitor, it would be good to use some specific names for 
> the lease manager.
> This jira is about adding a new field (name) to the lease manager which 
> should be defined by a constructor parameter and should be required.
> It should be used in the name of the Threads and all the log messages 
> (Something like "Starting CommandWatcher LeaseManager")



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-223) Create acceptance test for using datanode plugin

2018-07-25 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri reassigned HDDS-223:
---

Assignee: Sandeep Nemuri

> Create acceptance test for using datanode plugin
> 
>
> Key: HDDS-223
> URL: https://issues.apache.org/jira/browse/HDDS-223
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
>
> In the current docker-compose files (both in the hadoop-dist and 
> acceptance-test) we use simplified ozone clusters: there is no namenode and 
> we use standalone hdds datanode processes.
> To test ozone/hdds as a datanode plugin we need to create separate 
> acceptance tests which use hadoop:3.1 and hadoop:3.0 + the ozone hdds datanode 
> plugin artifact.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-219) Genearate version-info.properties for hadoop and ozone

2018-07-24 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16554650#comment-16554650
 ] 

Sandeep Nemuri commented on HDDS-219:
-

[~elek], Attaching the v1 patch to print Ozone and HDDS versions.  
[^HDDS-219.001.patch]

Below is the output of `ozone version` command.

 
{code:java}
~/g/hadoop> ./ozone version
Ozone 0.2.1-SNAPSHOT
Source code repository https://github.com/apache/hadoop.git -r 
ff7c2eda34c2c40ad71b50df6462a661bd213fbd
Compiled by snemuri on 2018-07-24T17:42Z
Compiled with protoc 2.5.0
From source with checksum 61bff39be6caa0bb8b3e31be770bf
This command was run using 
/Users/snemuri/git/hadoop/hadoop-dist/target/ozone/share/hadoop/ozone/hadoop-ozone-common-0.2.1-SNAPSHOT.jar
HDDS 0.2.1-SNAPSHOT
Source code repository https://github.com/apache/hadoop.git -r 
ff7c2eda34c2c40ad71b50df6462a661bd213fbd
Compiled by snemuri on 2018-07-24T17:42Z
Compiled with protoc 2.5.0
From source with checksum 25fc3311f5435a42856b49979728fb
This command was run using 
/Users/snemuri/git/hadoop/hadoop-dist/target/ozone/share/hadoop/hdds/hadoop-hdds-common-0.2.1-SNAPSHOT.jar
 
{code}
Added Ozone and HDDS versions to the same command as HDDS doesn't have its own 
CLI. 

We could do some cosmetic changes to the output, something like the Spark 
version output:

 
{code:java}
spark-submit --version
SPARK_MAJOR_VERSION is set to 2, using Spark2
Welcome to
  __
 / __/__ ___ _/ /__
 _\ \/ _ \/ _ `/ __/ '_/
 /___/ .__/\_,_/_/ /_/\_\ version 2.2.0.2.6.4.0-91
 /_/
Using Scala version 2.11.8, Java HotSpot(TM) 64-Bit Server VM, 1.8.0_112
Branch HEAD
Compiled by user jenkins on 2018-01-04T10:31:44Z
Revision a24017869f5450397136ee8b11be818e7cd3facb
Url g...@github.com:hortonworks/spark2.git 
{code}
Let me know your thoughts on this. 
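
For reference, the runtime side can be sketched roughly the way Hadoop's VersionInfo works: load a build-generated properties file from the classpath and expose its fields. This is a rough sketch only; the resource and key names below are illustrative assumptions:

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class ComponentVersionInfo {
  private final Properties info = new Properties();

  public ComponentVersionInfo(String component) {
    // e.g. "ozone-version-info.properties" or "hdds-version-info.properties"
    String resource = component + "-version-info.properties";
    try (InputStream in = Thread.currentThread().getContextClassLoader()
        .getResourceAsStream(resource)) {
      if (in != null) {
        info.load(in);
      }
    } catch (IOException ignored) {
      // fall through: the getters below return "Unknown"
    }
  }

  public String getVersion()   { return info.getProperty("version", "Unknown"); }
  public String getRevision()  { return info.getProperty("revision", "Unknown"); }
  public String getBuildTime() { return info.getProperty("date", "Unknown"); }
}
{code}

The build would fill in the properties file (version, git revision, build date, checksum) at compile time, so the command prints what the artifact was actually built from.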

> Genearate version-info.properties for hadoop and ozone
> --
>
> Key: HDDS-219
> URL: https://issues.apache.org/jira/browse/HDDS-219
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-219.001.patch
>
>
> org.apache.hadoop.util.VersionInfo provides an api to show the actual version 
> information.
> We need to generate hdds-version-info.properties and 
> ozone-version-info.properties as part of the build process(most probably in 
> hdds/common, ozone/common projects)  and print out the available versions in 
> case of 'ozone version' command



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-219) Genearate version-info.properties for hadoop and ozone

2018-07-24 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-219:

Attachment: HDDS-219.001.patch

> Genearate version-info.properties for hadoop and ozone
> --
>
> Key: HDDS-219
> URL: https://issues.apache.org/jira/browse/HDDS-219
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-219.001.patch
>
>
> org.apache.hadoop.util.VersionInfo provides an api to show the actual version 
> information.
> We need to generate hdds-version-info.properties and 
> ozone-version-info.properties as part of the build process(most probably in 
> hdds/common, ozone/common projects)  and print out the available versions in 
> case of 'ozone version' command



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-243) Update docs to reflect the new name for Ozone Manager

2018-07-24 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16554345#comment-16554345
 ] 

Sandeep Nemuri commented on HDDS-243:
-

[~anu], I don't see any reference for KSM in the codebase. Could you please 
point me to the docs where we have KSM? 

> Update docs to reflect the new name for Ozone Manager
> -
>
> Key: HDDS-243
> URL: https://issues.apache.org/jira/browse/HDDS-243
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
>
> KSM has been renamed to Ozone Manager. The docs still refer to KSM. This JIRA 
> is to track that docs issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-264) 'oz' subcommand reference is not present in 'ozone' command help

2018-07-19 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16549676#comment-16549676
 ] 

Sandeep Nemuri commented on HDDS-264:
-

Fixed the typo. Please review. 

> 'oz' subcommand reference is not present in 'ozone' command help
> 
>
> Key: HDDS-264
> URL: https://issues.apache.org/jira/browse/HDDS-264
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Sandeep Nemuri
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-264.001.patch
>
>
> 'oz' subcommand is not present in ozone help.
>  
> ozone help:
> 
>  
> {noformat}
> hadoop@8ceb8dfccb36:~/bin$ ./ozone
> Usage: ozone [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
> OPTIONS is none or any of:
> --buildpaths attempt to add class files from build tree
> --config dir Hadoop config directory
> --daemon (start|status|stop) operate on a daemon
> --debug turn on shell script debug mode
> --help usage information
> --hostnames list[,of,host,names] hosts to use in worker mode
> --hosts filename list of hosts to use in worker mode
> --loglevel level set the log4j level for this command
> --workers turn on worker mode
> SUBCOMMAND is one of:
>  Admin Commands:
> jmxget get JMX exported values from NameNode or DataNode.
> Client Commands:
> classpath prints the class path needed to get the hadoop jar and the
>  required libraries
> envvars display computed Hadoop environment variables
> freon runs an ozone data generator
> genconf generate minimally required ozone configs and output to
>  ozone-site.xml in specified path
> genesis runs a collection of ozone benchmarks to help with tuning.
> getozoneconf get ozone config values from configuration
> noz ozone debug tool, convert ozone metadata into relational data
> o3 command line interface for ozone
> scmcli run the CLI of the Storage Container Manager
> version print the version
> Daemon Commands:
> datanode run a HDDS datanode
> om Ozone Manager
> scm run the Storage Container Manager service
> SUBCOMMAND may print help when invoked w/o parameters or with -h.
> {noformat}
>  
> 'oz' subcommand example :
> 
>  
> {noformat}
> hadoop@8ceb8dfccb36:~/bin$ ./ozone oz -listVolume /
> 2018-07-19 14:51:25 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> [ {
>  "owner" : {
>  "name" : "hadoop"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "vol-0-01597",
>  "createdOn" : "Sat, 20 Feb +50517 10:11:35 GMT",
>  "createdBy" : "hadoop"
> }, {
>  "owner" : {
>  "name" : "hadoop"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "vol-0-19478",
>  "createdOn" : "Thu, 03 Jun +50517 22:23:12 GMT",
>  "createdBy" : "hadoop"
> }, {
>  "owner" : {
>  "name" : "hadoop"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  }
>  
> {noformat}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-264) 'oz' subcommand reference is not present in 'ozone' command help

2018-07-19 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-264:

Attachment: HDDS-264.001.patch

> 'oz' subcommand reference is not present in 'ozone' command help
> 
>
> Key: HDDS-264
> URL: https://issues.apache.org/jira/browse/HDDS-264
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Sandeep Nemuri
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-264.001.patch
>
>
> 'oz' subcommand is not present in ozone help.
>  
> ozone help:
> 
>  
> {noformat}
> hadoop@8ceb8dfccb36:~/bin$ ./ozone
> Usage: ozone [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
> OPTIONS is none or any of:
> --buildpaths attempt to add class files from build tree
> --config dir Hadoop config directory
> --daemon (start|status|stop) operate on a daemon
> --debug turn on shell script debug mode
> --help usage information
> --hostnames list[,of,host,names] hosts to use in worker mode
> --hosts filename list of hosts to use in worker mode
> --loglevel level set the log4j level for this command
> --workers turn on worker mode
> SUBCOMMAND is one of:
>  Admin Commands:
> jmxget get JMX exported values from NameNode or DataNode.
> Client Commands:
> classpath prints the class path needed to get the hadoop jar and the
>  required libraries
> envvars display computed Hadoop environment variables
> freon runs an ozone data generator
> genconf generate minimally required ozone configs and output to
>  ozone-site.xml in specified path
> genesis runs a collection of ozone benchmarks to help with tuning.
> getozoneconf get ozone config values from configuration
> noz ozone debug tool, convert ozone metadata into relational data
> o3 command line interface for ozone
> scmcli run the CLI of the Storage Container Manager
> version print the version
> Daemon Commands:
> datanode run a HDDS datanode
> om Ozone Manager
> scm run the Storage Container Manager service
> SUBCOMMAND may print help when invoked w/o parameters or with -h.
> {noformat}
>  
> 'oz' subcommand example :
> 
>  
> {noformat}
> hadoop@8ceb8dfccb36:~/bin$ ./ozone oz -listVolume /
> 2018-07-19 14:51:25 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> [ {
>  "owner" : {
>  "name" : "hadoop"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "vol-0-01597",
>  "createdOn" : "Sat, 20 Feb +50517 10:11:35 GMT",
>  "createdBy" : "hadoop"
> }, {
>  "owner" : {
>  "name" : "hadoop"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "vol-0-19478",
>  "createdOn" : "Thu, 03 Jun +50517 22:23:12 GMT",
>  "createdBy" : "hadoop"
> }, {
>  "owner" : {
>  "name" : "hadoop"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  }
>  
> {noformat}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-216) hadoop-hdds unit tests should use randomized ports

2018-07-19 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-216:

Attachment: HDDS-216.002.patch

> hadoop-hdds unit tests should use randomized ports
> --
>
> Key: HDDS-216
> URL: https://issues.apache.org/jira/browse/HDDS-216
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie, test
> Fix For: 0.2.1
>
> Attachments: HDDS-216.001.patch, HDDS-216.002.patch
>
>
> MiniOzoneCluster should use randomized ports by default, so individual tests 
> don't have to do anything to avoid port conflicts at runtime. e.g. 
> TestStorageContainerManagerHttpServer fails if port 9876 is in use.
> {code}
> [INFO] Running 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.084 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] 
> testHttpPolicy[0](org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer)
>   Time elapsed: 0.401 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
> {code}
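
A common approach, sketched below under the assumption that a helper like this is acceptable: bind to port 0 so the OS picks a free ephemeral port. Tests can then feed the picked port (or, where the server supports it, "0.0.0.0:0" directly) into the HTTP address configuration instead of relying on the 9876 default. Note the small race window between releasing the socket and the test rebinding the port:

{code:java}
import java.io.IOException;
import java.net.ServerSocket;

public final class PortUtil {
  private PortUtil() {
  }

  /**
   * Ask the OS for a free ephemeral port. A small race window remains
   * between closing the socket and the caller binding the port again.
   */
  public static int getFreePort() throws IOException {
    try (ServerSocket socket = new ServerSocket(0)) {
      return socket.getLocalPort();
    }
  }
}
{code}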



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-216) hadoop-hdds unit tests should use randomized ports

2018-07-19 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16549673#comment-16549673
 ] 

Sandeep Nemuri commented on HDDS-216:
-

Thanks for the review [~bharatviswa] and [~nandakumar131].

Attaching the v2 patch addressing the comments. 

> hadoop-hdds unit tests should use randomized ports
> --
>
> Key: HDDS-216
> URL: https://issues.apache.org/jira/browse/HDDS-216
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie, test
> Fix For: 0.2.1
>
> Attachments: HDDS-216.001.patch, HDDS-216.002.patch
>
>
> MiniOzoneCluster should use randomized ports by default, so individual tests 
> don't have to do anything to avoid port conflicts at runtime. e.g. 
> TestStorageContainerManagerHttpServer fails if port 9876 is in use.
> {code}
> [INFO] Running 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.084 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] 
> testHttpPolicy[0](org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer)
>   Time elapsed: 0.401 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-264) 'oz' subcommand reference is not present in 'ozone' command help

2018-07-19 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri reassigned HDDS-264:
---

Assignee: Sandeep Nemuri

> 'oz' subcommand reference is not present in 'ozone' command help
> 
>
> Key: HDDS-264
> URL: https://issues.apache.org/jira/browse/HDDS-264
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Sandeep Nemuri
>Priority: Minor
> Fix For: 0.2.1
>
>
> 'oz' subcommand is not present in ozone help.
>  
> ozone help:
> 
>  
> {noformat}
> hadoop@8ceb8dfccb36:~/bin$ ./ozone
> Usage: ozone [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
> OPTIONS is none or any of:
> --buildpaths attempt to add class files from build tree
> --config dir Hadoop config directory
> --daemon (start|status|stop) operate on a daemon
> --debug turn on shell script debug mode
> --help usage information
> --hostnames list[,of,host,names] hosts to use in worker mode
> --hosts filename list of hosts to use in worker mode
> --loglevel level set the log4j level for this command
> --workers turn on worker mode
> SUBCOMMAND is one of:
>  Admin Commands:
> jmxget get JMX exported values from NameNode or DataNode.
> Client Commands:
> classpath prints the class path needed to get the hadoop jar and the
>  required libraries
> envvars display computed Hadoop environment variables
> freon runs an ozone data generator
> genconf generate minimally required ozone configs and output to
>  ozone-site.xml in specified path
> genesis runs a collection of ozone benchmarks to help with tuning.
> getozoneconf get ozone config values from configuration
> noz ozone debug tool, convert ozone metadata into relational data
> o3 command line interface for ozone
> scmcli run the CLI of the Storage Container Manager
> version print the version
> Daemon Commands:
> datanode run a HDDS datanode
> om Ozone Manager
> scm run the Storage Container Manager service
> SUBCOMMAND may print help when invoked w/o parameters or with -h.
> {noformat}
>  
> 'oz' subcommand example :
> 
>  
> {noformat}
> hadoop@8ceb8dfccb36:~/bin$ ./ozone oz -listVolume /
> 2018-07-19 14:51:25 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> [ {
>  "owner" : {
>  "name" : "hadoop"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "vol-0-01597",
>  "createdOn" : "Sat, 20 Feb +50517 10:11:35 GMT",
>  "createdBy" : "hadoop"
> }, {
>  "owner" : {
>  "name" : "hadoop"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "vol-0-19478",
>  "createdOn" : "Thu, 03 Jun +50517 22:23:12 GMT",
>  "createdBy" : "hadoop"
> }, {
>  "owner" : {
>  "name" : "hadoop"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  }
>  
> {noformat}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-221) Create acceptance test to test ./start-all.sh for ozone/hdds

2018-07-15 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri reassigned HDDS-221:
---

Assignee: Sandeep Nemuri

> Create acceptance test to test ./start-all.sh for ozone/hdds
> 
>
> Key: HDDS-221
> URL: https://issues.apache.org/jira/browse/HDDS-221
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
> Fix For: 0.2.1
>
>
> Usually we use the 'ozone' shell command to test our ozone/hdds cluster.
> We need to create different acceptance test compose files to test the 
> ./start-all.sh and ./hadoop-daemon.sh functionality.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-219) Genearate version-info.properties for hadoop and ozone

2018-07-15 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri reassigned HDDS-219:
---

Assignee: Sandeep Nemuri

> Genearate version-info.properties for hadoop and ozone
> --
>
> Key: HDDS-219
> URL: https://issues.apache.org/jira/browse/HDDS-219
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
>
> org.apache.hadoop.util.VersionInfo provides an api to show the actual version 
> information.
> We need to generate hdds-version-info.properties and 
> ozone-version-info.properties as part of the build process(most probably in 
> hdds/common, ozone/common projects)  and print out the available versions in 
> case of 'ozone version' command



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-243) Update docs to reflect the new name for Ozone Manager

2018-07-15 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri reassigned HDDS-243:
---

Assignee: Sandeep Nemuri

> Update docs to reflect the new name for Ozone Manager
> -
>
> Key: HDDS-243
> URL: https://issues.apache.org/jira/browse/HDDS-243
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
>
> KSM has been renamed to Ozone Manager. The docs still refer to KSM. This JIRA 
> is to track that docs issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-216) hadoop-hdds unit tests should use randomized ports

2018-07-15 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16544545#comment-16544545
 ] 

Sandeep Nemuri commented on HDDS-216:
-

[~nandakumar131], [^HDDS-216.001.patch] has the above-mentioned changes. 
Please review. 

> hadoop-hdds unit tests should use randomized ports
> --
>
> Key: HDDS-216
> URL: https://issues.apache.org/jira/browse/HDDS-216
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie, test
> Fix For: 0.2.1
>
> Attachments: HDDS-216.001.patch
>
>
> MiniOzoneCluster should use randomized ports by default, so individual tests 
> don't have to do anything to avoid port conflicts at runtime. e.g. 
> TestStorageContainerManagerHttpServer fails if port 9876 is in use.
> {code}
> [INFO] Running 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.084 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] 
> testHttpPolicy[0](org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer)
>   Time elapsed: 0.401 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-216) hadoop-hdds unit tests should use randomized ports

2018-07-15 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-216:

Attachment: HDDS-216.001.patch

> hadoop-hdds unit tests should use randomized ports
> --
>
> Key: HDDS-216
> URL: https://issues.apache.org/jira/browse/HDDS-216
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie, test
> Fix For: 0.2.1
>
> Attachments: HDDS-216.001.patch
>
>
> MiniOzoneCluster should use randomized ports by default, so individual tests
> don't have to do anything to avoid port conflicts at runtime. For example,
> TestStorageContainerManagerHttpServer fails if port 9876 is in use.
> {code}
> [INFO] Running 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.084 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] 
> testHttpPolicy[0](org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer)
>   Time elapsed: 0.401 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-255) TestOzoneConfigurationFields is failing as it's not able to find hdds.command.status.report.interval in config classes

2018-07-15 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16544529#comment-16544529
 ] 

Sandeep Nemuri commented on HDDS-255:
-

[~nandakumar131], Attaching the patch with the necessary changes. Please review.

> TestOzoneConfigurationFields is failing as it's not able to find 
> hdds.command.status.report.interval in config classes 
> ---
>
> Key: HDDS-255
> URL: https://issues.apache.org/jira/browse/HDDS-255
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Minor
>  Labels: newbie
> Attachments: HDDS-255.001.patch
>
>
> {{TestOzoneConfigurationFields}} is failing with the below error
> {noformat}
> TestOzoneConfigurationFields>TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass:540
>  
> ozone-default.xml has 1 properties missing in  class 
> org.apache.hadoop.ozone.OzoneConfigKeys  
> class org.apache.hadoop.hdds.scm.ScmConfigKeys  class 
> org.apache.hadoop.ozone.om.OMConfigKeys 
> Entries:   hdds.command.status.report.interval expected:<0> but was:<1>
> {noformat}
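>
> The usual fix for this class of failure is to declare the key in one of the
> config classes that the test scans, matching the entry in ozone-default.xml.
> A sketch (the class placement is illustrative, not the committed fix):
> {code:java}
> public final class HddsConfigKeysSketch {
>   private HddsConfigKeysSketch() { }
>
>   // Must match the property name declared in ozone-default.xml.
>   public static final String HDDS_COMMAND_STATUS_REPORT_INTERVAL =
>       "hdds.command.status.report.interval";
> }
> {code}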



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-255) TestOzoneConfigurationFields is failing as it's not able to find hdds.command.status.report.interval in config classes

2018-07-15 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-255:

Attachment: HDDS-255.001.patch

> TestOzoneConfigurationFields is failing as it's not able to find 
> hdds.command.status.report.interval in config classes 
> ---
>
> Key: HDDS-255
> URL: https://issues.apache.org/jira/browse/HDDS-255
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Minor
>  Labels: newbie
> Attachments: HDDS-255.001.patch
>
>
> {{TestOzoneConfigurationFields}} is failing with the below error
> {noformat}
> TestOzoneConfigurationFields>TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass:540
>  
> ozone-default.xml has 1 properties missing in  class 
> org.apache.hadoop.ozone.OzoneConfigKeys  
> class org.apache.hadoop.hdds.scm.ScmConfigKeys  class 
> org.apache.hadoop.ozone.om.OMConfigKeys 
> Entries:   hdds.command.status.report.interval expected:<0> but was:<1>
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-255) TestOzoneConfigurationFields is failing as it's not able to find hdds.command.status.report.interval in config classes

2018-07-13 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri reassigned HDDS-255:
---

Assignee: Sandeep Nemuri

> TestOzoneConfigurationFields is failing as it's not able to find 
> hdds.command.status.report.interval in config classes 
> ---
>
> Key: HDDS-255
> URL: https://issues.apache.org/jira/browse/HDDS-255
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Minor
>  Labels: newbie
>
> {{TestOzoneConfigurationFields}} is failing with the below error
> {noformat}
> TestOzoneConfigurationFields>TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass:540
>  ozone-default.xml has 1 properties missing in  class 
> org.apache.hadoop.ozone.OzoneConfigKeys  class 
> org.apache.hadoop.hdds.scm.ScmConfigKeys  class 
> org.apache.hadoop.ozone.om.OMConfigKeys Entries:   
> hdds.command.status.report.interval expected:<0> but was:<1>
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-94) Change ozone datanode command to start the standalone datanode plugin

2018-06-27 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-94?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524804#comment-16524804
 ] 

Sandeep Nemuri commented on HDDS-94:


Thanks for fixing the config, [~elek]. I had some trouble running the
acceptance tests on my Mac.

> Change ozone datanode command to start the standalone datanode plugin
> -
>
> Key: HDDS-94
> URL: https://issues.apache.org/jira/browse/HDDS-94
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-94.001.patch, HDDS-94.002.patch, HDDS-94.003.patch, 
> HDDS-94.004.patch, HDDS-94.005.patch
>
>
> The current ozone datanode command starts the regular hdfs datanode with an 
> enabled HddsDatanodeService as a datanode plugin.
> The goal is to start only HddsDatanodeService.java (the main function is
> already there, but GenericOptionsParser should be adopted).
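>
> A minimal sketch of what adopting GenericOptionsParser in the entry point
> could look like; startService is a hypothetical stand-in for whatever
> HddsDatanodeService exposes to start the standalone plugin:
> {code:java}
> import java.io.IOException;
> import org.apache.hadoop.hdds.conf.OzoneConfiguration;
> import org.apache.hadoop.util.GenericOptionsParser;
>
> public final class StandaloneDatanodeStarter {
>   public static void main(String[] args) throws IOException {
>     // GenericOptionsParser consumes the standard Hadoop flags (-conf, -D,
>     // ...) and leaves the rest for the application.
>     OzoneConfiguration conf = new OzoneConfiguration();
>     String[] remaining = new GenericOptionsParser(conf, args).getRemainingArgs();
>     startService(conf, remaining);
>   }
>
>   // Hypothetical helper: start only the standalone HDDS datanode service,
>   // without the regular HDFS datanode.
>   private static void startService(OzoneConfiguration conf, String[] args) {
>     // ...
>   }
> }
> {code}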



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-94) Change ozone datanode command to start the standalone datanode plugin

2018-06-25 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-94?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522732#comment-16522732
 ] 

Sandeep Nemuri commented on HDDS-94:


[^HDDS-94.004.patch] updated changes for new docker files (HDDS-177)

> Change ozone datanode command to start the standalone datanode plugin
> -
>
> Key: HDDS-94
> URL: https://issues.apache.org/jira/browse/HDDS-94
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-94.001.patch, HDDS-94.002.patch, HDDS-94.003.patch, 
> HDDS-94.004.patch
>
>
> The current ozone datanode command starts the regular hdfs datanode with an 
> enabled HddsDatanodeService as a datanode plugin.
> The goal is to start only HddsDatanodeService.java (the main function is
> already there, but GenericOptionsParser should be adopted).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-94) Change ozone datanode command to start the standalone datanode plugin

2018-06-25 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-94?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-94:
---
Attachment: HDDS-94.004.patch

> Change ozone datanode command to start the standalone datanode plugin
> -
>
> Key: HDDS-94
> URL: https://issues.apache.org/jira/browse/HDDS-94
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-94.001.patch, HDDS-94.002.patch, HDDS-94.003.patch, 
> HDDS-94.004.patch
>
>
> The current ozone datanode command starts the regular hdfs datanode with an 
> enabled HddsDatanodeService as a datanode plugin.
> The goal is to start only HddsDatanodeService.java (the main function is
> already there, but GenericOptionsParser should be adopted).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-94) Change ozone datanode command to start the standalone datanode plugin

2018-06-25 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-94?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-94:
---
Attachment: HDDS-94.003.patch

> Change ozone datanode command to start the standalone datanode plugin
> -
>
> Key: HDDS-94
> URL: https://issues.apache.org/jira/browse/HDDS-94
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-94.001.patch, HDDS-94.002.patch, HDDS-94.003.patch
>
>
> The current ozone datanode command starts the regular hdfs datanode with an 
> enabled HddsDatanodeService as a datanode plugin.
> The goal is to start only HddsDatanodeService.java (the main function is
> already there, but GenericOptionsParser should be adopted).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


