[jira] [Assigned] (HDFS-14413) HA Support for Dynamometer

2019-07-29 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su reassigned HDFS-14413:
---

Assignee: kevin su

> HA Support for Dynamometer
> --
>
> Key: HDFS-14413
> URL: https://issues.apache.org/jira/browse/HDFS-14413
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Erik Krogen
>Assignee: kevin su
>Priority: Major
>
> It would be nice if Dynamometer could handle spinning up a full 2 NN + 3 QJM 
> cluster instead of just a single NN






[jira] [Assigned] (HDFS-14412) Enable Dynamometer to use the local build of Hadoop by default

2019-07-29 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su reassigned HDFS-14412:
---

Assignee: kevin su

> Enable Dynamometer to use the local build of Hadoop by default
> --
>
> Key: HDFS-14412
> URL: https://issues.apache.org/jira/browse/HDFS-14412
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Erik Krogen
>Assignee: kevin su
>Priority: Major
>
> Currently, by default, Dynamometer will download a Hadoop tarball from the 
> internet to use as the Hadoop version-under-test. Since it is bundled inside 
> of Hadoop now, it would make more sense for it to use the current version of 
> Hadoop by default.






[jira] [Commented] (HDFS-14281) Dynamometer Phase 2

2019-07-31 Thread kevin su (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16897563#comment-16897563
 ] 

kevin su commented on HDFS-14281:
-

FYI [~xkrogen]

Dynamometer was renamed to hadoop-dynamometer under hadoop-tools, but the scripts still reference the old jar name:
{code:java}
"$hadoop_cmd" jar "${script_pwd}"/lib/dynamometer-infra-*.jar 
org.apache.hadoop.tools.dynamometer.Client "$@"
{code}
We should update the jar names inside the scripts.
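
A minimal sketch of the expected change (assuming the renamed module produces jars named hadoop-dynamometer-infra-*, as the artifact used later in this thread suggests):
{code}
# sketch: point the launcher at the renamed artifact
"$hadoop_cmd" jar "${script_pwd}"/lib/hadoop-dynamometer-infra-*.jar \
    org.apache.hadoop.tools.dynamometer.Client "$@"
{code}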

> Dynamometer Phase 2
> ---
>
> Key: HDFS-14281
> URL: https://issues.apache.org/jira/browse/HDFS-14281
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: test, tools
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Phase 1: HDFS-12345
> This is the Phase 2 umbrella jira.






[jira] [Created] (HDDS-1912) start-ozone.sh fail due to ozone-config.sh not found

2019-08-05 Thread kevin su (JIRA)
kevin su created HDDS-1912:
--

 Summary: start-ozone.sh fail due to ozone-config.sh not found 
 Key: HDDS-1912
 URL: https://issues.apache.org/jira/browse/HDDS-1912
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone CLI
Affects Versions: 0.5.0
Reporter: kevin su
 Fix For: 0.5.0


I want to run Ozone on its own, but start-ozone.sh always looks for ozone-config.sh under *$HADOOP_HOME*/libexec first.

If the file is not found there, the script fails.

We should look for this file in both *$HADOOP_HOME*/libexec and *$OZONE_HOME*/libexec.
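
A rough sketch of the intended lookup, assuming start-ozone.sh can rely on HADOOP_HOME and OZONE_HOME being set (the real script layout may differ):
{code}
# Prefer ozone-config.sh from $HADOOP_HOME/libexec, but fall back to
# $OZONE_HOME/libexec so Ozone can run on its own.
if [[ -f "${HADOOP_HOME}/libexec/ozone-config.sh" ]]; then
  . "${HADOOP_HOME}/libexec/ozone-config.sh"
elif [[ -f "${OZONE_HOME}/libexec/ozone-config.sh" ]]; then
  . "${OZONE_HOME}/libexec/ozone-config.sh"
else
  echo "ERROR: ozone-config.sh not found in either libexec directory" >&2
  exit 1
fi
{code}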






[jira] [Updated] (HDDS-1912) start-ozone.sh fail due to ozone-config.sh not found

2019-08-05 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HDDS-1912:
---
Description: 
I want to run Ozone on its own, but start-ozone.sh always looks for ozone-config.sh under *$HADOOP_HOME*/libexec first.

If the file is not found there, the script fails.

We should look for this file in both *$HADOOP_HOME*/libexec and *$OZONE_HOME*/libexec.

  was:
I want to run Ozone individually,but it will always find start-ozone.sh in the 
*$HAOOP_HOME/*libexec firstly

If file not found, it will fail

We should find this file in the both *$HADOOP_HOME* and *$OZONE_HOME*/libexec


> start-ozone.sh fail due to ozone-config.sh not found 
> -
>
> Key: HDDS-1912
> URL: https://issues.apache.org/jira/browse/HDDS-1912
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI
>Affects Versions: 0.5.0
>Reporter: kevin su
>Priority: Major
> Fix For: 0.5.0
>
>
> I want to run Ozone on its own, but start-ozone.sh always looks for ozone-config.sh
> under *$HADOOP_HOME*/libexec first.
> If the file is not found there, the script fails.
> We should look for this file in both *$HADOOP_HOME*/libexec and *$OZONE_HOME*/libexec






[jira] [Updated] (HDDS-1912) start-ozone.sh fail due to ozone-config.sh not found

2019-08-05 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HDDS-1912:
---
Attachment: HDDS-1912.001.patch
Status: Patch Available  (was: Open)

> start-ozone.sh fail due to ozone-config.sh not found 
> -
>
> Key: HDDS-1912
> URL: https://issues.apache.org/jira/browse/HDDS-1912
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI
>Affects Versions: 0.5.0
>Reporter: kevin su
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1912.001.patch
>
>
> I want to run Ozone on its own, but start-ozone.sh always looks for ozone-config.sh
> under *$HADOOP_HOME*/libexec first.
> If the file is not found there, the script fails.
> We should look for this file in both *$HADOOP_HOME*/libexec and *$OZONE_HOME*/libexec






[jira] [Commented] (HDDS-1912) start-ozone.sh fail due to ozone-config.sh not found

2019-08-06 Thread kevin su (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900785#comment-16900785
 ] 

kevin su commented on HDDS-1912:


[~elek] The assignee is not me; there is another person with the same name.

{quote}I have some strange feeling about the patch: it doesn't only include the 
good ozone-config.sh, but also sets the HADOOP_HOME and HADOOP_LIBEXEC_DIR. As 
you already have a hadoop install it can be confusing.
{quote}

Agreed, it is a little confusing; we could improve this patch.

> start-ozone.sh fail due to ozone-config.sh not found 
> -
>
> Key: HDDS-1912
> URL: https://issues.apache.org/jira/browse/HDDS-1912
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI
>Affects Versions: 0.5.0
>Reporter: kevin su
>Assignee: Kevin Su
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1912.001.patch
>
>
> I want to run Ozone on its own, but start-ozone.sh always looks for ozone-config.sh
> under *$HADOOP_HOME*/libexec first.
> If the file is not found there, the script fails.
> We should look for this file in both *$HADOOP_HOME*/libexec and *$OZONE_HOME*/libexec






[jira] [Assigned] (HDDS-1919) Fix Javadoc in TestAuditParser

2019-08-06 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su reassigned HDDS-1919:
--

Assignee: kevin su

> Fix Javadoc in TestAuditParser
> --
>
> Key: HDDS-1919
> URL: https://issues.apache.org/jira/browse/HDDS-1919
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Dinesh Chitlangia
>Assignee: kevin su
>Priority: Minor
>  Labels: newbie
>
> The Javadoc for TestAuditParser mentions an incorrect class name.
> {code:java}
> /**
>  * Tests GenerateOzoneRequiredConfigurations.
>  */
> {code}
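
The fix is presumably just to reference the right class in the comment, along the lines of (exact wording is up to the patch):
{code:java}
/**
 * Tests AuditParser.
 */
{code}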






[jira] [Updated] (HDDS-1919) Fix Javadoc in TestAuditParser

2019-08-06 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HDDS-1919:
---
Status: Patch Available  (was: Open)

> Fix Javadoc in TestAuditParser
> --
>
> Key: HDDS-1919
> URL: https://issues.apache.org/jira/browse/HDDS-1919
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Dinesh Chitlangia
>Assignee: kevin su
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Javadoc for TestAuditParser mentions an incorrect class name.
> {code:java}
> /**
>  * Tests GenerateOzoneRequiredConfigurations.
>  */
> {code}






[jira] [Created] (HDFS-14717) Junit not found in hadoop-dynamometer-infra

2019-08-09 Thread kevin su (JIRA)
kevin su created HDFS-14717:
---

 Summary: Junit not found in hadoop-dynamometer-infra
 Key: HDFS-14717
 URL: https://issues.apache.org/jira/browse/HDFS-14717
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: kevin su


{code}
hadoop jar hadoop-dynamometer-infra-3.3.0-SNAPSHOT.jar 
org.apache.hadoop.tools.dynamometer.Client
{code}
{code}
Exception in thread "main" java.lang.NoClassDefFoundError: org/junit/Assert
 at org.apache.hadoop.tools.dynamometer.Client.main(Client.java:256)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
Caused by: java.lang.ClassNotFoundException: org.junit.Assert
 at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
 ... 7 more{code}






[jira] [Updated] (HDFS-14717) Junit not found in hadoop-dynamometer-infra

2019-08-09 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HDFS-14717:

Description: 
{code:java}
$ hadoop jar hadoop-dynamometer-infra-3.3.0-SNAPSHOT.jar 
org.apache.hadoop.tools.dynamometer.Client
{code}
{code:java}
Exception in thread "main" java.lang.NoClassDefFoundError: org/junit/Assert
 at org.apache.hadoop.tools.dynamometer.Client.main(Client.java:256)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
Caused by: java.lang.ClassNotFoundException: org.junit.Assert
 at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
 ... 7 more{code}

  was:
{code}
hadoop jar hadoop-dynamometer-infra-3.3.0-SNAPSHOT.jar 
org.apache.hadoop.tools.dynamometer.Client
{code}
{code}
Exception in thread "main" java.lang.NoClassDefFoundError: org/junit/Assert
 at org.apache.hadoop.tools.dynamometer.Client.main(Client.java:256)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
Caused by: java.lang.ClassNotFoundException: org.junit.Assert
 at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
 ... 7 more{code}


> Junit not found in hadoop-dynamometer-infra
> ---
>
> Key: HDFS-14717
> URL: https://issues.apache.org/jira/browse/HDFS-14717
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: kevin su
>Priority: Major
>
> {code:java}
> $ hadoop jar hadoop-dynamometer-infra-3.3.0-SNAPSHOT.jar 
> org.apache.hadoop.tools.dynamometer.Client
> {code}
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: org/junit/Assert
>  at org.apache.hadoop.tools.dynamometer.Client.main(Client.java:256)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
>  at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
> Caused by: java.lang.ClassNotFoundException: org.junit.Assert
>  at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>  ... 7 more{code}






[jira] [Assigned] (HDFS-14717) Junit not found in hadoop-dynamometer-infra

2019-08-10 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su reassigned HDFS-14717:
---

Assignee: kevin su

> Junit not found in hadoop-dynamometer-infra
> ---
>
> Key: HDFS-14717
> URL: https://issues.apache.org/jira/browse/HDFS-14717
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: kevin su
>Assignee: kevin su
>Priority: Major
>
> {code:java}
> $ hadoop jar hadoop-dynamometer-infra-3.3.0-SNAPSHOT.jar 
> org.apache.hadoop.tools.dynamometer.Client
> {code}
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: org/junit/Assert
>  at org.apache.hadoop.tools.dynamometer.Client.main(Client.java:256)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
>  at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
> Caused by: java.lang.ClassNotFoundException: org.junit.Assert
>  at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>  ... 7 more{code}






[jira] [Updated] (HDFS-14717) Junit not found in hadoop-dynamometer-infra

2019-08-10 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HDFS-14717:

Attachment: HDFS-14717.001.patch

> Junit not found in hadoop-dynamometer-infra
> ---
>
> Key: HDFS-14717
> URL: https://issues.apache.org/jira/browse/HDFS-14717
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: kevin su
>Assignee: kevin su
>Priority: Major
> Attachments: HDFS-14717.001.patch
>
>
> {code:java}
> $ hadoop jar hadoop-dynamometer-infra-3.3.0-SNAPSHOT.jar 
> org.apache.hadoop.tools.dynamometer.Client
> {code}
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: org/junit/Assert
>  at org.apache.hadoop.tools.dynamometer.Client.main(Client.java:256)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
>  at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
> Caused by: java.lang.ClassNotFoundException: org.junit.Assert
>  at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>  ... 7 more{code}






[jira] [Commented] (HDFS-14717) Junit not found in hadoop-dynamometer-infra

2019-08-10 Thread kevin su (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904361#comment-16904361
 ] 

kevin su commented on HDFS-14717:
-

[~jojochuang] [~smeng] Thanks for your reply 

This is because _*ClassUtil.findContainingJar(Assert.class)*_ cannot find JUnit, since JUnit lives in a different directory.

I also found that JUnit is already packaged into _*hadoop-dynamometer-infra-3.3.0-SNAPSHOT.jar*_.

So after removing the _*ClassUtil.findContainingJar(Assert.class)*_ call, it works.
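
For illustration, a minimal standalone reproduction of the failing lookup (hypothetical class, not the actual Client.java code):
{code:java}
import org.apache.hadoop.util.ClassUtil;
import org.junit.Assert;

// Hypothetical repro: referencing org.junit.Assert here triggers class loading,
// so running this without junit on the classpath fails with the same
// NoClassDefFoundError: org/junit/Assert shown in the stack trace above.
public class JunitJarLookup {
  public static void main(String[] args) {
    System.out.println(ClassUtil.findContainingJar(Assert.class));
  }
}
{code}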

> Junit not found in hadoop-dynamometer-infra
> ---
>
> Key: HDFS-14717
> URL: https://issues.apache.org/jira/browse/HDFS-14717
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: kevin su
>Assignee: kevin su
>Priority: Major
> Attachments: HDFS-14717.001.patch
>
>
> {code:java}
> $ hadoop jar hadoop-dynamometer-infra-3.3.0-SNAPSHOT.jar 
> org.apache.hadoop.tools.dynamometer.Client
> {code}
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: org/junit/Assert
>  at org.apache.hadoop.tools.dynamometer.Client.main(Client.java:256)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
>  at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
> Caused by: java.lang.ClassNotFoundException: org.junit.Assert
>  at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>  ... 7 more{code}






[jira] [Updated] (HDFS-14717) Junit not found in hadoop-dynamometer-infra

2019-08-10 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HDFS-14717:

Status: Patch Available  (was: Open)

> Junit not found in hadoop-dynamometer-infra
> ---
>
> Key: HDFS-14717
> URL: https://issues.apache.org/jira/browse/HDFS-14717
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: kevin su
>Assignee: kevin su
>Priority: Major
> Attachments: HDFS-14717.001.patch
>
>
> {code:java}
> $ hadoop jar hadoop-dynamometer-infra-3.3.0-SNAPSHOT.jar 
> org.apache.hadoop.tools.dynamometer.Client
> {code}
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: org/junit/Assert
>  at org.apache.hadoop.tools.dynamometer.Client.main(Client.java:256)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
>  at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
> Caused by: java.lang.ClassNotFoundException: org.junit.Assert
>  at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>  ... 7 more{code}






[jira] [Assigned] (HDDS-165) Add unit test for OzoneHddsDatanodeService

2019-08-11 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su reassigned HDDS-165:
-

Assignee: kevin su

> Add unit test for OzoneHddsDatanodeService
> --
>
> Key: HDDS-165
> URL: https://issues.apache.org/jira/browse/HDDS-165
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Nanda kumar
>Assignee: kevin su
>Priority: Major
>  Labels: newbie, test
>
> We have to add a unit test for the {{OzoneHddsDatanodeService}} class.






[jira] [Updated] (HDFS-14717) Junit not found in hadoop-dynamometer-infra

2019-08-12 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HDFS-14717:

Attachment: HDFS-14717.002.patch

> Junit not found in hadoop-dynamometer-infra
> ---
>
> Key: HDFS-14717
> URL: https://issues.apache.org/jira/browse/HDFS-14717
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: kevin su
>Assignee: kevin su
>Priority: Major
> Attachments: HDFS-14717.001.patch, HDFS-14717.002.patch
>
>
> {code:java}
> $ hadoop jar hadoop-dynamometer-infra-3.3.0-SNAPSHOT.jar 
> org.apache.hadoop.tools.dynamometer.Client
> {code}
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: org/junit/Assert
>  at org.apache.hadoop.tools.dynamometer.Client.main(Client.java:256)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
>  at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
> Caused by: java.lang.ClassNotFoundException: org.junit.Assert
>  at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>  ... 7 more{code}






[jira] [Commented] (HDFS-14717) Junit not found in hadoop-dynamometer-infra

2019-08-12 Thread kevin su (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905637#comment-16905637
 ] 

kevin su commented on HDFS-14717:
-

[~xkrogen] Thanks for your reply 

{quote}It looks like the test failure is due to the recent cleanup of versions 
available on the Apache mirrors. The real long-term solution is to fix 
HDFS-14412, but in the meantime, I think we need to bump the default version 
used by the test to 3.1.2 from 3.1.1.{quote}

Makes sense. If we use the local build of Hadoop by default, we won't hit failures from downloading Hadoop from the Apache mirrors.

However, it seems we need to build a Hadoop distribution before running *_TestDynamometerInfra_*; if we run *_TestDynamometerInfra_* directly, it may fail.
Is there any way to build Hadoop inside the unit test?
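
For reference, the kind of workflow this implies (commands are assumptions based on the standard Hadoop build; the exact module path may differ):
{code}
# Build a full Hadoop distribution first (see BUILDING.txt)
mvn package -Pdist -DskipTests -Dtar

# Then run the integration test against that build
mvn test -Dtest=TestDynamometerInfra \
    -pl hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra
{code}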

> Junit not found in hadoop-dynamometer-infra
> ---
>
> Key: HDFS-14717
> URL: https://issues.apache.org/jira/browse/HDFS-14717
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: kevin su
>Assignee: kevin su
>Priority: Major
> Attachments: HDFS-14717.001.patch, HDFS-14717.002.patch
>
>
> {code:java}
> $ hadoop jar hadoop-dynamometer-infra-3.3.0-SNAPSHOT.jar 
> org.apache.hadoop.tools.dynamometer.Client
> {code}
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: org/junit/Assert
>  at org.apache.hadoop.tools.dynamometer.Client.main(Client.java:256)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
>  at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
> Caused by: java.lang.ClassNotFoundException: org.junit.Assert
>  at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>  ... 7 more{code}






[jira] [Assigned] (HDDS-1959) Decrement purge interval for Ratis logs

2019-08-13 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su reassigned HDDS-1959:
--

Assignee: kevin su

> Decrement purge interval for Ratis logs
> ---
>
> Key: HDDS-1959
> URL: https://issues.apache.org/jira/browse/HDDS-1959
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: kevin su
>Priority: Major
>
> Currently the purge interval for the Ratis log ("ozone.om.ratis.log.purge.gap") is
> set to 100. This Jira aims to reduce the interval and set it to 10.






[jira] [Commented] (HDFS-14717) Junit not found in hadoop-dynamometer-infra

2019-08-13 Thread kevin su (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906770#comment-16906770
 ] 

kevin su commented on HDFS-14717:
-

[~xkrogen] [~smeng] Thanks for your help.

> Junit not found in hadoop-dynamometer-infra
> ---
>
> Key: HDFS-14717
> URL: https://issues.apache.org/jira/browse/HDFS-14717
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: kevin su
>Assignee: kevin su
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14717.001.patch, HDFS-14717.002.patch, 
> HDFS-14717.003.patch
>
>
> {code:java}
> $ hadoop jar hadoop-dynamometer-infra-3.3.0-SNAPSHOT.jar 
> org.apache.hadoop.tools.dynamometer.Client
> {code}
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: org/junit/Assert
>  at org.apache.hadoop.tools.dynamometer.Client.main(Client.java:256)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
>  at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
> Caused by: java.lang.ClassNotFoundException: org.junit.Assert
>  at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>  ... 7 more{code}






[jira] [Commented] (HDDS-1959) Decrement purge interval for Ratis logs

2019-08-14 Thread kevin su (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907799#comment-16907799
 ] 

kevin su commented on HDDS-1959:


[~msingh] Sorry for the late reply; I just uploaded the patch.

> Decrement purge interval for Ratis logs
> ---
>
> Key: HDDS-1959
> URL: https://issues.apache.org/jira/browse/HDDS-1959
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: kevin su
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently the purge interval for the Ratis log ("ozone.om.ratis.log.purge.gap") is
> set to 100. This Jira aims to reduce the interval and set it to 10.






[jira] [Commented] (HDDS-1959) Decrement purge interval for Ratis logs in datanode

2019-08-15 Thread kevin su (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907965#comment-16907965
 ] 

kevin su commented on HDDS-1959:


[~ljain] I updated the patch. Thanks for the review.

> Decrement purge interval for Ratis logs in datanode
> ---
>
> Key: HDDS-1959
> URL: https://issues.apache.org/jira/browse/HDDS-1959
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: kevin su
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently purge interval for ratis log("dfs.container.ratis.log.purge.gap") 
> is set at 10. The Jira aims to reduce the interval and set it to 
> 100.






[jira] [Assigned] (HDDS-1977) Fix checkstyle issues introduced by HDDS-1894

2019-08-17 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su reassigned HDDS-1977:
--

Assignee: kevin su

> Fix checkstyle issues introduced by HDDS-1894
> -
>
> Key: HDDS-1977
> URL: https://issues.apache.org/jira/browse/HDDS-1977
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM Client
>Reporter: Nanda kumar
>Assignee: kevin su
>Priority: Major
>  Labels: newbie
>
> Fix the checkstyle issues introduced by HDDS-1894
> {noformat}
> [INFO] There are 6 errors reported by Checkstyle 8.8 with 
> checkstyle/checkstyle.xml ruleset.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[41,23]
>  (whitespace) ParenPad: '(' is followed by whitespace.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[42]
>  (sizes) LineLength: Line is longer than 80 characters (found 88).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[46,23]
>  (whitespace) ParenPad: '(' is followed by whitespace.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[47]
>  (sizes) LineLength: Line is longer than 80 characters (found 90).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[59]
>  (sizes) LineLength: Line is longer than 80 characters (found 116).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[60]
>  (sizes) LineLength: Line is longer than 80 characters (found 120).
> {noformat}






[jira] [Updated] (HDDS-1977) Fix checkstyle issues introduced by HDDS-1894

2019-08-17 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HDDS-1977:
---
Status: Patch Available  (was: Open)

> Fix checkstyle issues introduced by HDDS-1894
> -
>
> Key: HDDS-1977
> URL: https://issues.apache.org/jira/browse/HDDS-1977
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM Client
>Reporter: Nanda kumar
>Assignee: kevin su
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-1977.001.patch
>
>
> Fix the checkstyle issues introduced by HDDS-1894
> {noformat}
> [INFO] There are 6 errors reported by Checkstyle 8.8 with 
> checkstyle/checkstyle.xml ruleset.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[41,23]
>  (whitespace) ParenPad: '(' is followed by whitespace.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[42]
>  (sizes) LineLength: Line is longer than 80 characters (found 88).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[46,23]
>  (whitespace) ParenPad: '(' is followed by whitespace.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[47]
>  (sizes) LineLength: Line is longer than 80 characters (found 90).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[59]
>  (sizes) LineLength: Line is longer than 80 characters (found 116).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[60]
>  (sizes) LineLength: Line is longer than 80 characters (found 120).
> {noformat}






[jira] [Updated] (HDDS-1977) Fix checkstyle issues introduced by HDDS-1894

2019-08-17 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HDDS-1977:
---
Attachment: HDDS-1977.001.patch

> Fix checkstyle issues introduced by HDDS-1894
> -
>
> Key: HDDS-1977
> URL: https://issues.apache.org/jira/browse/HDDS-1977
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM Client
>Reporter: Nanda kumar
>Assignee: kevin su
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-1977.001.patch
>
>
> Fix the checkstyle issues introduced by HDDS-1894
> {noformat}
> [INFO] There are 6 errors reported by Checkstyle 8.8 with 
> checkstyle/checkstyle.xml ruleset.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[41,23]
>  (whitespace) ParenPad: '(' is followed by whitespace.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[42]
>  (sizes) LineLength: Line is longer than 80 characters (found 88).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[46,23]
>  (whitespace) ParenPad: '(' is followed by whitespace.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[47]
>  (sizes) LineLength: Line is longer than 80 characters (found 90).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[59]
>  (sizes) LineLength: Line is longer than 80 characters (found 116).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[60]
>  (sizes) LineLength: Line is longer than 80 characters (found 120).
> {noformat}






[jira] [Updated] (HDDS-1977) Fix checkstyle issues introduced by HDDS-1894

2019-08-17 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HDDS-1977:
---
Attachment: HDDS-1977.002.patch

> Fix checkstyle issues introduced by HDDS-1894
> -
>
> Key: HDDS-1977
> URL: https://issues.apache.org/jira/browse/HDDS-1977
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM Client
>Reporter: Nanda kumar
>Assignee: kevin su
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-1977.001.patch, HDDS-1977.002.patch
>
>
> Fix the checkstyle issues introduced by HDDS-1894
> {noformat}
> [INFO] There are 6 errors reported by Checkstyle 8.8 with 
> checkstyle/checkstyle.xml ruleset.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[41,23]
>  (whitespace) ParenPad: '(' is followed by whitespace.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[42]
>  (sizes) LineLength: Line is longer than 80 characters (found 88).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[46,23]
>  (whitespace) ParenPad: '(' is followed by whitespace.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[47]
>  (sizes) LineLength: Line is longer than 80 characters (found 90).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[59]
>  (sizes) LineLength: Line is longer than 80 characters (found 116).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[60]
>  (sizes) LineLength: Line is longer than 80 characters (found 120).
> {noformat}






[jira] [Assigned] (HDFS-14744) RBF: Non secured routers should not log in error mode when UGI is default.

2019-08-17 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su reassigned HDFS-14744:
---

Assignee: kevin su  (was: CR Hota)

> RBF: Non secured routers should not log in error mode when UGI is default.
> --
>
> Key: HDFS-14744
> URL: https://issues.apache.org/jira/browse/HDFS-14744
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: kevin su
>Priority: Major
> Attachments: HDFS-14744.001.patch
>
>
> RouterClientProtocol#getMountPointStatus logs an error when groups are not found
> for the default web user dr.who. That line should be logged at "error" level only on
> secured clusters; on unsecured clusters we may want to log it at "debug" instead,
> or else the logs fill up with this non-critical line:
> {{ERROR org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer: 
> Cannot get the remote user: There is no primary group for UGI dr.who 
> (auth:SIMPLE)}}
>  
>  
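
A minimal sketch of the proposed behavior (not the attached patch; the method and class names here are illustrative):
{code:java}
import org.apache.hadoop.security.UserGroupInformation;
import org.slf4j.Logger;

// Sketch: log at ERROR only on secured clusters, otherwise at DEBUG, so
// unsecured routers are not flooded by dr.who group-lookup failures.
final class MountPointStatusLogging {
  private MountPointStatusLogging() {}

  static void logNoRemoteUser(Logger log, Exception e) {
    if (UserGroupInformation.isSecurityEnabled()) {
      log.error("Cannot get the remote user: {}", e.getMessage());
    } else {
      log.debug("Cannot get the remote user: {}", e.getMessage());
    }
  }
}
{code}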






[jira] [Assigned] (HDFS-14744) RBF: Non secured routers should not log in error mode when UGI is default.

2019-08-17 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su reassigned HDFS-14744:
---

Assignee: CR Hota  (was: kevin su)

> RBF: Non secured routers should not log in error mode when UGI is default.
> --
>
> Key: HDFS-14744
> URL: https://issues.apache.org/jira/browse/HDFS-14744
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14744.001.patch
>
>
> RouterClientProtocol#getMountPointStatus logs an error when groups are not found
> for the default web user dr.who. That line should be logged at "error" level only on
> secured clusters; on unsecured clusters we may want to log it at "debug" instead,
> or else the logs fill up with this non-critical line:
> {{ERROR org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer: 
> Cannot get the remote user: There is no primary group for UGI dr.who 
> (auth:SIMPLE)}}
>  
>  






[jira] [Commented] (HDDS-1979) Fix checkstyle errors

2019-08-17 Thread kevin su (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909864#comment-16909864
 ] 

kevin su commented on HDDS-1979:


[~vivekratnavel]

Thanks for your contribution.

It looks like this issue duplicates HDDS-1977.

> Fix checkstyle errors
> -
>
> Key: HDDS-1979
> URL: https://issues.apache.org/jira/browse/HDDS-1979
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: SCM
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> There are checkstyle errors in ListPipelinesSubcommand.java that need to be
> fixed.






[jira] [Comment Edited] (HDDS-1979) Fix checkstyle errors

2019-08-17 Thread kevin su (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909864#comment-16909864
 ] 

kevin su edited comment on HDDS-1979 at 8/18/19 1:39 AM:
-

[~vivekratnavel]

Thanks for your contribution.

It looks like this issue duplicates HDDS-1977.


was (Author: pingsutw):
[~vivekratnavel]

Thanks for you contribution.

It looks like this issue duplicate with HDDS-1977

> Fix checkstyle errors
> -
>
> Key: HDDS-1979
> URL: https://issues.apache.org/jira/browse/HDDS-1979
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: SCM
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> There are checkstyle errors in ListPipelinesSubcommand.java that need to be
> fixed.






[jira] [Assigned] (HDFS-14746) Trivial test code update after HDFS-14687

2019-08-18 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su reassigned HDFS-14746:
---

Assignee: kevin su

> Trivial test code update after HDFS-14687
> -
>
> Key: HDFS-14746
> URL: https://issues.apache.org/jira/browse/HDFS-14746
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ec
>Reporter: Wei-Chiu Chuang
>Assignee: kevin su
>Priority: Trivial
>
> Instead of getting erasure coding policy instance by id, it should use a 
> constant value.
> Change
> {code}
> ErasureCodingPolicy ecPolicy = SystemErasureCodingPolicies.getPolicies()
> .get(3);
> {code}
> to
> {code}
> ErasureCodingPolicy ecPolicy = 
> SystemErasureCodingPolicies.getByID(XOR_2_1_POLICY_ID);
> {code}






[jira] [Updated] (HDFS-14746) Trivial test code update after HDFS-14687

2019-08-18 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HDFS-14746:

Attachment: HDFS-14746.001.patch

> Trivial test code update after HDFS-14687
> -
>
> Key: HDFS-14746
> URL: https://issues.apache.org/jira/browse/HDFS-14746
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ec
>Reporter: Wei-Chiu Chuang
>Assignee: kevin su
>Priority: Trivial
> Attachments: HDFS-14746.001.patch
>
>
> Instead of getting erasure coding policy instance by id, it should use a 
> constant value.
> Change
> {code}
> ErasureCodingPolicy ecPolicy = SystemErasureCodingPolicies.getPolicies()
> .get(3);
> {code}
> to
> {code}
> ErasureCodingPolicy ecPolicy = 
> SystemErasureCodingPolicies.getByID(XOR_2_1_POLICY_ID);
> {code}






[jira] [Updated] (HDFS-14746) Trivial test code update after HDFS-14687

2019-08-18 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HDFS-14746:

Status: Patch Available  (was: Open)

> Trivial test code update after HDFS-14687
> -
>
> Key: HDFS-14746
> URL: https://issues.apache.org/jira/browse/HDFS-14746
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ec
>Reporter: Wei-Chiu Chuang
>Assignee: kevin su
>Priority: Trivial
> Attachments: HDFS-14746.001.patch
>
>
> Instead of getting erasure coding policy instance by id, it should use a 
> constant value.
> Change
> {code}
> ErasureCodingPolicy ecPolicy = SystemErasureCodingPolicies.getPolicies()
> .get(3);
> {code}
> to
> {code}
> ErasureCodingPolicy ecPolicy = 
> SystemErasureCodingPolicies.getByID(XOR_2_1_POLICY_ID);
> {code}






[jira] [Commented] (HDFS-14746) Trivial test code update after HDFS-14687

2019-08-19 Thread kevin su (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16910969#comment-16910969
 ] 

kevin su commented on HDFS-14746:
-

Thanks [~surendrasingh] and [~jojochuang] for the review and commit.

> Trivial test code update after HDFS-14687
> -
>
> Key: HDFS-14746
> URL: https://issues.apache.org/jira/browse/HDFS-14746
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ec
>Reporter: Wei-Chiu Chuang
>Assignee: kevin su
>Priority: Trivial
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14746.001.patch
>
>
> Instead of getting erasure coding policy instance by id, it should use a 
> constant value.
> Change
> {code}
> ErasureCodingPolicy ecPolicy = SystemErasureCodingPolicies.getPolicies()
> .get(3);
> {code}
> to
> {code}
> ErasureCodingPolicy ecPolicy = 
> SystemErasureCodingPolicies.getByID(XOR_2_1_POLICY_ID);
> {code}






[jira] [Assigned] (HDDS-1998) TestSecureContainerServer#testClientServerRatisGrpc is failing

2019-08-21 Thread kevin su (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su reassigned HDDS-1998:
--

Assignee: kevin su

> TestSecureContainerServer#testClientServerRatisGrpc is failing
> --
>
> Key: HDDS-1998
> URL: https://issues.apache.org/jira/browse/HDDS-1998
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: kevin su
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{TestSecureContainerServer#testClientServerRatisGrpc}} is failing on trunk 
> with the following error.
> {noformat}
> [ERROR] 
> testClientServerRatisGrpc(org.apache.hadoop.ozone.container.server.TestSecureContainerServer)
>   Time elapsed: 7.544 s  <<< ERROR!
> java.io.IOException:
> Failed to command cmdType: CreateContainer
> containerID: 1566379872577
> datanodeUuid: "87ebf146-2a8f-4060-8f06-615ed61a9fe0"
> createContainer {
> }
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientSpi.sendCommand(XceiverClientSpi.java:113)
>   at 
> org.apache.hadoop.ozone.container.server.TestSecureContainerServer.runTestClientServer(TestSecureContainerServer.java:206)
>   at 
> org.apache.hadoop.ozone.container.server.TestSecureContainerServer.runTestClientServerRatis(TestSecureContainerServer.java:157)
>   at 
> org.apache.hadoop.ozone.container.server.TestSecureContainerServer.testClientServerRatisGrpc(TestSecureContainerServer.java:132)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.ratis.protocol.StateMachineException: 
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Block token verification failed. Fail to find any token (empty or null.)
>   at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientSpi.sendCommand(XceiverClientSpi.java:110)
>   ... 29 more
> Caused by: org.apache.ratis.protocol.StateMachineException: 
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Block token verification failed. Fail to find any token (empty or null.)
>   at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$applyTransaction$7(ContainerStateMachine.java:701)
>   at 
> java.util.concurrent.Completa

[jira] [Updated] (HDDS-1998) TestSecureContainerServer#testClientServerRatisGrpc is failing

2019-08-21 Thread kevin su (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HDDS-1998:
---
Status: Patch Available  (was: Open)

> TestSecureContainerServer#testClientServerRatisGrpc is failing
> --
>
> Key: HDDS-1998
> URL: https://issues.apache.org/jira/browse/HDDS-1998
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: kevin su
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{TestSecureContainerServer#testClientServerRatisGrpc}} is failing on trunk 
> with the following error.
> {noformat}
> [ERROR] 
> testClientServerRatisGrpc(org.apache.hadoop.ozone.container.server.TestSecureContainerServer)
>   Time elapsed: 7.544 s  <<< ERROR!
> java.io.IOException:
> Failed to command cmdType: CreateContainer
> containerID: 1566379872577
> datanodeUuid: "87ebf146-2a8f-4060-8f06-615ed61a9fe0"
> createContainer {
> }
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientSpi.sendCommand(XceiverClientSpi.java:113)
>   at 
> org.apache.hadoop.ozone.container.server.TestSecureContainerServer.runTestClientServer(TestSecureContainerServer.java:206)
>   at 
> org.apache.hadoop.ozone.container.server.TestSecureContainerServer.runTestClientServerRatis(TestSecureContainerServer.java:157)
>   at 
> org.apache.hadoop.ozone.container.server.TestSecureContainerServer.testClientServerRatisGrpc(TestSecureContainerServer.java:132)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.ratis.protocol.StateMachineException: 
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Block token verification failed. Fail to find any token (empty or null.)
>   at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientSpi.sendCommand(XceiverClientSpi.java:110)
>   ... 29 more
> Caused by: org.apache.ratis.protocol.StateMachineException: 
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Block token verification failed. Fail to find any token (empty or null.)
>   at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$applyTransaction$7(ContainerStateMachine.java:701)
>   at 
> java.util.concurre

[jira] [Commented] (HDFS-14412) Enable Dynamometer to use the local build of Hadoop by default

2019-08-23 Thread kevin su (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914226#comment-16914226
 ] 

kevin su commented on HDFS-14412:
-

[~xkrogen] Thanks for your help.

I now run the test after packaging, but it looks like the fsimage under 
/test/resources does not work with hadoop-3.2+.

Could we update the fsimage and blocks as well? (One possible way to 
regenerate the image is sketched below.)
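
A minimal sketch, assuming a MiniDFSCluster-produced fsimage would be acceptable 
for the Dynamometer tests (the calls below are standard HDFS test APIs; the 
namespace created here and the exact layout expected under /test/resources are 
placeholders, not what the tests actually ship):

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

public class RegenerateTestImage {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(1).build();
    try {
      cluster.waitActive();
      DistributedFileSystem fs = cluster.getFileSystem();
      // Create whatever namespace the Dynamometer tests expect (placeholder).
      fs.mkdirs(new Path("/tmp/test"));
      // Force a checkpoint so a fresh fsimage is written to the name dirs.
      fs.setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_ENTER);
      fs.saveNamespace();
      fs.setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_LEAVE);
      // The new fsimage_* files can then be copied out of these directories
      // into the test resources; the paths are only printed here.
      for (URI nameDir : cluster.getNameDirs(0)) {
        System.out.println("name dir: " + nameDir);
      }
    } finally {
      cluster.shutdown();
    }
  }
}
{code}

The block listing shipped alongside the image would still need to be refreshed 
to match whatever namespace is generated.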

> Enable Dynamometer to use the local build of Hadoop by default
> --
>
> Key: HDFS-14412
> URL: https://issues.apache.org/jira/browse/HDFS-14412
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Erik Krogen
>Assignee: kevin su
>Priority: Major
>
> Currently, by default, Dynamometer will download a Hadoop tarball from the 
> internet to use as the Hadoop version-under-test. Since it is bundled inside 
> of Hadoop now, it would make more sense for it to use the current version of 
> Hadoop by default.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14413) HA Support for Dynamometer

2019-09-05 Thread kevin su (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HDFS-14413:

Attachment: HDFS-14413.001.patch
Status: Patch Available  (was: Open)

> HA Support for Dynamometer
> --
>
> Key: HDFS-14413
> URL: https://issues.apache.org/jira/browse/HDFS-14413
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Erik Krogen
>Assignee: kevin su
>Priority: Major
> Attachments: HDFS-14413.001.patch
>
>
> It would be nice if Dynamometer could handle spinning up a full 2 NN + 3 QJM 
> cluster instead of just a single NN



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14413) HA Support for Dynamometer

2019-09-05 Thread kevin su (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923603#comment-16923603
 ] 

kevin su commented on HDFS-14413:
-

Uploaded draft patch v001:
 # Add two CLI options (numTotalNameNodes and numTotalJournalNodes) so the user 
can run more than two NNs and three JNs; this requires the number of 
NodeManagers to be >= max(JN, NN), because a JN and an NN cannot run on the 
same machine.
 # Add ZKFC, so that if one NN dies in a container, another standby NN becomes 
active shortly afterwards; this requires the user to 
*specify ha.zookeeper.quorum* and *dfs.ha.fencing.ssh.private-key-files* 
in the Dynamometer config (a minimal sketch follows below).
 # Fix some checkstyle errors.

I will add unit tests in the next patch.
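
A minimal sketch of the HA settings this expects the user to provide. 
ha.zookeeper.quorum and dfs.ha.fencing.ssh.private-key-files are the standard 
Hadoop keys; the host names and key path are placeholders; 
dfs.ha.fencing.methods is added here only because the ssh key setting applies 
to sshfence; how the patch actually wires in the Configuration is not shown:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class HaConfExample {
  // Illustrative only: host names and the key path are placeholders.
  public static Configuration haConf() {
    Configuration conf = new Configuration();
    // ZooKeeper quorum used by ZKFC for automatic failover.
    conf.set("ha.zookeeper.quorum",
        "zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181");
    // SSH fencing of the old active NN, and the private key it uses.
    conf.set("dfs.ha.fencing.methods", "sshfence");
    conf.set("dfs.ha.fencing.ssh.private-key-files", "/home/hdfs/.ssh/id_rsa");
    return conf;
  }
}
{code}

For automatic failover to work, the ZooKeeper quorum has to be reachable from 
the containers hosting the NNs and ZKFCs.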

> HA Support for Dynamometer
> --
>
> Key: HDFS-14413
> URL: https://issues.apache.org/jira/browse/HDFS-14413
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Erik Krogen
>Assignee: kevin su
>Priority: Major
> Attachments: HDFS-14413.001.patch
>
>
> It would be nice if Dynamometer could handle spinning up a full 2 NN + 3 QJM 
> cluster instead of just a single NN



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2245) Use dynamic ports for SCM in TestSecureOzoneCluster

2019-10-03 Thread kevin su (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su reassigned HDDS-2245:
--

Assignee: kevin su

> Use dynamic ports for SCM in TestSecureOzoneCluster
> ---
>
> Key: HDDS-2245
> URL: https://issues.apache.org/jira/browse/HDDS-2245
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: kevin su
>Priority: Major
>  Labels: newbie
>
> {{TestSecureOzoneCluster}} is using default SCM ports, we should use dynamic 
> ports.
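
For illustration, a minimal sketch of the dynamic-port idea described above, 
using a plain JDK ServerSocket bound to port 0 to pick free ports. The 
ozone.scm.* key names are the usual SCM address keys; which addresses 
TestSecureOzoneCluster actually needs to override, and how the eventual patch 
does it, may differ:

{code:java}
import java.io.IOException;
import java.net.ServerSocket;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;

public final class DynamicScmPorts {
  private DynamicScmPorts() { }

  /** Picks a currently-free port by binding a socket to port 0 and closing it. */
  public static int getFreePort() throws IOException {
    try (ServerSocket socket = new ServerSocket(0)) {
      socket.setReuseAddress(true);
      return socket.getLocalPort();
    }
  }

  /** Points the SCM client/block/datanode endpoints at free ports (assumed key set). */
  public static void useDynamicScmPorts(OzoneConfiguration conf) throws IOException {
    conf.set("ozone.scm.client.address", "localhost:" + getFreePort());
    conf.set("ozone.scm.block.client.address", "localhost:" + getFreePort());
    conf.set("ozone.scm.datanode.address", "localhost:" + getFreePort());
  }
}
{code}

There is an inherent race between closing the probe socket and SCM binding the 
port, so test code using this pattern typically retries on bind failure.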



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2245) Use dynamic ports for SCM in TestSecureOzoneCluster

2019-10-04 Thread kevin su (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HDDS-2245:
---
Attachment: HDDS-2245.001.patch
Status: Patch Available  (was: Open)

> Use dynamic ports for SCM in TestSecureOzoneCluster
> ---
>
> Key: HDDS-2245
> URL: https://issues.apache.org/jira/browse/HDDS-2245
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: kevin su
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-2245.001.patch
>
>
> {{TestSecureOzoneCluster}} is using default SCM ports, we should use dynamic 
> ports.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2245) Use dynamic ports for SCM in TestSecureOzoneCluster

2019-10-04 Thread kevin su (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HDDS-2245:
---
Attachment: HDDS-2245.002.patch

> Use dynamic ports for SCM in TestSecureOzoneCluster
> ---
>
> Key: HDDS-2245
> URL: https://issues.apache.org/jira/browse/HDDS-2245
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: kevin su
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-2245.001.patch, HDDS-2245.002.patch
>
>
> {{TestSecureOzoneCluster}} is using default SCM ports, we should use dynamic 
> ports.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2245) Use dynamic ports for SCM in TestSecureOzoneCluster

2019-10-07 Thread kevin su (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16946334#comment-16946334
 ] 

kevin su commented on HDDS-2245:


Thanks [~aengineer] for the help and commit  

> Use dynamic ports for SCM in TestSecureOzoneCluster
> ---
>
> Key: HDDS-2245
> URL: https://issues.apache.org/jira/browse/HDDS-2245
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: kevin su
>Priority: Major
>  Labels: newbie
> Fix For: 0.5.0
>
> Attachments: HDDS-2245.001.patch, HDDS-2245.002.patch
>
>
> {{TestSecureOzoneCluster}} is using default SCM ports, we should use dynamic 
> ports.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14413) HA Support for Dynamometer

2019-11-15 Thread kevin su (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HDFS-14413:

Attachment: HDFS-14413.002.patch

> HA Support for Dynamometer
> --
>
> Key: HDFS-14413
> URL: https://issues.apache.org/jira/browse/HDFS-14413
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Erik Krogen
>Assignee: kevin su
>Priority: Major
> Attachments: HDFS-14413.001.patch, HDFS-14413.002.patch
>
>
> It would be nice if Dynamometer could handle spinning up a full 2 NN + 3 QJM 
> cluster instead of just a single NN



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14413) HA Support for Dynamometer

2019-11-15 Thread kevin su (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16974965#comment-16974965
 ] 

kevin su commented on HDFS-14413:
-

Patch v002:

1. Rebase the code
2. Fix checkstyle issues

> HA Support for Dynamometer
> --
>
> Key: HDFS-14413
> URL: https://issues.apache.org/jira/browse/HDFS-14413
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Erik Krogen
>Assignee: kevin su
>Priority: Major
> Attachments: HDFS-14413.001.patch, HDFS-14413.002.patch
>
>
> It would be nice if Dynamometer could handle spinning up a full 2 NN + 3 QJM 
> cluster instead of just a single NN



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14413) HA Support for Dynamometer

2019-11-15 Thread kevin su (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923603#comment-16923603
 ] 

kevin su edited comment on HDFS-14413 at 11/15/19 9:43 AM:
---

Uploaded draft patch v001:
 # Add two CLI options (numTotalNameNodes and numTotalJournalNodes) so the user 
can run more than two NNs and three JNs; this requires the number of 
NodeManagers to be >= max(JN, NN), because a JN or an NN cannot run on the 
same machine.
 # Add ZKFC, so that if one NN dies in a container, another standby NN becomes 
active shortly afterwards; this requires the user to 
*specify ha.zookeeper.quorum* and *dfs.ha.fencing.ssh.private-key-files* 
in the Dynamometer config.
 # Fix some checkstyle errors.

I will add unit tests in the next patch.


was (Author: pingsutw):
Uploaded draft patch v001:
 # Add two CLI options (numTotalNameNodes and numTotalJournalNodes) so the user 
can run more than two NNs and three JNs; this requires the number of 
NodeManagers to be >= max(JN, NN), because a JN and an NN cannot run on the 
same machine.
 # Add ZKFC, so that if one NN dies in a container, another standby NN becomes 
active shortly afterwards; this requires the user to 
*specify ha.zookeeper.quorum* and *dfs.ha.fencing.ssh.private-key-files* 
in the Dynamometer config.
 # Fix some checkstyle errors.

I will add unit tests in the next patch.

> HA Support for Dynamometer
> --
>
> Key: HDFS-14413
> URL: https://issues.apache.org/jira/browse/HDFS-14413
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Erik Krogen
>Assignee: kevin su
>Priority: Major
> Attachments: HDFS-14413.001.patch, HDFS-14413.002.patch
>
>
> It would be nice if Dynamometer could handle spinning up a full 2 NN + 3 QJM 
> cluster instead of just a single NN



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org