[jira] [Commented] (HDDS-134) SCM CA: OM sends CSR and uses certificate issued by SCM

2019-03-01 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781813#comment-16781813
 ] 

Xiaoyu Yao commented on HDDS-134:
-

[~ajayydv], patch v9 removes the value of 
ozone.scm.security.service.address from ozone-default.xml but introduces 
failures in testSecureOmInitFailures; the validation in that test needs to be 
updated accordingly. +1 once that is fixed. 

 

Other test failures are unrelated. 

> SCM CA: OM sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-134
> URL: https://issues.apache.org/jira/browse/HDDS-134
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-134.00.patch, HDDS-134.01.patch, HDDS-134.02.patch, 
> HDDS-134.03.patch, HDDS-134.04.patch, HDDS-134.05.patch, HDDS-134.06.patch, 
> HDDS-134.07.patch, HDDS-134.08.patch, HDDS-134.09.patch
>
>
> Initialize OM keypair and get SCM signed certificate.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1204) Fix misc issues running ozonesecure docker-compose with Java 11

2019-02-28 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1204:
-
Description: 
The ozonesecure docker-compose has been changed to use a hadoop-runner image 
based on Java 11. Several packages/classes have been removed since Java 8, 
such as 

javax.xml.bind.DatatypeConverter.parseHexBinary

This ticket tracks fixing the issues running the ozonesecure docker-compose on 
Java 11.

  was:
The ozonesecure docker-compose has been changed to use hadoop-runner image 
based on Java 11. Several class has been removed from Java 8 such as 

javax.xml.bind.DatatypeConverter.parseHexBinary

 

This ticket is opened to fix issues running ozonesecure docker-compose on java 
11.
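javax.xml.bind.DatatypeConverter lived in the java.xml.bind module, which was deprecated in Java 9 and removed from the JDK in Java 11. A minimal, self-contained stand-in for parseHexBinary can be sketched as follows (illustrative helper, not code from the patch):

```java
// Hypothetical drop-in replacement for
// javax.xml.bind.DatatypeConverter.parseHexBinary, which is gone in Java 11.
public final class HexUtil {

  // Parse a hex string like "0aff" into the corresponding byte values.
  public static byte[] parseHexBinary(String hex) {
    if (hex.length() % 2 != 0) {
      throw new IllegalArgumentException("hex string must have even length");
    }
    byte[] out = new byte[hex.length() / 2];
    for (int i = 0; i < out.length; i++) {
      // Each pair of characters encodes one byte.
      out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
    }
    return out;
  }

  public static void main(String[] args) {
    byte[] b = parseHexBinary("0aff");
    // 0x0a -> 10, 0xff -> -1 as a signed byte
    System.out.println(b.length + " " + b[0] + " " + b[1]);
  }
}
```

Alternatively, keeping the jaxb-api artifact on the classpath restores the original class without code changes.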


> Fix misc issues running ozonesecure docker-compose with Java 11
> ---
>
> Key: HDDS-1204
> URL: https://issues.apache.org/jira/browse/HDDS-1204
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1204.001.patch
>
>
> The ozonesecure docker-compose has been changed to use a hadoop-runner image 
> based on Java 11. Several packages/classes have been removed since Java 8, 
> such as 
> javax.xml.bind.DatatypeConverter.parseHexBinary
>  This ticket tracks fixing the issues running the ozonesecure docker-compose 
> on Java 11.






[jira] [Updated] (HDDS-1204) Fix misc issues running ozonesecure docker-compose with Java 11

2019-02-28 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1204:
-
Summary: Fix misc issues running ozonesecure docker-compose with Java 11  
(was: Fix misc issue to make ozonesecure docker-compose work on Java 11)

> Fix misc issues running ozonesecure docker-compose with Java 11
> ---
>
> Key: HDDS-1204
> URL: https://issues.apache.org/jira/browse/HDDS-1204
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1204.001.patch
>
>
> The ozonesecure docker-compose has been changed to use a hadoop-runner image 
> based on Java 11. Several classes have been removed since Java 8, such as 
> javax.xml.bind.DatatypeConverter.parseHexBinary
>  
> This ticket tracks fixing the issues running the ozonesecure docker-compose 
> on Java 11.






[jira] [Updated] (HDDS-1204) Fix misc issue to make ozonesecure docker-compose work on Java 11

2019-02-28 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1204:
-
Status: Patch Available  (was: Open)

> Fix misc issue to make ozonesecure docker-compose work on Java 11
> -
>
> Key: HDDS-1204
> URL: https://issues.apache.org/jira/browse/HDDS-1204
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1204.001.patch
>
>
> The ozonesecure docker-compose has been changed to use a hadoop-runner image 
> based on Java 11. Several classes have been removed since Java 8, such as 
> javax.xml.bind.DatatypeConverter.parseHexBinary
>  
> This ticket tracks fixing the issues running the ozonesecure docker-compose 
> on Java 11.






[jira] [Updated] (HDDS-1204) Fix misc issue to make ozonesecure docker-compose work on Java 11

2019-02-28 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1204:
-
Attachment: HDDS-1204.001.patch

> Fix misc issue to make ozonesecure docker-compose work on Java 11
> -
>
> Key: HDDS-1204
> URL: https://issues.apache.org/jira/browse/HDDS-1204
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1204.001.patch
>
>
> The ozonesecure docker-compose has been changed to use a hadoop-runner image 
> based on Java 11. Several classes have been removed since Java 8, such as 
> javax.xml.bind.DatatypeConverter.parseHexBinary
>  
> This ticket tracks fixing the issues running the ozonesecure docker-compose 
> on Java 11.






[jira] [Created] (HDDS-1204) Fix misc issue to make ozonesecure docker-compose work on Java 11

2019-02-28 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-1204:


 Summary: Fix misc issue to make ozonesecure docker-compose work on 
Java 11
 Key: HDDS-1204
 URL: https://issues.apache.org/jira/browse/HDDS-1204
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


The ozonesecure docker-compose has been changed to use a hadoop-runner image 
based on Java 11. Several classes have been removed since Java 8, such as 

javax.xml.bind.DatatypeConverter.parseHexBinary

This ticket tracks fixing the issues running the ozonesecure docker-compose on 
Java 11.






[jira] [Comment Edited] (HDDS-134) SCM CA: OM sends CSR and uses certificate issued by SCM

2019-02-28 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781295#comment-16781295
 ] 

Xiaoyu Yao edited comment on HDDS-134 at 3/1/19 5:09 AM:
-

Thanks [~ajayydv] for the update, patch v9 LGTM. +1 pending Jenkins. 


was (Author: xyao):
Thanks [~ajayydv] for the udpate, patch v9 LGTM. +1 pending Jenkins. 

> SCM CA: OM sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-134
> URL: https://issues.apache.org/jira/browse/HDDS-134
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-134.00.patch, HDDS-134.01.patch, HDDS-134.02.patch, 
> HDDS-134.03.patch, HDDS-134.04.patch, HDDS-134.05.patch, HDDS-134.06.patch, 
> HDDS-134.07.patch, HDDS-134.08.patch, HDDS-134.09.patch
>
>
> Initialize OM keypair and get SCM signed certificate.






[jira] [Commented] (HDDS-134) SCM CA: OM sends CSR and uses certificate issued by SCM

2019-02-28 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781295#comment-16781295
 ] 

Xiaoyu Yao commented on HDDS-134:
-

Thanks [~ajayydv] for the update, patch v9 LGTM. +1 pending Jenkins. 

> SCM CA: OM sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-134
> URL: https://issues.apache.org/jira/browse/HDDS-134
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-134.00.patch, HDDS-134.01.patch, HDDS-134.02.patch, 
> HDDS-134.03.patch, HDDS-134.04.patch, HDDS-134.05.patch, HDDS-134.06.patch, 
> HDDS-134.07.patch, HDDS-134.08.patch, HDDS-134.09.patch
>
>
> Initialize OM keypair and get SCM signed certificate.






[jira] [Commented] (HDDS-1190) Fix jdk 11 issue for ozonesecure base image and docker-compose

2019-02-28 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780804#comment-16780804
 ] 

Xiaoyu Yao commented on HDDS-1190:
--

Thanks [~elek] for the review and commit. 

> Fix jdk 11 issue for ozonesecure base image and docker-compose 
> ---
>
> Key: HDDS-1190
> URL: https://issues.apache.org/jira/browse/HDDS-1190
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1190-docker-hadoop-runner.001.patch, 
> HDDS-1190-trunk.001.patch
>
>
> HDDS-1019 changed the ozonesecure docker-compose to use hadoop-runner as its 
> base image. There are a few issues that need to be fixed.
>  
> 1. The hadoop-runner image uses JDK 11, but ozonesecure/docker-config assumes 
> openjdk8 for JAVA_HOME. 
>  
> 2. The KEYTAB_DIR value needs to be quoted with '.
>  
> 3. Keytab-based login fails with "Message stream modified (41)"; [~elek] 
> mentioned in HDDS-1019 that we need to add max_renewable_life to 
> "docker-image/docker-krb5/krb5.conf" as follows:
> [realms]
>  EXAMPLE.COM = {
>   kdc = localhost
>   admin_server = localhost
>   max_renewable_life = 7d
>  }
> Failures:
> {code}
>  org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: scm/s...@example.com from keytab /etc/security/keytabs/scm.keytab 
> javax.security.auth.login.LoginException: Message stream modified (41)
> scm_1           | at 
> org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1847)
> scm_1           |
> {code}






[jira] [Updated] (HDDS-1190) Fix jdk 11 issue for ozonesecure base image and docker-compose

2019-02-27 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1190:
-
Status: Patch Available  (was: Open)

> Fix jdk 11 issue for ozonesecure base image and docker-compose 
> ---
>
> Key: HDDS-1190
> URL: https://issues.apache.org/jira/browse/HDDS-1190
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1190-docker-hadoop-runner.001.patch, 
> HDDS-1190-trunk.001.patch
>
>
> HDDS-1019 changed the ozonesecure docker-compose to use hadoop-runner as its 
> base image. There are a few issues that need to be fixed.
>  
> 1. The hadoop-runner image uses JDK 11, but ozonesecure/docker-config assumes 
> openjdk8 for JAVA_HOME. 
>  
> 2. The KEYTAB_DIR value needs to be quoted with '.
>  
> 3. Keytab-based login fails with "Message stream modified (41)"; [~elek] 
> mentioned in HDDS-1019 that we need to add max_renewable_life to 
> "docker-image/docker-krb5/krb5.conf" as follows:
> [realms]
>  EXAMPLE.COM = {
>   kdc = localhost
>   admin_server = localhost
>   max_renewable_life = 7d
>  }
> Failures:
> {code}
>  org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: scm/s...@example.com from keytab /etc/security/keytabs/scm.keytab 
> javax.security.auth.login.LoginException: Message stream modified (41)
> scm_1           | at 
> org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1847)
> scm_1           |
> {code}






[jira] [Updated] (HDDS-1190) Fix jdk 11 issue for ozonesecure base image and docker-compose

2019-02-27 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1190:
-
Attachment: HDDS-1190-docker-hadoop-runner.001.patch

> Fix jdk 11 issue for ozonesecure base image and docker-compose 
> ---
>
> Key: HDDS-1190
> URL: https://issues.apache.org/jira/browse/HDDS-1190
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1190-docker-hadoop-runner.001.patch, 
> HDDS-1190-trunk.001.patch
>
>
> HDDS-1019 changed the ozonesecure docker-compose to use hadoop-runner as its 
> base image. There are a few issues that need to be fixed.
>  
> 1. The hadoop-runner image uses JDK 11, but ozonesecure/docker-config assumes 
> openjdk8 for JAVA_HOME. 
>  
> 2. The KEYTAB_DIR value needs to be quoted with '.
>  
> 3. Keytab-based login fails with "Message stream modified (41)"; [~elek] 
> mentioned in HDDS-1019 that we need to add max_renewable_life to 
> "docker-image/docker-krb5/krb5.conf" as follows:
> [realms]
>  EXAMPLE.COM = {
>   kdc = localhost
>   admin_server = localhost
>   max_renewable_life = 7d
>  }
> Failures:
> {code}
>  org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: scm/s...@example.com from keytab /etc/security/keytabs/scm.keytab 
> javax.security.auth.login.LoginException: Message stream modified (41)
> scm_1           | at 
> org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1847)
> scm_1           |
> {code}






[jira] [Updated] (HDDS-1190) Fix jdk 11 issue for ozonesecure base image and docker-compose

2019-02-27 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1190:
-
Issue Type: Sub-task  (was: Bug)
Parent: HDDS-4

> Fix jdk 11 issue for ozonesecure base image and docker-compose 
> ---
>
> Key: HDDS-1190
> URL: https://issues.apache.org/jira/browse/HDDS-1190
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1190-docker-hadoop-runner.001.patch, 
> HDDS-1190-trunk.001.patch
>
>
> HDDS-1019 changed the ozonesecure docker-compose to use hadoop-runner as its 
> base image. There are a few issues that need to be fixed.
>  
> 1. The hadoop-runner image uses JDK 11, but ozonesecure/docker-config assumes 
> openjdk8 for JAVA_HOME. 
>  
> 2. The KEYTAB_DIR value needs to be quoted with '.
>  
> 3. Keytab-based login fails with "Message stream modified (41)"; [~elek] 
> mentioned in HDDS-1019 that we need to add max_renewable_life to 
> "docker-image/docker-krb5/krb5.conf" as follows:
> [realms]
>  EXAMPLE.COM = {
>   kdc = localhost
>   admin_server = localhost
>   max_renewable_life = 7d
>  }
> Failures:
> {code}
>  org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: scm/s...@example.com from keytab /etc/security/keytabs/scm.keytab 
> javax.security.auth.login.LoginException: Message stream modified (41)
> scm_1           | at 
> org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1847)
> scm_1           |
> {code}






[jira] [Commented] (HDDS-1190) Fix jdk 11 issue for ozonesecure base image and docker-compose

2019-02-27 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780054#comment-16780054
 ] 

Xiaoyu Yao commented on HDDS-1190:
--

cc [~elek]

> Fix jdk 11 issue for ozonesecure base image and docker-compose 
> ---
>
> Key: HDDS-1190
> URL: https://issues.apache.org/jira/browse/HDDS-1190
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1190-docker-hadoop-runner.001.patch, 
> HDDS-1190-trunk.001.patch
>
>
> HDDS-1019 changed the ozonesecure docker-compose to use hadoop-runner as its 
> base image. There are a few issues that need to be fixed.
>  
> 1. The hadoop-runner image uses JDK 11, but ozonesecure/docker-config assumes 
> openjdk8 for JAVA_HOME. 
>  
> 2. The KEYTAB_DIR value needs to be quoted with '.
>  
> 3. Keytab-based login fails with "Message stream modified (41)"; [~elek] 
> mentioned in HDDS-1019 that we need to add max_renewable_life to 
> "docker-image/docker-krb5/krb5.conf" as follows:
> [realms]
>  EXAMPLE.COM = {
>   kdc = localhost
>   admin_server = localhost
>   max_renewable_life = 7d
>  }
> Failures:
> {code}
>  org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: scm/s...@example.com from keytab /etc/security/keytabs/scm.keytab 
> javax.security.auth.login.LoginException: Message stream modified (41)
> scm_1           | at 
> org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1847)
> scm_1           |
> {code}






[jira] [Updated] (HDDS-1190) Fix jdk 11 issue for ozonesecure base image and docker-compose

2019-02-27 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1190:
-
Attachment: HDDS-1190-trunk.001.patch

> Fix jdk 11 issue for ozonesecure base image and docker-compose 
> ---
>
> Key: HDDS-1190
> URL: https://issues.apache.org/jira/browse/HDDS-1190
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1190-trunk.001.patch
>
>
> HDDS-1019 changed the ozonesecure docker-compose to use hadoop-runner as its 
> base image. There are a few issues that need to be fixed.
>  
> 1. The hadoop-runner image uses JDK 11, but ozonesecure/docker-config assumes 
> openjdk8 for JAVA_HOME. 
>  
> 2. The KEYTAB_DIR value needs to be quoted with '.
>  
> 3. Keytab-based login fails with "Message stream modified (41)"; [~elek] 
> mentioned in HDDS-1019 that we need to add max_renewable_life to 
> "docker-image/docker-krb5/krb5.conf" as follows:
> [realms]
>  EXAMPLE.COM = {
>   kdc = localhost
>   admin_server = localhost
>   max_renewable_life = 7d
>  }
> Failures:
> {code}
>  org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: scm/s...@example.com from keytab /etc/security/keytabs/scm.keytab 
> javax.security.auth.login.LoginException: Message stream modified (41)
> scm_1           | at 
> org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1847)
> scm_1           |
> {code}






[jira] [Created] (HDDS-1190) Fix jdk 11 issue for ozonesecure base image and docker-compose

2019-02-27 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-1190:


 Summary: Fix jdk 11 issue for ozonesecure base image and 
docker-compose 
 Key: HDDS-1190
 URL: https://issues.apache.org/jira/browse/HDDS-1190
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


HDDS-1019 changed the ozonesecure docker-compose to use hadoop-runner as its 
base image. There are a few issues that need to be fixed.

1. The hadoop-runner image uses JDK 11, but ozonesecure/docker-config assumes 
openjdk8 for JAVA_HOME. 

2. The KEYTAB_DIR value needs to be quoted with '.

3. Keytab-based login fails with "Message stream modified (41)"; [~elek] 
mentioned in HDDS-1019 that we need to add max_renewable_life to 
"docker-image/docker-krb5/krb5.conf" as follows:

[realms]
 EXAMPLE.COM = {
  kdc = localhost
  admin_server = localhost
  max_renewable_life = 7d
 }
Failures:

{code}

 org.apache.hadoop.security.KerberosAuthException: failure to login: for 
principal: scm/s...@example.com from keytab /etc/security/keytabs/scm.keytab 
javax.security.auth.login.LoginException: Message stream modified (41)

scm_1           | at 
org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1847)

scm_1           |

{code}






[jira] [Updated] (HDDS-1183) Override getDelegationToken API for OzoneFileSystem

2019-02-27 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1183:
-
Status: Patch Available  (was: Open)

> Override getDelegationToken API for OzoneFileSystem
> ---
>
> Key: HDDS-1183
> URL: https://issues.apache.org/jira/browse/HDDS-1183
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This includes addDelegationToken/renewDelegationToken/cancelDelegationToken 
> so that MR jobs can collect tokens correctly at job submission time. 
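As a self-contained illustration of why the override matters, the sketch below uses mock types (not the actual Hadoop FileSystem or Token classes): the base class returns no token by default, and the subclass overrides the call so a job client can collect a token at submission time.

```java
// All types here are illustrative mocks, not the Hadoop API.
final class Token {
  final String service;
  Token(String service) { this.service = service; }
}

abstract class MockFileSystem {
  // Mirrors the default behavior: no delegation token unless overridden.
  Token getDelegationToken(String renewer) { return null; }
}

final class MockOzoneFileSystem extends MockFileSystem {
  @Override
  Token getDelegationToken(String renewer) {
    // In the real patch this would delegate to the underlying Ozone client;
    // the service string below is a made-up placeholder.
    return new Token("ozone://om:9862");
  }
}

public final class TokenDemo {
  public static void main(String[] args) {
    MockFileSystem fs = new MockOzoneFileSystem();
    Token t = fs.getDelegationToken("yarn");
    System.out.println(t == null ? "null" : t.service);
  }
}
```

Without the override, job submission would see null and silently skip token collection, failing later at task runtime in a secure cluster.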






[jira] [Comment Edited] (HDDS-1061) DelegationToken: Add certificate serial id to Ozone Delegation Token Identifier

2019-02-27 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779876#comment-16779876
 ] 

Xiaoyu Yao edited comment on HDDS-1061 at 2/27/19 11:16 PM:


Thanks [~ajayydv] for the update. +1 for the v7 patch pending Jenkins.


was (Author: xyao):
Thanks [~ajayydv] for the update. +1 for the v6 patch and I will commit it 
shortly.

> DelegationToken: Add certificate serial  id to Ozone Delegation Token 
> Identifier
> 
>
> Key: HDDS-1061
> URL: https://issues.apache.org/jira/browse/HDDS-1061
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-1061.00.patch, HDDS-1061.01.patch, 
> HDDS-1061.02.patch, HDDS-1061.03.patch, HDDS-1061.04.patch, 
> HDDS-1061.05.patch, HDDS-1061.06.patch, HDDS-1061.07.patch
>
>
> 1. Add certificate serial  id to Ozone Delegation Token Identifier. Required 
> for OM HA support.
> 2. Validate Ozone token based on public key from OM certificate






[jira] [Commented] (HDDS-1061) DelegationToken: Add certificate serial id to Ozone Delegation Token Identifier

2019-02-27 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779876#comment-16779876
 ] 

Xiaoyu Yao commented on HDDS-1061:
--

Thanks [~ajayydv] for the update. +1 for the v6 patch and I will commit it 
shortly.

> DelegationToken: Add certificate serial  id to Ozone Delegation Token 
> Identifier
> 
>
> Key: HDDS-1061
> URL: https://issues.apache.org/jira/browse/HDDS-1061
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-1061.00.patch, HDDS-1061.01.patch, 
> HDDS-1061.02.patch, HDDS-1061.03.patch, HDDS-1061.04.patch, 
> HDDS-1061.05.patch, HDDS-1061.06.patch, HDDS-1061.07.patch
>
>
> 1. Add certificate serial  id to Ozone Delegation Token Identifier. Required 
> for OM HA support.
> 2. Validate Ozone token based on public key from OM certificate






[jira] [Updated] (HDDS-1183) Override getDelegationToken API for OzoneFileSystem

2019-02-27 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1183:
-
Summary: Override getDelegationToken API for OzoneFileSystem  (was: 
OzoneFileSystem needs to override delegation token APIs)

> Override getDelegationToken API for OzoneFileSystem
> ---
>
> Key: HDDS-1183
> URL: https://issues.apache.org/jira/browse/HDDS-1183
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> This includes addDelegationToken/renewDelegationToken/cancelDelegationToken 
> so that MR jobs can collect tokens correctly at job submission time. 






[jira] [Updated] (HDDS-1176) Allow persisting X509CertImpl to SCM certificate table

2019-02-27 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1176:
-
Summary: Allow persisting X509CertImpl to SCM certificate table  (was: Fix 
storage of certificate in scm db)

> Allow persisting X509CertImpl to SCM certificate table
> --
>
> Key: HDDS-1176
> URL: https://issues.apache.org/jira/browse/HDDS-1176
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1176.001.patch
>
>
> The put operation in {{SCMCertStore}} is failing due to an unregistered codec 
> format. Either we can add a new codec format or store the PEM-encoded cert. I 
> think the second option is preferable, as everywhere else we are using 
> PEM-encoded strings.
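The PEM option can be sketched with plain JDK Base64: wrap the DER bytes in a PEM envelope with 64-character lines, and strip the envelope to get them back. This is an illustrative helper, not the actual SCMCertStore code.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Base64;

// Hypothetical PEM wrap/unwrap helper, shown with dummy bytes in place of
// real DER-encoded certificate data.
public final class PemCodec {
  private static final String HEADER = "-----BEGIN CERTIFICATE-----\n";
  private static final String FOOTER = "-----END CERTIFICATE-----\n";

  public static String toPem(byte[] der) {
    // PEM bodies use base64 broken into 64-character lines.
    String b64 = Base64.getMimeEncoder(64, "\n".getBytes(StandardCharsets.US_ASCII))
        .encodeToString(der);
    return HEADER + b64 + "\n" + FOOTER;
  }

  public static byte[] fromPem(String pem) {
    // Drop the envelope and all whitespace, then decode the base64 body.
    String b64 = pem.replace(HEADER, "").replace(FOOTER, "").replaceAll("\\s", "");
    return Base64.getDecoder().decode(b64);
  }

  public static void main(String[] args) {
    byte[] der = {1, 2, 3, 4};
    byte[] roundTrip = fromPem(toPem(der));
    System.out.println(Arrays.equals(der, roundTrip));
  }
}
```

Storing the PEM string avoids registering a codec for every concrete certificate class, since the table then only ever holds plain strings.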






[jira] [Updated] (HDDS-1176) Fix storage of certificate in scm db

2019-02-27 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1176:
-
Target Version/s: 0.4.0

> Fix storage of certificate in scm db
> 
>
> Key: HDDS-1176
> URL: https://issues.apache.org/jira/browse/HDDS-1176
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1176.001.patch
>
>
> The put operation in {{SCMCertStore}} is failing due to an unregistered codec 
> format. Either we can add a new codec format or store the PEM-encoded cert. I 
> think the second option is preferable, as everywhere else we are using 
> PEM-encoded strings.






[jira] [Comment Edited] (HDDS-1176) Fix storage of certificate in scm db

2019-02-27 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778967#comment-16778967
 ] 

Xiaoyu Yao edited comment on HDDS-1176 at 2/27/19 6:17 PM:
---

I think the problem is that the CodecRegistry is not flexible enough to handle 
typed data with a class/subclass hierarchy. Here the abstract base class 
*X509Certificate* is registered with the CodecRegistry, but the subclass 
*X509CertImpl* is passed in for serialization/deserialization, which relies on 
getClass() of the subclass *X509CertImpl*, and that does not match 
*X509Certificate*.

Posted a simple patch that allows a subclass to match its base class's registry 
entry for serialization/deserialization. 
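The idea can be sketched as a registry lookup that walks the class hierarchy, so an instance of a subclass falls back to the codec registered for its base class. All names below are illustrative, not the actual CodecRegistry API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical registry sketch: lookup walks getSuperclass() so a subclass
// (like X509CertImpl) matches a codec registered for its base class
// (like X509Certificate).
public final class SimpleCodecRegistry {
  private final Map<Class<?>, Function<Object, String>> serializers = new HashMap<>();

  public void register(Class<?> type, Function<Object, String> serializer) {
    serializers.put(type, serializer);
  }

  public String serialize(Object value) {
    // Walk up the class hierarchy until a registered codec is found.
    for (Class<?> c = value.getClass(); c != null; c = c.getSuperclass()) {
      Function<Object, String> s = serializers.get(c);
      if (s != null) {
        return s.apply(value);
      }
    }
    throw new IllegalStateException("no codec for " + value.getClass());
  }

  public static void main(String[] args) {
    SimpleCodecRegistry reg = new SimpleCodecRegistry();
    // Register a codec only for the base class Number.
    reg.register(Number.class, v -> "number:" + v);
    // 42 boxes to Integer, a subclass of Number; the base-class codec is used.
    System.out.println(reg.serialize(42));
  }
}
```

A fuller version would also check implemented interfaces, but the superclass walk alone covers the X509Certificate/X509CertImpl case described here.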


was (Author: xyao):
I think it is the CodecRegistry not flexible to handle Typed data with class, 
subclass hierarchy. Here the class is registered with the CodecRegistry but the 
subclass is passed in to serialize/deserialize where getClass() return only the 
subclass. 

 

Post a simple patch allow subclass using base class registry for 
serialization/deserialization. 

> Fix storage of certificate in scm db
> 
>
> Key: HDDS-1176
> URL: https://issues.apache.org/jira/browse/HDDS-1176
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1176.001.patch
>
>
> The put operation in {{SCMCertStore}} is failing due to an unregistered codec 
> format. Either we can add a new codec format or store the PEM-encoded cert. I 
> think the second option is preferable, as everywhere else we are using 
> PEM-encoded strings.






[jira] [Created] (HDDS-1183) OzoneFileSystem needs to override delegation token APIs

2019-02-26 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-1183:


 Summary: OzoneFileSystem needs to override delegation token APIs
 Key: HDDS-1183
 URL: https://issues.apache.org/jira/browse/HDDS-1183
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This includes addDelegationToken/renewDelegationToken/cancelDelegationToken so 
that MR jobs can collect tokens correctly at job submission time. 






[jira] [Updated] (HDDS-1176) Fix storage of certificate in scm db

2019-02-26 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1176:
-
Status: Patch Available  (was: Open)

> Fix storage of certificate in scm db
> 
>
> Key: HDDS-1176
> URL: https://issues.apache.org/jira/browse/HDDS-1176
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1176.001.patch
>
>
> The put operation in {{SCMCertStore}} is failing due to an unregistered codec 
> format. Either we can add a new codec format or store the PEM-encoded cert. The 
> second option is preferable, I think, as everywhere else we are using 
> PEM-encoded strings.






[jira] [Updated] (HDDS-1176) Fix storage of certificate in scm db

2019-02-26 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1176:
-
Attachment: HDDS-1176.001.patch

> Fix storage of certificate in scm db
> 
>
> Key: HDDS-1176
> URL: https://issues.apache.org/jira/browse/HDDS-1176
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1176.001.patch
>
>
> The put operation in {{SCMCertStore}} is failing due to an unregistered codec 
> format. Either we can add a new codec format or store the PEM-encoded cert. The 
> second option is preferable, I think, as everywhere else we are using 
> PEM-encoded strings.






[jira] [Commented] (HDDS-1176) Fix storage of certificate in scm db

2019-02-26 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778967#comment-16778967
 ] 

Xiaoyu Yao commented on HDDS-1176:
--

I think the CodecRegistry is not flexible enough to handle typed data with a 
class/subclass hierarchy. Here the class is registered with the CodecRegistry but 
the subclass is passed in to serialize/deserialize, where getClass() returns only 
the subclass.

Posted a simple patch that allows a subclass to use the base class registry for 
serialization/deserialization. 
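The class/subclass mismatch can be sketched as a registry lookup that falls back to any registered ancestor type when the exact class is not found. This is a minimal illustrative sketch, not the actual Ozone CodecRegistry API; all names below are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: a registry keyed by class that falls back to a
// registered ancestor type when handed a subclass (e.g. X509CertImpl when
// only X509Certificate was registered). Not the actual Ozone CodecRegistry.
final class CodecRegistrySketch {
  interface Codec<T> {
    byte[] toBytes(T value);
  }

  private final Map<Class<?>, Codec<?>> codecs = new HashMap<>();

  <T> void register(Class<T> type, Codec<T> codec) {
    codecs.put(type, codec);
  }

  @SuppressWarnings("unchecked")
  <T> Codec<? super T> lookup(Class<T> type) {
    Codec<?> codec = codecs.get(type);             // exact match first
    if (codec == null) {
      for (Map.Entry<Class<?>, Codec<?>> e : codecs.entrySet()) {
        if (e.getKey().isAssignableFrom(type)) {   // then any registered ancestor
          codec = e.getValue();
          break;
        }
      }
    }
    if (codec == null) {
      throw new IllegalArgumentException("No codec registered for " + type);
    }
    return (Codec<? super T>) codec;
  }
}
```

With the isAssignableFrom fallback, registering a codec for a base class is enough to serialize any of its subclasses.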

> Fix storage of certificate in scm db
> 
>
> Key: HDDS-1176
> URL: https://issues.apache.org/jira/browse/HDDS-1176
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>
> The put operation in {{SCMCertStore}} is failing due to an unregistered codec 
> format. Either we can add a new codec format or store the PEM-encoded cert. The 
> second option is preferable, I think, as everywhere else we are using 
> PEM-encoded strings.






[jira] [Assigned] (HDDS-1176) Fix storage of certificate in scm db

2019-02-26 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-1176:


Assignee: Xiaoyu Yao  (was: Anu Engineer)

> Fix storage of certificate in scm db
> 
>
> Key: HDDS-1176
> URL: https://issues.apache.org/jira/browse/HDDS-1176
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>
> The put operation in {{SCMCertStore}} is failing due to an unregistered codec 
> format. Either we can add a new codec format or store the PEM-encoded cert. The 
> second option is preferable, I think, as everywhere else we are using 
> PEM-encoded strings.






[jira] [Comment Edited] (HDDS-1019) Use apache/hadoop-runner image to test ozone secure cluster

2019-02-26 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778149#comment-16778149
 ] 

Xiaoyu Yao edited comment on HDDS-1019 at 2/26/19 5:38 PM:
---

{quote}I would suggest to delete the ozonesecure/docker-image/runner directory 
during this commit. If you agree, I will commit it to the trunk.
{quote}
Agreed, attached a new trunk patch that removes 
ozonesecure/docker-image/runner but keeps ozonesecure/docker-image/docker-krb5. 


was (Author: xyao):
{quote}I would suggest to delete the ozonesecure/docker-image/runner directory 
during this commit. If you agree, I will commit it to the trunk.
{quote}
Agreed, +1 to remove ozonesecure/docker-image/runner but keep 
ozonesecure/docker-image/docker-krb5. 

> Use apache/hadoop-runner image to test ozone secure cluster
> ---
>
> Key: HDDS-1019
> URL: https://issues.apache.org/jira/browse/HDDS-1019
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDDS-1019-docker-hadoop-runner.01.patch, 
> HDDS-1019-docker-hadoop-runner.02.patch, 
> HDDS-1019-docker-hadoop-runner.03.patch, HDDS-1019-trunk.01.patch, 
> HDDS-1019-trunk.02.patch, HDDS-1019-trunk.03.patch
>
>
> As of now the secure ozone cluster uses a custom image which is not based on 
> the apache/hadoop-runner image. There are multiple problems with that:
>  1. Multiple script files which are maintained in the docker-hadoop-runner 
> branch are copied and duplicated in 
> hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts
>  2. The user of the image is root. It creates 
> core-site.xml/hdfs-site.xml/ozone-site.xml with the root user, which prevents 
> running all the default smoke tests
>  3. Building the base image with each build takes more time
> I propose to check what is missing from the apache/hadoop-runner base image, 
> add it, and use that one. 
> I marked it critical because of 2): it breaks the run of the acceptance test 
> suite.






[jira] [Updated] (HDDS-1019) Use apache/hadoop-runner image to test ozone secure cluster

2019-02-26 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1019:
-
Attachment: HDDS-1019-trunk.03.patch

> Use apache/hadoop-runner image to test ozone secure cluster
> ---
>
> Key: HDDS-1019
> URL: https://issues.apache.org/jira/browse/HDDS-1019
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDDS-1019-docker-hadoop-runner.01.patch, 
> HDDS-1019-docker-hadoop-runner.02.patch, 
> HDDS-1019-docker-hadoop-runner.03.patch, HDDS-1019-trunk.01.patch, 
> HDDS-1019-trunk.02.patch, HDDS-1019-trunk.03.patch
>
>
> As of now the secure ozone cluster uses a custom image which is not based on 
> the apache/hadoop-runner image. There are multiple problems with that:
>  1. Multiple script files which are maintained in the docker-hadoop-runner 
> branch are copied and duplicated in 
> hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts
>  2. The user of the image is root. It creates 
> core-site.xml/hdfs-site.xml/ozone-site.xml with the root user, which prevents 
> running all the default smoke tests
>  3. Building the base image with each build takes more time
> I propose to check what is missing from the apache/hadoop-runner base image, 
> add it, and use that one. 
> I marked it critical because of 2): it breaks the run of the acceptance test 
> suite.






[jira] [Commented] (HDDS-1019) Use apache/hadoop-runner image to test ozone secure cluster

2019-02-26 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778149#comment-16778149
 ] 

Xiaoyu Yao commented on HDDS-1019:
--

{quote}I would suggest to delete the ozonesecure/docker-image/runner directory 
during this commit. If you agree, I will commit it to the trunk.
{quote}
Agreed, +1 to remove ozonesecure/docker-image/runner but keep 
ozonesecure/docker-image/docker-krb5. 

> Use apache/hadoop-runner image to test ozone secure cluster
> ---
>
> Key: HDDS-1019
> URL: https://issues.apache.org/jira/browse/HDDS-1019
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDDS-1019-docker-hadoop-runner.01.patch, 
> HDDS-1019-docker-hadoop-runner.02.patch, 
> HDDS-1019-docker-hadoop-runner.03.patch, HDDS-1019-trunk.01.patch, 
> HDDS-1019-trunk.02.patch
>
>
> As of now the secure ozone cluster uses a custom image which is not based on 
> the apache/hadoop-runner image. There are multiple problems with that:
>  1. Multiple script files which are maintained in the docker-hadoop-runner 
> branch are copied and duplicated in 
> hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts
>  2. The user of the image is root. It creates 
> core-site.xml/hdfs-site.xml/ozone-site.xml with the root user, which prevents 
> running all the default smoke tests
>  3. Building the base image with each build takes more time
> I propose to check what is missing from the apache/hadoop-runner base image, 
> add it, and use that one. 
> I marked it critical because of 2): it breaks the run of the acceptance test 
> suite.






[jira] [Commented] (HDDS-1061) DelegationToken: Add certificate serial id to Ozone Delegation Token Identifier

2019-02-26 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777993#comment-16777993
 ] 

Xiaoyu Yao commented on HDDS-1061:
--

Fixed the trivial build issue: the subclass should call getCertClient() 
instead of using the private member directly. 

> DelegationToken: Add certificate serial  id to Ozone Delegation Token 
> Identifier
> 
>
> Key: HDDS-1061
> URL: https://issues.apache.org/jira/browse/HDDS-1061
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-1061.00.patch, HDDS-1061.01.patch, 
> HDDS-1061.02.patch, HDDS-1061.03.patch, HDDS-1061.04.patch, HDDS-1061.05.patch
>
>
> 1. Add the certificate serial id to the Ozone Delegation Token Identifier. 
> Required for OM HA support.
> 2. Validate the Ozone token based on the public key from the OM certificate.
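The two points above can be made concrete with a hedged, self-contained sketch. The names below are hypothetical stand-ins (not Ozone's actual OzoneTokenIdentifier/secret-manager classes): the token identifier carries the serial id of the OM certificate whose private key signed it, so a validator can pick the matching public key even when several OMs (HA) issue tokens.

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PublicKey;
import java.security.Signature;
import java.util.Map;

// Hedged sketch only: the certSerialId recorded in the token identifier tells
// the validator which OM certificate's public key verifies the token password.
final class TokenCertSerialSketch {

  static KeyPair newRsaKeyPair() {
    try {
      KeyPairGenerator g = KeyPairGenerator.getInstance("RSA");
      g.initialize(2048);
      return g.generateKeyPair();
    } catch (GeneralSecurityException e) {
      throw new IllegalStateException(e);
    }
  }

  // The issuing OM signs the serialized identifier bytes with its private key.
  static byte[] sign(KeyPair issuerKey, byte[] identifierBytes) {
    try {
      Signature s = Signature.getInstance("SHA256withRSA");
      s.initSign(issuerKey.getPrivate());
      s.update(identifierBytes);
      return s.sign();
    } catch (GeneralSecurityException e) {
      throw new IllegalStateException(e);
    }
  }

  // certBySerial stands in for the store of SCM-issued OM certificates.
  static boolean validate(Map<String, PublicKey> certBySerial, String certSerialId,
                          byte[] identifierBytes, byte[] password) {
    PublicKey key = certBySerial.get(certSerialId);
    if (key == null) {
      return false; // token signed by an unknown certificate
    }
    try {
      Signature s = Signature.getInstance("SHA256withRSA");
      s.initVerify(key);
      s.update(identifierBytes);
      return s.verify(password);
    } catch (GeneralSecurityException e) {
      return false;
    }
  }
}
```

A token whose recorded serial id is missing from the certificate store simply fails validation, which is the behavior an HA validator needs when an unknown OM issues a token.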






[jira] [Updated] (HDDS-1061) DelegationToken: Add certificate serial id to Ozone Delegation Token Identifier

2019-02-26 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1061:
-
Attachment: HDDS-1061.05.patch

> DelegationToken: Add certificate serial  id to Ozone Delegation Token 
> Identifier
> 
>
> Key: HDDS-1061
> URL: https://issues.apache.org/jira/browse/HDDS-1061
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-1061.00.patch, HDDS-1061.01.patch, 
> HDDS-1061.02.patch, HDDS-1061.03.patch, HDDS-1061.04.patch, HDDS-1061.05.patch
>
>
> 1. Add the certificate serial id to the Ozone Delegation Token Identifier. 
> Required for OM HA support.
> 2. Validate the Ozone token based on the public key from the OM certificate.






[jira] [Updated] (HDDS-1038) Support Service Level Authorization for Ozone

2019-02-26 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1038:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks all for the reviews. I've committed the patch to trunk. 

I've validated that the unit test failures and build issues are unrelated to 
this patch. 

> Support Service Level Authorization for Ozone
> -
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1038.00.patch, HDDS-1038.01.patch, 
> HDDS-1038.02.patch, HDDS-1038.03.patch, HDDS-1038.04.patch, 
> HDDS-1038.05.patch, HDDS-1038.06.patch, HDDS-1038.07.patch, 
> HDDS-1038.08.patch, HDDS-1038.09.patch, HDDS-1038.10.patch
>
>
> In a secure Ozone cluster, datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 






[jira] [Updated] (HDDS-1038) Support Service Level Authorization for Ozone

2019-02-26 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1038:
-
Summary: Support Service Level Authorization for Ozone  (was: Support 
Service Level Authorization for OM, SCM and DN)

> Support Service Level Authorization for Ozone
> -
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1038.00.patch, HDDS-1038.01.patch, 
> HDDS-1038.02.patch, HDDS-1038.03.patch, HDDS-1038.04.patch, 
> HDDS-1038.05.patch, HDDS-1038.06.patch, HDDS-1038.07.patch, 
> HDDS-1038.08.patch, HDDS-1038.09.patch, HDDS-1038.10.patch
>
>
> In a secure Ozone cluster, datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 






[jira] [Updated] (HDDS-1179) Ozone dist build failed on Jenkins

2019-02-25 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1179:
-
Attachment: HDDS-1179.002.patch

> Ozone dist build failed on Jenkins
> --
>
> Key: HDDS-1179
> URL: https://issues.apache.org/jira/browse/HDDS-1179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1179.001.patch, HDDS-1179.002.patch
>
>
> This is part of the Jenkins execution and was reported in several latest HDDS 
> Jenkins run.
> I spent some time today and found simplified repro steps:
> {code}
> cd hadoop-ozone/dist
> mvn -Phdds -DskipTests -fae clean install -DskipTests=true 
> -Dmaven.javadoc.skip=true -Dcheckstyle.skip=true -Dfindbugs.skip=true 
> {code}
>  
> The root cause is that hadoop-ozone/dist/dev-support/bin/dist-layout-stitching 
> needs the objectstore-service jar to be built earlier, but the dependency was 
> not explicitly declared in the pom. I will attach a fix shortly. 






[jira] [Updated] (HDDS-1179) Ozone dist build failed on Jenkins

2019-02-25 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1179:
-
Status: Patch Available  (was: Open)

> Ozone dist build failed on Jenkins
> --
>
> Key: HDDS-1179
> URL: https://issues.apache.org/jira/browse/HDDS-1179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1179.001.patch
>
>
> This is part of the Jenkins execution and was reported in several latest HDDS 
> Jenkins run.
> I spent some time today and found simplified repro steps:
> {code}
> cd hadoop-ozone/dist
> mvn -Phdds -DskipTests -fae clean install -DskipTests=true 
> -Dmaven.javadoc.skip=true -Dcheckstyle.skip=true -Dfindbugs.skip=true 
> {code}
>  
> The root cause is that hadoop-ozone/dist/dev-support/bin/dist-layout-stitching 
> needs the objectstore-service jar to be built earlier, but the dependency was 
> not explicitly declared in the pom. I will attach a fix shortly. 






[jira] [Commented] (HDDS-1179) Ozone dist build failed on Jenkins

2019-02-25 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777458#comment-16777458
 ] 

Xiaoyu Yao commented on HDDS-1179:
--

Related error message from the repro steps:

{code}

[INFO] Apache Hadoop Ozone Distribution 0.4.0-SNAPSHOT  FAILURE [ 1.305 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 07:26 min
[INFO] Finished at: 2019-02-25T12:01:32-08:00
[INFO] 
[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.3.1:exec 
(dist) on project hadoop-ozone-dist: Command execution failed.: Process exited 
with an error: 1 (Exit value: 1) -> [Help 1]
Seems one file copy failed:
$ mkdir -p ./share/hadoop/ozoneplugin
$ cp 
/Users/xyao/git/hadoop/hadoop-ozone/objectstore-service/target/hadoop-ozone-objectstore-service-0.4.0-SNAPSHOT-plugin.jar
 ./share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin-0.4.0-SNAPSHOT.jar

Failed!

{code}

> Ozone dist build failed on Jenkins
> --
>
> Key: HDDS-1179
> URL: https://issues.apache.org/jira/browse/HDDS-1179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1179.001.patch
>
>
> This is part of the Jenkins execution and was reported in several latest HDDS 
> Jenkins run.
> I spent some time today and found simplified repro steps:
> {code}
> cd hadoop-ozone/dist
> mvn -Phdds -DskipTests -fae clean install -DskipTests=true 
> -Dmaven.javadoc.skip=true -Dcheckstyle.skip=true -Dfindbugs.skip=true 
> {code}
>  
> The root cause is that hadoop-ozone/dist/dev-support/bin/dist-layout-stitching 
> needs the objectstore-service jar to be built earlier, but the dependency was 
> not explicitly declared in the pom. I will attach a fix shortly. 






[jira] [Updated] (HDDS-1179) Ozone dist build failed on Jenkins

2019-02-25 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1179:
-
Attachment: HDDS-1179.001.patch

> Ozone dist build failed on Jenkins
> --
>
> Key: HDDS-1179
> URL: https://issues.apache.org/jira/browse/HDDS-1179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1179.001.patch
>
>
> This is part of the Jenkins execution and was reported in several latest HDDS 
> Jenkins run.
> I spent some time today and found simplified repro steps:
> {code}
> cd hadoop-ozone/dist
> mvn -Phdds -DskipTests -fae clean install -DskipTests=true 
> -Dmaven.javadoc.skip=true -Dcheckstyle.skip=true -Dfindbugs.skip=true 
> {code}
>  
> The root cause is that hadoop-ozone/dist/dev-support/bin/dist-layout-stitching 
> needs the objectstore-service jar to be built earlier, but the dependency was 
> not explicitly declared in the pom. I will attach a fix shortly. 






[jira] [Created] (HDDS-1179) Ozone dist build failed on Jenkins

2019-02-25 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-1179:


 Summary: Ozone dist build failed on Jenkins
 Key: HDDS-1179
 URL: https://issues.apache.org/jira/browse/HDDS-1179
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This is part of the Jenkins execution and was reported in several latest HDDS 
Jenkins run.

I spent some time today and found simplified repro steps:

{code}

cd hadoop-ozone/dist

mvn -Phdds -DskipTests -fae clean install -DskipTests=true 
-Dmaven.javadoc.skip=true -Dcheckstyle.skip=true -Dfindbugs.skip=true 

{code}

 

The root cause is that hadoop-ozone/dist/dev-support/bin/dist-layout-stitching 
needs the objectstore-service jar to be built earlier, but the dependency was not 
explicitly declared in the pom. I will attach a fix shortly. 






[jira] [Commented] (HDDS-1038) Support Service Level Authorization for OM, SCM and DN

2019-02-25 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777368#comment-16777368
 ] 

Xiaoyu Yao commented on HDDS-1038:
--

Attached patch v10, which fixes the related findbugs and unit test failures. 

> Support Service Level Authorization for OM, SCM and DN
> --
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1038.00.patch, HDDS-1038.01.patch, 
> HDDS-1038.02.patch, HDDS-1038.03.patch, HDDS-1038.04.patch, 
> HDDS-1038.05.patch, HDDS-1038.06.patch, HDDS-1038.07.patch, 
> HDDS-1038.08.patch, HDDS-1038.09.patch, HDDS-1038.10.patch
>
>
> In a secure Ozone cluster, datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 






[jira] [Updated] (HDDS-1038) Support Service Level Authorization for OM, SCM and DN

2019-02-25 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1038:
-
Attachment: HDDS-1038.10.patch

> Support Service Level Authorization for OM, SCM and DN
> --
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1038.00.patch, HDDS-1038.01.patch, 
> HDDS-1038.02.patch, HDDS-1038.03.patch, HDDS-1038.04.patch, 
> HDDS-1038.05.patch, HDDS-1038.06.patch, HDDS-1038.07.patch, 
> HDDS-1038.08.patch, HDDS-1038.09.patch, HDDS-1038.10.patch
>
>
> In a secure Ozone cluster, datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 






[jira] [Assigned] (HDDS-1038) Support Service Level Authorization for OM, SCM and DN

2019-02-25 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-1038:


Assignee: Xiaoyu Yao  (was: Ajay Kumar)

> Support Service Level Authorization for OM, SCM and DN
> --
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1038.00.patch, HDDS-1038.01.patch, 
> HDDS-1038.02.patch, HDDS-1038.03.patch, HDDS-1038.04.patch, 
> HDDS-1038.05.patch, HDDS-1038.06.patch, HDDS-1038.07.patch, 
> HDDS-1038.08.patch, HDDS-1038.09.patch
>
>
> In a secure Ozone cluster, datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 






[jira] [Updated] (HDDS-1043) Enable token based authentication for S3 api

2019-02-25 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1043:
-
Status: Open  (was: Patch Available)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch
>
>
> Ozone has an S3 API and a mechanism to create S3-like secrets for users. This 
> jira proposes Hadoop-compatible token-based authentication for the S3 API, 
> which utilizes the S3 secret stored in OM.






[jira] [Updated] (HDDS-1019) Use apache/hadoop-runner image to test ozone secure cluster

2019-02-25 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1019:
-
Attachment: HDDS-1019-trunk.02.patch

> Use apache/hadoop-runner image to test ozone secure cluster
> ---
>
> Key: HDDS-1019
> URL: https://issues.apache.org/jira/browse/HDDS-1019
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDDS-1019-docker-hadoop-runner.01.patch, 
> HDDS-1019-docker-hadoop-runner.02.patch, 
> HDDS-1019-docker-hadoop-runner.03.patch, HDDS-1019-trunk.01.patch, 
> HDDS-1019-trunk.02.patch
>
>
> As of now the secure ozone cluster uses a custom image which is not based on 
> the apache/hadoop-runner image. There are multiple problems with that:
>  1. Multiple script files which are maintained in the docker-hadoop-runner 
> branch are copied and duplicated in 
> hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts
>  2. The user of the image is root. It creates 
> core-site.xml/hdfs-site.xml/ozone-site.xml with the root user, which prevents 
> running all the default smoke tests
>  3. Building the base image with each build takes more time
> I propose to check what is missing from the apache/hadoop-runner base image, 
> add it, and use that one. 
> I marked it critical because of 2): it breaks the run of the acceptance test 
> suite.






[jira] [Updated] (HDDS-1019) Use apache/hadoop-runner image to test ozone secure cluster

2019-02-25 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1019:
-
Attachment: HDDS-1019-docker-hadoop-runner.03.patch

> Use apache/hadoop-runner image to test ozone secure cluster
> ---
>
> Key: HDDS-1019
> URL: https://issues.apache.org/jira/browse/HDDS-1019
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDDS-1019-docker-hadoop-runner.01.patch, 
> HDDS-1019-docker-hadoop-runner.02.patch, 
> HDDS-1019-docker-hadoop-runner.03.patch, HDDS-1019-trunk.01.patch
>
>
> As of now the secure ozone cluster uses a custom image which is not based on 
> the apache/hadoop-runner image. There are multiple problems with that:
>  1. Multiple script files which are maintained in the docker-hadoop-runner 
> branch are copied and duplicated in 
> hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts
>  2. The user of the image is root. It creates 
> core-site.xml/hdfs-site.xml/ozone-site.xml with the root user, which prevents 
> running all the default smoke tests
>  3. Building the base image with each build takes more time
> I propose to check what is missing from the apache/hadoop-runner base image, 
> add it, and use that one. 
> I marked it critical because of 2): it breaks the run of the acceptance test 
> suite.






[jira] [Commented] (HDDS-1019) Use apache/hadoop-runner image to test ozone secure cluster

2019-02-25 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777228#comment-16777228
 ] 

Xiaoyu Yao commented on HDDS-1019:
--

Thanks [~elek] for the detailed reviews. My responses are inline.

 

One small suggestion:
{quote}Can we remove the line?
{quote}
{code:java}
echo 'setup finished'
{code}
Done.

 
{quote}We need a set +e/set -e for checking the availability of the KDC 
service:
{quote}
Fixed.

 
{quote} $CONF_DIR is confusing (for me). I would use something like $KEYTAB_DIR 
instead. And I think the default could be /etc/security/keytabs (Now we have a 
hard dependency that the $CONF_DIR should be set for a secure environment. I 
think it's better to use a default value in starter.sh)
{quote}
Different deployments may require different keytab locations. I agree that the 
name is confusing. How about allowing KEYTAB_DIR to be customized from 
docker-config? If no value is passed in, we will just use the default.
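A minimal sketch of that fallback, assuming the conventional /etc/security/keytabs path from the review as the default:

```shell
# Honor KEYTAB_DIR when provided via docker-config; otherwise fall back
# to a conventional default. The default path is illustrative.
KEYTAB_DIR="${KEYTAB_DIR:-/etc/security/keytabs}"
echo "loading keytabs from: $KEYTAB_DIR"
```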

 
{quote}4. I would use the apache/hadoop-runner as a base image for the krb5 
image to use exactly the same mit kerberos (I noticed an error that the keytab 
versions are different before this change).

5. For centos the max_renewable_life is required in the krb5.conf.
{quote}
Can we handle the KDC base image change in a separate ticket? I want to spend 
some time on the issue you mentioned above.

 

> Use apache/hadoop-runner image to test ozone secure cluster
> ---
>
> Key: HDDS-1019
> URL: https://issues.apache.org/jira/browse/HDDS-1019
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDDS-1019-docker-hadoop-runner.01.patch, 
> HDDS-1019-docker-hadoop-runner.02.patch, HDDS-1019-trunk.01.patch
>
>
> As of now the secure ozone cluster uses a custom image which is not based on 
> the apache/hadoop-runner image. There are multiple problems with that:
>  1. multiple script files which are maintained in the docker-hadoop-runner 
> branch are copied and duplicated in 
> hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts
>  2. The user of the image is root. It creates 
> core-site.xml/hdfs-site.xml/ozone-site.xml as the root user, which prevents 
> running all the default smoke tests
>  3. To build the base image with each build takes more time
> I propose to check what is missing from the apache/hadoop-ozone base image, 
> add it and use that one. 
> I marked it critical because of 2): it breaks the run of the acceptance test 
> suite.






[jira] [Commented] (HDDS-1038) Support Service Level Authorization for OM, SCM and DN

2019-02-25 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777126#comment-16777126
 ] 

Xiaoyu Yao commented on HDDS-1038:
--

Rebased to latest trunk.

> Support Service Level Authorization for OM, SCM and DN
> --
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1038.00.patch, HDDS-1038.01.patch, 
> HDDS-1038.02.patch, HDDS-1038.03.patch, HDDS-1038.04.patch, 
> HDDS-1038.05.patch, HDDS-1038.06.patch, HDDS-1038.07.patch, 
> HDDS-1038.08.patch, HDDS-1038.09.patch
>
>
> In a secure Ozone cluster. Datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 






[jira] [Updated] (HDDS-1038) Support Service Level Authorization for OM, SCM and DN

2019-02-25 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1038:
-
Attachment: HDDS-1038.09.patch

> Support Service Level Authorization for OM, SCM and DN
> --
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1038.00.patch, HDDS-1038.01.patch, 
> HDDS-1038.02.patch, HDDS-1038.03.patch, HDDS-1038.04.patch, 
> HDDS-1038.05.patch, HDDS-1038.06.patch, HDDS-1038.07.patch, 
> HDDS-1038.08.patch, HDDS-1038.09.patch
>
>
> In a secure Ozone cluster. Datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 






[jira] [Commented] (HDDS-1075) Fix CertificateUtil#parseRSAPublicKey charsetName

2019-02-25 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777097#comment-16777097
 ] 

Xiaoyu Yao commented on HDDS-1075:
--

Good catch, [~ste...@apache.org]. I did not notice this cert util class is in 
hadoop-common when opening the ticket and committing. I will pay attention to 
that in the future.

> Fix CertificateUtil#parseRSAPublicKey charsetName
> -
>
> Key: HDDS-1075
> URL: https://issues.apache.org/jira/browse/HDDS-1075
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Siddharth Wagle
>Priority: Minor
> Fix For: 0.4.0
>
> Attachments: HDDS-1075.01.patch
>
>
> We should use "UTF-8" instead of "UTF8". 






[jira] [Updated] (HDDS-1038) Support Service Level Authorization for OM, SCM and DN

2019-02-22 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1038:
-
Attachment: HDDS-1038.08.patch

> Support Service Level Authorization for OM, SCM and DN
> --
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1038.00.patch, HDDS-1038.01.patch, 
> HDDS-1038.02.patch, HDDS-1038.03.patch, HDDS-1038.04.patch, 
> HDDS-1038.05.patch, HDDS-1038.06.patch, HDDS-1038.07.patch, HDDS-1038.08.patch
>
>
> In a secure Ozone cluster. Datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 






[jira] [Commented] (HDDS-1038) Support Service Level Authorization for OM, SCM and DN

2019-02-21 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774667#comment-16774667
 ] 

Xiaoyu Yao commented on HDDS-1038:
--

[~ajayydv], it is an SCM security protocol, but all the messages deal with SCM 
CA certificates. Given that the key already has a "security" prefix, I think 
it is better to avoid having "security" twice in the same config key here.

> Support Service Level Authorization for OM, SCM and DN
> --
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1038.00.patch, HDDS-1038.01.patch, 
> HDDS-1038.02.patch, HDDS-1038.03.patch, HDDS-1038.04.patch, 
> HDDS-1038.05.patch, HDDS-1038.06.patch, HDDS-1038.07.patch
>
>
> In a secure Ozone cluster. Datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 






[jira] [Commented] (HDDS-1061) DelegationToken: Add certificate serial id to Ozone Delegation Token Identifier

2019-02-21 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774663#comment-16774663
 ] 

Xiaoyu Yao commented on HDDS-1061:
--

+1 for v3 patch pending Jenkins.

> DelegationToken: Add certificate serial  id to Ozone Delegation Token 
> Identifier
> 
>
> Key: HDDS-1061
> URL: https://issues.apache.org/jira/browse/HDDS-1061
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-1061.00.patch, HDDS-1061.01.patch, 
> HDDS-1061.02.patch, HDDS-1061.03.patch
>
>
> 1. Add certificate serial  id to Ozone Delegation Token Identifier. Required 
> for OM HA support.
> 2. Validate Ozone token based on public key from OM certificate






[jira] [Updated] (HDDS-1118) OM get the certificate from SCM CA for token validation

2019-02-21 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1118:
-
Labels: newbie  (was: )

> OM get the certificate from SCM CA for token validation
> ---
>
> Key: HDDS-1118
> URL: https://issues.apache.org/jira/browse/HDDS-1118
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: newbie
>
> This is needed when the OM received delegation token signed by other OM 
> instances and it does not have the certificate for foreign OM.






[jira] [Updated] (HDDS-1118) OM get the certificate from SCM CA for token validation

2019-02-21 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1118:
-
Target Version/s: 0.5.0  (was: 0.4.0)

> OM get the certificate from SCM CA for token validation
> ---
>
> Key: HDDS-1118
> URL: https://issues.apache.org/jira/browse/HDDS-1118
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: newbie
>
> This is needed when the OM received delegation token signed by other OM 
> instances and it does not have the certificate for foreign OM.






[jira] [Commented] (HDDS-1061) DelegationToken: Add certificate serial id to Ozone Delegation Token Identifier

2019-02-21 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774519#comment-16774519
 ] 

Xiaoyu Yao commented on HDDS-1061:
--

Thanks [~ajayydv] for the update. Given the fact that HDDS-1060 is in, can we 
call the new API to get the OM certificate for validation? 
Lines 321-322:
{code:java}
// TODO: This delegation token was issued by other OM instance. Fetch
// certificate from SCM using certificate serial.
{code}

> DelegationToken: Add certificate serial  id to Ozone Delegation Token 
> Identifier
> 
>
> Key: HDDS-1061
> URL: https://issues.apache.org/jira/browse/HDDS-1061
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-1061.00.patch, HDDS-1061.01.patch, 
> HDDS-1061.02.patch, HDDS-1061.03.patch
>
>
> 1. Add certificate serial  id to Ozone Delegation Token Identifier. Required 
> for OM HA support.
> 2. Validate Ozone token based on public key from OM certificate






[jira] [Comment Edited] (HDDS-1038) Support Service Level Authorization for OM, SCM and DN

2019-02-21 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774513#comment-16774513
 ] 

Xiaoyu Yao edited comment on HDDS-1038 at 2/21/19 9:30 PM:
---

Added more documentation on enabling service authorization for Ozone 
components, with additional configurations added to the ozonesecure docker 
config.

 

Tested with hadoop.security.authorization enabled and different ACLs 
configured. Will open a follow-up JIRA to add a robot test for it.
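For illustration, service-level authorization in the ozonesecure docker-compose is driven by entries of roughly this shape in docker-config; the exact ACL property names below are assumptions for the sketch, not copied from the patch:

```
CORE-SITE.XML_hadoop.security.authorization=true
HADOOP-POLICY.XML_ozone.om.security.client.protocol.acl=*
HADOOP-POLICY.XML_hdds.security.client.datanode.container.protocol.acl=*
HADOOP-POLICY.XML_hdds.security.client.scm.container.protocol.acl=*
```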


was (Author: xyao):
Added more documentation on enabling service authorization for Ozone 
components, with additional configurations added to the ozonesecure docker 
config.

> Support Service Level Authorization for OM, SCM and DN
> --
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1038.00.patch, HDDS-1038.01.patch, 
> HDDS-1038.02.patch, HDDS-1038.03.patch, HDDS-1038.04.patch, 
> HDDS-1038.05.patch, HDDS-1038.06.patch, HDDS-1038.07.patch
>
>
> In a secure Ozone cluster. Datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 






[jira] [Commented] (HDDS-1038) Support Service Level Authorization for OM, SCM and DN

2019-02-21 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774513#comment-16774513
 ] 

Xiaoyu Yao commented on HDDS-1038:
--

Added more documentation on enabling service authorization for Ozone 
components, with additional configurations added to the ozonesecure docker 
config.

> Support Service Level Authorization for OM, SCM and DN
> --
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1038.00.patch, HDDS-1038.01.patch, 
> HDDS-1038.02.patch, HDDS-1038.03.patch, HDDS-1038.04.patch, 
> HDDS-1038.05.patch, HDDS-1038.06.patch, HDDS-1038.07.patch
>
>
> In a secure Ozone cluster. Datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 






[jira] [Updated] (HDDS-1038) Support Service Level Authorization for OM, SCM and DN

2019-02-21 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1038:
-
Attachment: HDDS-1038.07.patch

> Support Service Level Authorization for OM, SCM and DN
> --
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1038.00.patch, HDDS-1038.01.patch, 
> HDDS-1038.02.patch, HDDS-1038.03.patch, HDDS-1038.04.patch, 
> HDDS-1038.05.patch, HDDS-1038.06.patch, HDDS-1038.07.patch
>
>
> In a secure Ozone cluster. Datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 






[jira] [Updated] (HDDS-1038) Support Service Level Authorization for OM, SCM and DN

2019-02-21 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1038:
-
Attachment: HDDS-1038.06.patch

> Support Service Level Authorization for OM, SCM and DN
> --
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1038.00.patch, HDDS-1038.01.patch, 
> HDDS-1038.02.patch, HDDS-1038.03.patch, HDDS-1038.04.patch, 
> HDDS-1038.05.patch, HDDS-1038.06.patch
>
>
> In a secure Ozone cluster. Datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 






[jira] [Updated] (HDDS-1038) Support Service Level Authorization for OM, SCM and DN

2019-02-21 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1038:
-
Summary: Support Service Level Authorization for OM, SCM and DN  (was: 
Datanode fails to connect with secure SCM)

> Support Service Level Authorization for OM, SCM and DN
> --
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1038.00.patch, HDDS-1038.01.patch, 
> HDDS-1038.02.patch, HDDS-1038.03.patch, HDDS-1038.04.patch, HDDS-1038.05.patch
>
>
> In a secure Ozone cluster. Datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 






[jira] [Commented] (HDDS-594) SCM CA: DN sends CSR and uses certificate issued by SCM

2019-02-20 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16773357#comment-16773357
 ] 

Xiaoyu Yao commented on HDDS-594:
-

[~ajayydv], can you rebase the patch, as it no longer applies to trunk? 
Thanks!

> SCM CA: DN sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-594
> URL: https://issues.apache.org/jira/browse/HDDS-594
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-594.00.patch, HDDS-594.01.patch
>
>







[jira] [Commented] (HDDS-1061) DelegationToken: Add certificate serial id to Ozone Delegation Token Identifier

2019-02-20 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16773346#comment-16773346
 ] 

Xiaoyu Yao commented on HDDS-1061:
--

Thanks [~ajayydv] for the update. I only have one question w.r.t. patch v3.

 

OzoneSecretManager.java

Line 207: I think we should use certClient to get the certificate based on 
the omCertSerialId from the identifier instead of the certificate of the 
certClient (the OM itself). This may require overriding the verifySignature() 
method in the OzoneDelegationTokenSecretManager and 
OzoneBlockTokenSecretManager classes.
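The idea of verifying a token against the issuing OM's certificate (looked up by its serial id) rather than the local client certificate can be illustrated with openssl; the file names, token payload, and the self-signed certificate standing in for one fetched from SCM are all made up for this sketch:

```shell
# Illustrative only: sign data with one OM's private key, then verify it
# using the public key extracted from that OM's certificate, standing in
# for "fetch the certificate by omCertSerialId and verify the token".
tmp=$(mktemp -d); cd "$tmp"

# Issuing OM's key pair and self-signed certificate (stand-in for a
# certificate retrieved from the SCM CA by serial id).
openssl req -x509 -newkey rsa:2048 -keyout om1.key -out om1.crt \
  -days 1 -nodes -subj "/CN=om1" 2>/dev/null

printf 'token-identifier-bytes' > token.bin
openssl dgst -sha256 -sign om1.key -out token.sig token.bin

# Verification side: extract the public key from the looked-up certificate
# and check the signature against it.
openssl x509 -in om1.crt -pubkey -noout > om1.pub
openssl dgst -sha256 -verify om1.pub -signature token.sig token.bin
```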

 

 

> DelegationToken: Add certificate serial  id to Ozone Delegation Token 
> Identifier
> 
>
> Key: HDDS-1061
> URL: https://issues.apache.org/jira/browse/HDDS-1061
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-1061.00.patch, HDDS-1061.01.patch, 
> HDDS-1061.02.patch
>
>
> 1. Add certificate serial  id to Ozone Delegation Token Identifier. Required 
> for OM HA support.
> 2. Validate Ozone token based on public key from OM certificate






[jira] [Updated] (HDDS-1060) Add API to get OM certificate from SCM CA

2019-02-20 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1060:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~ajayydv] for the contribution. I've committed the patch to trunk.

> Add API to get OM certificate from SCM CA
> -
>
> Key: HDDS-1060
> URL: https://issues.apache.org/jira/browse/HDDS-1060
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Blocker, Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1060.00.patch, HDDS-1060.01.patch, 
> HDDS-1060.02.patch, HDDS-1060.03.patch, HDDS-1060.04.patch
>
>
> Datanodes/OM need OM certificate to validate block tokens and delegation 
> tokens. 
> Add API for:
> 1. getCertificate(String certSerialId): To get certificate from SCM based on 
> certificate serial id.
> 2. getCACertificate(): To get CA certificate.






[jira] [Updated] (HDDS-1060) Add API to get OM certificate from SCM CA

2019-02-20 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1060:
-
Summary: Add API to get OM certificate from SCM CA  (was: Token: Add api to 
get OM certificate from SCM)

> Add API to get OM certificate from SCM CA
> -
>
> Key: HDDS-1060
> URL: https://issues.apache.org/jira/browse/HDDS-1060
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Blocker, Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1060.00.patch, HDDS-1060.01.patch, 
> HDDS-1060.02.patch, HDDS-1060.03.patch, HDDS-1060.04.patch
>
>
> Datanodes/OM need OM certificate to validate block tokens and delegation 
> tokens. 
> Add API for:
> 1. getCertificate(String certSerialId): To get certificate from SCM based on 
> certificate serial id.
> 2. getCACertificate(): To get CA certificate.






[jira] [Commented] (HDDS-1060) Token: Add api to get OM certificate from SCM

2019-02-20 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16773290#comment-16773290
 ] 

Xiaoyu Yao commented on HDDS-1060:
--

Thanks [~ajayydv] for the update. Patch v3 LGTM.

I'd like to have unit tests to cover the new RPC message getCertificate(). We 
can address that in a follow-up ticket.

+1 for v3, and I will commit it shortly.

> Token: Add api to get OM certificate from SCM
> -
>
> Key: HDDS-1060
> URL: https://issues.apache.org/jira/browse/HDDS-1060
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Blocker, Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1060.00.patch, HDDS-1060.01.patch, 
> HDDS-1060.02.patch, HDDS-1060.03.patch, HDDS-1060.04.patch
>
>
> Datanodes/OM need OM certificate to validate block tokens and delegation 
> tokens. 
> Add API for:
> 1. getCertificate(String certSerialId): To get certificate from SCM based on 
> certificate serial id.
> 2. getCACertificate(): To get CA certificate.






[jira] [Updated] (HDDS-1038) Datanode fails to connect with secure SCM

2019-02-20 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1038:
-
Attachment: HDDS-1038.05.patch

> Datanode fails to connect with secure SCM
> -
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1038.00.patch, HDDS-1038.01.patch, 
> HDDS-1038.02.patch, HDDS-1038.03.patch, HDDS-1038.04.patch, HDDS-1038.05.patch
>
>
> In a secure Ozone cluster. Datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 






[jira] [Commented] (HDDS-1038) Datanode fails to connect with secure SCM

2019-02-20 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16773148#comment-16773148
 ] 

Xiaoyu Yao commented on HDDS-1038:
--

Attached patch v5 to fix the unit test issue and also guard refreshServiceAcl 
so it runs only if Hadoop security authorization is enabled.

> Datanode fails to connect with secure SCM
> -
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1038.00.patch, HDDS-1038.01.patch, 
> HDDS-1038.02.patch, HDDS-1038.03.patch, HDDS-1038.04.patch, HDDS-1038.05.patch
>
>
> In a secure Ozone cluster. Datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 






[jira] [Updated] (HDDS-1101) SCM CA: Write Certificate information to SCM Metadata

2019-02-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1101:
-
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

Thanks [~anu] for the contribution and all for the reviews. I've committed the 
patch to trunk.

> SCM CA: Write Certificate information to SCM Metadata
> -
>
> Key: HDDS-1101
> URL: https://issues.apache.org/jira/browse/HDDS-1101
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1101.000.patch, HDDS-1101.001.patch
>
>
> Make SCM CA write to the Metadata layer of SCM.






[jira] [Commented] (HDDS-1101) SCM CA: Write Certificate information to SCM Metadata

2019-02-19 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772176#comment-16772176
 ] 

Xiaoyu Yao commented on HDDS-1101:
--

[~anu], I think [~ajayydv] means exposing 
SCMCertStore#getCertificateByID via the CertificateServer interface, so that 
SCMSecurityProtocolServer can use it directly.

[~ajayydv], do you think we can add that in HDDS-1060, since the focus of this 
ticket is to persist the certificate?

> SCM CA: Write Certificate information to SCM Metadata
> -
>
> Key: HDDS-1101
> URL: https://issues.apache.org/jira/browse/HDDS-1101
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-1101.000.patch, HDDS-1101.001.patch
>
>
> Make SCM CA write to the Metadata layer of SCM.






[jira] [Updated] (HDDS-1041) Support TDE(Transparent Data Encryption) for Ozone

2019-02-15 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1041:
-
Attachment: HDDS-1041.004.patch

> Support TDE(Transparent Data Encryption) for Ozone
> --
>
> Key: HDDS-1041
> URL: https://issues.apache.org/jira/browse/HDDS-1041
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1041.001.patch, HDDS-1041.002.patch, 
> HDDS-1041.003.patch, HDDS-1041.004.patch, Ozone Encryption At-Rest - 
> V2019.2.7.pdf, Ozone Encryption At-Rest v2019.2.1.pdf
>
>
> Currently Ozone saves data unencrypted on datanodes. This ticket is opened to 
> support TDE (Transparent Data Encryption) for Ozone to meet the requirements 
> of use cases that need protection of sensitive data.
> The table below summarizes the comparison of HDFS TDE and Ozone TDE: 
>  
> |*HDFS*|*Ozone*|
> |Encryption zone created at directory level.
>  All files created within the encryption zone will be encrypted.|Encryption 
> enabled at Bucket level.
>  All objects created within the encrypted bucket will be encrypted.|
> |Encryption zone created with ZK(Zone Key)|Encrypted Bucket created with 
> BEK(Bucket Encryption Key)|
> |Per File Encryption  
>  * File encrypted with DEK(Data Encryption Key)
>  * DEK is encrypted with ZK as EDEK by KMS and persisted as extended 
> attributes.|Per Object Encryption
>  * Object encrypted with DEK(Data Encryption Key)
>  * DEK is encrypted with BEK as EDEK by KMS and persisted as object metadata.|
>  
>  
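The per-object encryption flow in the table above can be sketched with openssl commands. This is a simplification under stated assumptions: key sizes and file names are illustrative, and password-based encryption stands in for a real KMS key-wrapping operation.

```shell
# Illustrative DEK/BEK/EDEK flow (not the actual Ozone/KMS implementation):
# the object is encrypted with a per-object DEK, and the DEK is itself
# encrypted with the bucket-level BEK; the result (EDEK) would be stored
# as object metadata.
tmp=$(mktemp -d); cd "$tmp"
printf 'sensitive object data' > object.txt

openssl rand -hex 32 > bek.key   # bucket encryption key (normally held by KMS)
openssl rand -hex 32 > dek.key   # per-object data encryption key

# Encrypt the object with the DEK, then wrap the DEK with the BEK -> EDEK.
openssl enc -aes-256-cbc -pbkdf2 -pass file:dek.key -in object.txt -out object.enc
openssl enc -aes-256-cbc -pbkdf2 -pass file:bek.key -in dek.key    -out dek.edek

# Read path: unwrap the EDEK with the BEK, then decrypt the object.
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:bek.key -in dek.edek -out dek.unwrapped
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:dek.unwrapped -in object.enc -out object.dec
```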






[jira] [Commented] (HDDS-1041) Support TDE(Transparent Data Encryption) for Ozone

2019-02-15 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16770007#comment-16770007
 ] 

Xiaoyu Yao commented on HDDS-1041:
--

Uploaded patch v4, which fixes the checkstyle issues and the failure in 
TestResultCodes.codeMapping.

 

The other three failures seem to come from HDDS-981. Will open a separate ticket for the fix.

> Support TDE(Transparent Data Encryption) for Ozone
> --
>
> Key: HDDS-1041
> URL: https://issues.apache.org/jira/browse/HDDS-1041
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1041.001.patch, HDDS-1041.002.patch, 
> HDDS-1041.003.patch, HDDS-1041.004.patch, Ozone Encryption At-Rest - 
> V2019.2.7.pdf, Ozone Encryption At-Rest v2019.2.1.pdf
>
>
> Currently Ozone saves data unencrypted on datanodes. This ticket is opened to 
> support TDE (Transparent Data Encryption) for Ozone to meet the requirements 
> of use cases that need protection of sensitive data.
> The table below summarizes the comparison of HDFS TDE and Ozone TDE: 
>  
> |*HDFS*|*Ozone*|
> |Encryption zone created at directory level.
>  All files created within the encryption zone will be encryption.|Encryption 
> enabled at Bucket level.
>  All objects created within the encrypted bucket will be encrypted.|
> |Encryption zone created with ZK(Zone Key)|Encrypted Bucket created with 
> BEK(Bucket Encryption Key)|
> |Per File Encryption  
>  * File encrypted with DEK(Data Encryption Key)
>  * DEK is encrypted with ZK as EDEK by KMS and persisted as extended 
> attributes.|Per Object Encryption
>  * Object encrypted with DEK(Data Encryption Key)
>  * DEK is encrypted with BEK as EDEK by KMS and persisted as object metadata.|
>  
>  






[jira] [Commented] (HDDS-1041) Support TDE(Transparent Data Encryption) for Ozone

2019-02-15 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769867#comment-16769867
 ] 

Xiaoyu Yao commented on HDDS-1041:
--

Rebased the patch onto trunk.

> Support TDE(Transparent Data Encryption) for Ozone
> --
>
> Key: HDDS-1041
> URL: https://issues.apache.org/jira/browse/HDDS-1041
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1041.001.patch, HDDS-1041.002.patch, 
> HDDS-1041.003.patch, Ozone Encryption At-Rest - V2019.2.7.pdf, Ozone 
> Encryption At-Rest v2019.2.1.pdf
>
>
> Currently Ozone saves data unencrypted on datanodes; this ticket is opened to 
> support TDE(Transparent Data Encryption) for Ozone to meet the requirements of 
> use cases that need protection of sensitive data.
> The table below summarizes the comparison of HDFS TDE and Ozone TDE: 
>  
> |*HDFS*|*Ozone*|
> |Encryption zone created at directory level.
>  All files created within the encryption zone will be encrypted.|Encryption 
> enabled at Bucket level.
>  All objects created within the encrypted bucket will be encrypted.|
> |Encryption zone created with ZK(Zone Key)|Encrypted Bucket created with 
> BEK(Bucket Encryption Key)|
> |Per File Encryption  
>  * File encrypted with DEK(Data Encryption Key)
>  * DEK is encrypted with ZK as EDEK by KMS and persisted as extended 
> attributes.|Per Object Encryption
>  * Object encrypted with DEK(Data Encryption Key)
>  * DEK is encrypted with BEK as EDEK by KMS and persisted as object metadata.|
>  
>  






[jira] [Updated] (HDDS-1041) Support TDE(Transparent Data Encryption) for Ozone

2019-02-15 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1041:
-
Attachment: HDDS-1041.003.patch

> Support TDE(Transparent Data Encryption) for Ozone
> --
>
> Key: HDDS-1041
> URL: https://issues.apache.org/jira/browse/HDDS-1041
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1041.001.patch, HDDS-1041.002.patch, 
> HDDS-1041.003.patch, Ozone Encryption At-Rest - V2019.2.7.pdf, Ozone 
> Encryption At-Rest v2019.2.1.pdf
>
>
> Currently Ozone saves data unencrypted on datanodes; this ticket is opened to 
> support TDE(Transparent Data Encryption) for Ozone to meet the requirements of 
> use cases that need protection of sensitive data.
> The table below summarizes the comparison of HDFS TDE and Ozone TDE: 
>  
> |*HDFS*|*Ozone*|
> |Encryption zone created at directory level.
>  All files created within the encryption zone will be encrypted.|Encryption 
> enabled at Bucket level.
>  All objects created within the encrypted bucket will be encrypted.|
> |Encryption zone created with ZK(Zone Key)|Encrypted Bucket created with 
> BEK(Bucket Encryption Key)|
> |Per File Encryption  
>  * File encrypted with DEK(Data Encryption Key)
>  * DEK is encrypted with ZK as EDEK by KMS and persisted as extended 
> attributes.|Per Object Encryption
>  * Object encrypted with DEK(Data Encryption Key)
>  * DEK is encrypted with BEK as EDEK by KMS and persisted as object metadata.|
>  
>  






[jira] [Created] (HDDS-1119) DN get the certificate from SCM CA for token validation

2019-02-15 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-1119:


 Summary: DN get the certificate from SCM CA for token validation
 Key: HDDS-1119
 URL: https://issues.apache.org/jira/browse/HDDS-1119
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This is needed when the OM receives a delegation token signed by another OM 
instance and does not have the certificate for that foreign OM.






[jira] [Updated] (HDDS-1119) DN get the certificate from SCM CA for token validation

2019-02-15 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1119:
-
Description: This is needed when the DN receives a block token signed by an OM 
and does not have the certificate of that OM.  (was: This is needed when the OM 
received delegation token signed by other OM instances and it does not have the 
certificate for foreign OM.)

> DN get the certificate from SCM CA for token validation
> ---
>
> Key: HDDS-1119
> URL: https://issues.apache.org/jira/browse/HDDS-1119
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> This is needed when the DN receives a block token signed by an OM and does not 
> have the certificate of that OM.






[jira] [Created] (HDDS-1118) OM get the certificate from SCM CA for token validation

2019-02-15 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-1118:


 Summary: OM get the certificate from SCM CA for token validation
 Key: HDDS-1118
 URL: https://issues.apache.org/jira/browse/HDDS-1118
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This is needed when the OM receives a delegation token signed by another OM 
instance and does not have the certificate for that foreign OM.






[jira] [Comment Edited] (HDDS-1019) Use apache/hadoop-runner image to test ozone secure cluster

2019-02-15 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768920#comment-16768920
 ] 

Xiaoyu Yao edited comment on HDDS-1019 at 2/15/19 6:37 PM:
---

Summary of changes

1. trunk patch to use hadoop-runner image

2. docker-hadoop-runner patch to combine changes for secure docker and the 
latest hadoop-runner image. 

3. The kdc image is not updated with this patch.  

 

cc: [~elek], [~ajayydv]


was (Author: xyao):
Summary of changes

1. trunk patch to use hadoop-runner image

2. docker-hadoop-runner patch to combine changes for secure docker and the 
latest hadoop-runner image. 

3. The kdc image is not updated with this patch.  

> Use apache/hadoop-runner image to test ozone secure cluster
> ---
>
> Key: HDDS-1019
> URL: https://issues.apache.org/jira/browse/HDDS-1019
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDDS-1019-docker-hadoop-runner.01.patch, 
> HDDS-1019-docker-hadoop-runner.02.patch, HDDS-1019-trunk.01.patch
>
>
> As of now the secure ozone cluster uses a custom image which is not based on 
> the apache/hadoop-runner image. There are multiple problems with that:
>  1. multiple script files which are maintained in the docker-hadoop-runner 
> branch are copied and duplicated in 
> hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts
>  2. The user of the image is root. It creates 
> core-site.xml/hdfs-site.xml/ozone-site.xml as the root user, which prevents 
> running all the default smoke tests
>  3. To build the base image with each build takes more time
> I propose to check what is missing from the apache/hadoop-ozone base image, 
> add it and use that one. 
> I marked it critical because 2): it breaks the run of the acceptance test 
> suite.






[jira] [Commented] (HDDS-1110) OzoneManager need to login during init when security is enabled.

2019-02-15 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769581#comment-16769581
 ] 

Xiaoyu Yao commented on HDDS-1110:
--

 
{quote}FTR: After the initialization of kerberos I got a NPE from ozoneManager 
but it's an independent problem)
{quote}
 

The NPE issue is tracked by HDDS-; I think it will be fixed as part of HDDS-134. 
cc: [~ajakumar]

> OzoneManager need to login during init when security is enabled.
> 
>
> Key: HDDS-1110
> URL: https://issues.apache.org/jira/browse/HDDS-1110
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1110.01.patch
>
>
> HDDS-776/HDDS-972 changed when the OM login happens.
> Now OM#init() may invoke SCM#getScmInfo() without a login, which fails 
> with the error below. This ticket is opened to fix it.
>  
> {code}
> ozoneManager_1  | java.io.IOException: DestHost:destPort scm:9863 , 
> LocalHost:localPort om/172.19.0.4:0. Failed on local exception: 
> java.io.IOException: org.apache.hadoop.security.AccessControlException: 
> Client cannot authenticate via:[KERBEROS]
> ozoneManager_1  | at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> ozoneManager_1  | at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> ozoneManager_1  | at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> ozoneManager_1  | at 
> java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> ozoneManager_1  | at 
> org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
> ozoneManager_1  | at 
> org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
> ozoneManager_1  | at 
> org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
> ozoneManager_1  | at org.apache.hadoop.ipc.Client.call(Client.java:1457)
> ozoneManager_1  | at org.apache.hadoop.ipc.Client.call(Client.java:1367)
> ozoneManager_1  | at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> ozoneManager_1  | at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> ozoneManager_1  | at com.sun.proxy.$Proxy28.getScmInfo(Unknown Source)
> ozoneManager_1  | at 
> org.apache.hadoop.hdds.scm.protocolPB.ScmBlockLocationProtocolClientSideTranslatorPB.getScmInfo(ScmBlockLocationProtocolClientSideTranslatorPB.java:154)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.lambda$getScmInfo$1(OzoneManager.java:910)
> ozoneManager_1  | at 
> org.apache.hadoop.utils.RetriableTask.call(RetriableTask.java:56)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.getScmInfo(OzoneManager.java:911)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.omInit(OzoneManager.java:873)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:842)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:771)
> {code}






[jira] [Commented] (HDDS-1038) Datanode fails to connect with secure SCM

2019-02-14 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768948#comment-16768948
 ] 

Xiaoyu Yao commented on HDDS-1038:
--

Thanks [~ajayydv] for the update. The test failure seems related to 
core-site.xml now being required with service-level authorization. Can you take 
a look and confirm?

 

{code}
h3. Error Message

core-site.xml not found
h3. Stacktrace

java.lang.RuntimeException: core-site.xml not found at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2957) at 
org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2925) at 
org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2805) at 
org.apache.hadoop.conf.Configuration.get(Configuration.java:1459) at 
org.apache.hadoop.security.authorize.ServiceAuthorizationManager.refreshWithLoadedConfiguration(ServiceAuthorizationManager.java:161)
 at 
org.apache.hadoop.security.authorize.ServiceAuthorizationManager.refresh(ServiceAuthorizationManager.java:150)
 at org.apache.hadoop.ipc.Server.refreshServiceAcl(Server.java:601)

{code}
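If confirmed, placing a minimal core-site.xml on the test classpath should satisfy the refresh path. A hedged sketch follows: hadoop.security.authorization is the standard Hadoop property that enables service-level authorization, but whether this is the right fix for this particular test is an assumption.

```xml
<!-- Minimal core-site.xml enabling service-level authorization.
     Putting this on the test classpath is an assumed fix, not verified. -->
<configuration>
  <property>
    <name>hadoop.security.authorization</name>
    <value>true</value>
  </property>
</configuration>
```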

> Datanode fails to connect with secure SCM
> -
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1038.00.patch, HDDS-1038.01.patch, 
> HDDS-1038.02.patch, HDDS-1038.03.patch, HDDS-1038.04.patch
>
>
> In a secure Ozone cluster, datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 






[jira] [Updated] (HDDS-1086) Remove RaftClient from OM

2019-02-14 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1086:
-
Target Version/s: 0.4.0

> Remove RaftClient from OM
> -
>
> Key: HDDS-1086
> URL: https://issues.apache.org/jira/browse/HDDS-1086
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: HA, OM
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-1086.001.patch
>
>
> Currently we run RaftClient in OM, which takes the incoming client requests 
> and submits them to the OM's Ratis server. This hop can be avoided if OM 
> submits the incoming client requests directly to its Ratis server.






[jira] [Updated] (HDDS-1108) Check s3bucket exists or not before MPU operations

2019-02-14 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1108:
-
Target Version/s: 0.4.0

> Check s3bucket exists or not before MPU operations
> --
>
> Key: HDDS-1108
> URL: https://issues.apache.org/jira/browse/HDDS-1108
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-1108.00.patch
>
>
> Add a check for whether the s3 bucket exists before performing MPU operations.
> Without this check, a user can still perform an MPU operation on a deleted 
> bucket and the operation could succeed.






[jira] [Commented] (HDDS-1101) SCM CA: Write Certificate information to SCM Metadata

2019-02-14 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768945#comment-16768945
 ] 

Xiaoyu Yao commented on HDDS-1101:
--

Thanks [~anu] for the patch. It looks good to me overall. Here are a few minor 
comments:

 

DefaultApprover.java

Line 104: is there a reason to use Time.monotonicNowNanos() as the serial ID for 
the certificate? This may be OK for the single-SCM case, but the ID may collide 
when there are multiple SCM instances. Should we reserve certain bits to 
partition the SCM IDs?
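The bit-partitioning idea could look like the sketch below. This is a hypothetical layout for illustration only (the 16/48-bit split and the class name are assumptions, not what HDDS-1101 implements): the high bits identify the issuing SCM instance, so serials from different SCMs can never collide even for identical timestamps.

```java
public class SerialIdSketch {
    // Hypothetical layout: high 16 bits identify the SCM instance,
    // low 48 bits carry a monotonic timestamp.
    static long serialId(int scmId, long monotonicNanos) {
        return ((long) (scmId & 0xFFFF) << 48) | (monotonicNanos & 0xFFFFFFFFFFFFL);
    }

    public static void main(String[] args) {
        long a = serialId(1, 123456789L);
        long b = serialId(2, 123456789L); // same timestamp, different SCM
        if (a == b) throw new AssertionError("IDs from different SCMs collided");
        if ((a >>> 48) != 1 || (b >>> 48) != 2) {
            throw new AssertionError("scmId bits not in the high word");
        }
        System.out.println("no collision across SCM ids");
    }
}
```

A trade-off of this scheme: 48 bits of nanosecond time wrap after a few days of uptime, so a real implementation would likely use a coarser clock or a persisted counter in the low bits.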

 

DefaultCAServer.java

Line 213: should we store after xcertHolder.complete(xcert);?

Line 245-250: should we wrap this with supplyAsync to make the revoke truly 
async?

 

StorageContainerManager.java

Line 266: NIT: typo "afte" should be "after"

Line 268: question wrt. the configurator usage: why don't we populate the 
initialized value back into the configurator with the setters, or do we just 
assume only the injector will set it?

 

Line 531: should we move the certStore down to internal of DefaultCAServer?

 

TestOmMultiPartKeyInfoCodec.java

Line 57: NIT: typo: random

 

 

> SCM CA: Write Certificate information to SCM Metadata
> -
>
> Key: HDDS-1101
> URL: https://issues.apache.org/jira/browse/HDDS-1101
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-1101.000.patch
>
>
> Make SCM CA write to the Metadata layer of SCM.






[jira] [Comment Edited] (HDDS-1019) Use apache/hadoop-runner image to test ozone secure cluster

2019-02-14 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768920#comment-16768920
 ] 

Xiaoyu Yao edited comment on HDDS-1019 at 2/15/19 4:08 AM:
---

Summary of changes

1. trunk patch to use hadoop-runner image

2. docker-hadoop-runner patch to combine changes for secure docker and the 
latest hadoop-runner image. 

3. The kdc image is not updated with this patch.  


was (Author: xyao):
Summary of changes

1. trunk patch to use hadoop-runner image

2. docker-hadoop-runner patch to combine changes for secure docker and the 
latest hadoop-runner image. 

 

> Use apache/hadoop-runner image to test ozone secure cluster
> ---
>
> Key: HDDS-1019
> URL: https://issues.apache.org/jira/browse/HDDS-1019
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDDS-1019-docker-hadoop-runner.01.patch, 
> HDDS-1019-docker-hadoop-runner.02.patch, HDDS-1019-trunk.01.patch
>
>
> As of now the secure ozone cluster uses a custom image which is not based on 
> the apache/hadoop-runner image. There are multiple problems with that:
>  1. multiple script files which are maintained in the docker-hadoop-runner 
> branch are copied and duplicated in 
> hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts
>  2. The user of the image is root. It creates 
> core-site.xml/hdfs-site.xml/ozone-site.xml as the root user, which prevents 
> running all the default smoke tests
>  3. To build the base image with each build takes more time
> I propose to check what is missing from the apache/hadoop-ozone base image, 
> add it and use that one. 
> I marked it critical because 2): it breaks the run of the acceptance test 
> suite.






[jira] [Commented] (HDDS-1019) Use apache/hadoop-runner image to test ozone secure cluster

2019-02-14 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768920#comment-16768920
 ] 

Xiaoyu Yao commented on HDDS-1019:
--

Summary of changes

1. trunk patch to use hadoop-runner image

2. docker-hadoop-runner patch to combine changes for secure docker and the 
latest hadoop-runner image. 

 

> Use apache/hadoop-runner image to test ozone secure cluster
> ---
>
> Key: HDDS-1019
> URL: https://issues.apache.org/jira/browse/HDDS-1019
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDDS-1019-docker-hadoop-runner.01.patch, 
> HDDS-1019-docker-hadoop-runner.02.patch, HDDS-1019-trunk.01.patch
>
>
> As of now the secure ozone cluster uses a custom image which is not based on 
> the apache/hadoop-runner image. There are multiple problems with that:
>  1. multiple script files which are maintained in the docker-hadoop-runner 
> branch are copied and duplicated in 
> hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts
>  2. The user of the image is root. It creates 
> core-site.xml/hdfs-site.xml/ozone-site.xml as the root user, which prevents 
> running all the default smoke tests
>  3. To build the base image with each build takes more time
> I propose to check what is missing from the apache/hadoop-ozone base image, 
> add it and use that one. 
> I marked it critical because 2): it breaks the run of the acceptance test 
> suite.






[jira] [Updated] (HDDS-1019) Use apache/hadoop-runner image to test ozone secure cluster

2019-02-14 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1019:
-
Attachment: HDDS-1019-trunk.01.patch

> Use apache/hadoop-runner image to test ozone secure cluster
> ---
>
> Key: HDDS-1019
> URL: https://issues.apache.org/jira/browse/HDDS-1019
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDDS-1019-docker-hadoop-runner.01.patch, 
> HDDS-1019-docker-hadoop-runner.02.patch, HDDS-1019-trunk.01.patch
>
>
> As of now the secure ozone cluster uses a custom image which is not based on 
> the apache/hadoop-runner image. There are multiple problems with that:
>  1. multiple script files which are maintained in the docker-hadoop-runner 
> branch are copied and duplicated in 
> hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts
>  2. The user of the image is root. It creates 
> core-site.xml/hdfs-site.xml/ozone-site.xml as the root user, which prevents 
> running all the default smoke tests
>  3. To build the base image with each build takes more time
> I propose to check what is missing from the apache/hadoop-ozone base image, 
> add it and use that one. 
> I marked it critical because 2): it breaks the run of the acceptance test 
> suite.






[jira] [Updated] (HDDS-1019) Use apache/hadoop-runner image to test ozone secure cluster

2019-02-14 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1019:
-
Attachment: HDDS-1019-docker-hadoop-runner.02.patch

> Use apache/hadoop-runner image to test ozone secure cluster
> ---
>
> Key: HDDS-1019
> URL: https://issues.apache.org/jira/browse/HDDS-1019
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDDS-1019-docker-hadoop-runner.01.patch, 
> HDDS-1019-docker-hadoop-runner.02.patch
>
>
> As of now the secure ozone cluster uses a custom image which is not based on 
> the apache/hadoop-runner image. There are multiple problems with that:
>  1. multiple script files which are maintained in the docker-hadoop-runner 
> branch are copied and duplicated in 
> hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts
>  2. The user of the image is root. It creates 
> core-site.xml/hdfs-site.xml/ozone-site.xml as the root user, which prevents 
> running all the default smoke tests
>  3. To build the base image with each build takes more time
> I propose to check what is missing from the apache/hadoop-ozone base image, 
> add it and use that one. 
> I marked it critical because 2): it breaks the run of the acceptance test 
> suite.






[jira] [Commented] (HDDS-1019) Use apache/hadoop-runner image to test ozone secure cluster

2019-02-14 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768919#comment-16768919
 ] 

Xiaoyu Yao commented on HDDS-1019:
--

Fixed a permission issue on the /data volume. 

> Use apache/hadoop-runner image to test ozone secure cluster
> ---
>
> Key: HDDS-1019
> URL: https://issues.apache.org/jira/browse/HDDS-1019
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDDS-1019-docker-hadoop-runner.01.patch, 
> HDDS-1019-docker-hadoop-runner.02.patch
>
>
> As of now the secure ozone cluster uses a custom image which is not based on 
> the apache/hadoop-runner image. There are multiple problems with that:
>  1. multiple script files which are maintained in the docker-hadoop-runner 
> branch are copied and duplicated in 
> hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts
>  2. The user of the image is root. It creates 
> core-site.xml/hdfs-site.xml/ozone-site.xml as the root user, which prevents 
> running all the default smoke tests
>  3. To build the base image with each build takes more time
> I propose to check what is missing from the apache/hadoop-ozone base image, 
> add it and use that one. 
> I marked it critical because 2): it breaks the run of the acceptance test 
> suite.






[jira] [Updated] (HDDS-1019) Use apache/hadoop-runner image to test ozone secure cluster

2019-02-14 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1019:
-
Attachment: (was: HDDS-1019.01.patch)

> Use apache/hadoop-runner image to test ozone secure cluster
> ---
>
> Key: HDDS-1019
> URL: https://issues.apache.org/jira/browse/HDDS-1019
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDDS-1019-docker-hadoop-runner.01.patch
>
>
> As of now the secure ozone cluster uses a custom image which is not based on 
> the apache/hadoop-runner image. There are multiple problems with that:
>  1. multiple script files which are maintained in the docker-hadoop-runner 
> branch are copied and duplicated in 
> hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts
>  2. The user of the image is root. It creates 
> core-site.xml/hdfs-site.xml/ozone-site.xml as the root user, which prevents 
> running all the default smoke tests
>  3. To build the base image with each build takes more time
> I propose to check what is missing from the apache/hadoop-ozone base image, 
> add it and use that one. 
> I marked it critical because 2): it breaks the run of the acceptance test 
> suite.






[jira] [Updated] (HDDS-1019) Use apache/hadoop-runner image to test ozone secure cluster

2019-02-14 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1019:
-
Attachment: HDDS-1019-docker-hadoop-runner.01.patch

> Use apache/hadoop-runner image to test ozone secure cluster
> ---
>
> Key: HDDS-1019
> URL: https://issues.apache.org/jira/browse/HDDS-1019
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDDS-1019-docker-hadoop-runner.01.patch
>
>
> As of now the secure ozone cluster uses a custom image which is not based on 
> the apache/hadoop-runner image. There are multiple problems with that:
>  1. multiple script files which are maintained in the docker-hadoop-runner 
> branch are copied and duplicated in 
> hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts
>  2. The user of the image is root. It creates 
> core-site.xml/hdfs-site.xml/ozone-site.xml as the root user, which prevents 
> running all the default smoke tests
>  3. To build the base image with each build takes more time
> I propose to check what is missing from the apache/hadoop-ozone base image, 
> add it and use that one. 
> I marked it critical because 2): it breaks the run of the acceptance test 
> suite.






[jira] [Commented] (HDDS-1019) Use apache/hadoop-runner image to test ozone secure cluster

2019-02-14 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768843#comment-16768843
 ] 

Xiaoyu Yao commented on HDDS-1019:
--

Attached a patch that updates the hadoop-runner base image with the changes 
necessary for running Ozone services in secure mode. 

Will fix the ozonesecure docker-compose in a separate patch, as they are in 
different branches. 

> Use apache/hadoop-runner image to test ozone secure cluster
> ---
>
> Key: HDDS-1019
> URL: https://issues.apache.org/jira/browse/HDDS-1019
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDDS-1019.01.patch
>
>
> As of now the secure ozone cluster uses a custom image which is not based on 
> the apache/hadoop-runner image. There are multiple problems with that:
>  1. Multiple script files maintained in the docker-hadoop-runner branch are 
> copied and duplicated in 
> hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts
>  2. The user of the image is root. It creates 
> core-site.xml/hdfs-site.xml/ozone-site.xml as the root user, which prevents 
> running all the default smoke tests.
>  3. Building the base image with each build takes more time.
> I propose to check what is missing from the apache/hadoop-ozone base image, 
> add it, and use that one.
> I marked it critical because of 2): it breaks the run of the acceptance test 
> suite.






[jira] [Updated] (HDDS-1019) Use apache/hadoop-runner image to test ozone secure cluster

2019-02-14 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1019:
-
Attachment: HDDS-1019.01.patch

> Use apache/hadoop-runner image to test ozone secure cluster
> ---
>
> Key: HDDS-1019
> URL: https://issues.apache.org/jira/browse/HDDS-1019
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDDS-1019.01.patch
>
>
> As of now the secure ozone cluster uses a custom image which is not based on 
> the apache/hadoop-runner image. There are multiple problems with that:
>  1. Multiple script files maintained in the docker-hadoop-runner branch are 
> copied and duplicated in 
> hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts
>  2. The user of the image is root. It creates 
> core-site.xml/hdfs-site.xml/ozone-site.xml as the root user, which prevents 
> running all the default smoke tests.
>  3. Building the base image with each build takes more time.
> I propose to check what is missing from the apache/hadoop-ozone base image, 
> add it, and use that one.
> I marked it critical because of 2): it breaks the run of the acceptance test 
> suite.






[jira] [Commented] (HDDS-1111) OzoneManager NPE reading private key file.

2019-02-14 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768813#comment-16768813
 ] 

Xiaoyu Yao commented on HDDS-:
--

This should be fixed after HDDS-134.

> OzoneManager NPE reading private key file.
> --
>
> Key: HDDS-
> URL: https://issues.apache.org/jira/browse/HDDS-
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> {code}
> ozoneManager_1  | 2019-02-14 23:21:51 ERROR OzoneManager:596 - Unable to read 
> key pair for OM.
> ozoneManager_1  | org.apache.hadoop.ozone.security.OzoneSecurityException: 
> Error reading private file for OzoneManager
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.readKeyPair(OzoneManager.java:638)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.startSecretManager(OzoneManager.java:594)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.startSecretManagerIfNecessary(OzoneManager.java:1216)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.start(OzoneManager.java:1007)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:768)
> ozoneManager_1  | Caused by: java.lang.NullPointerException
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.readKeyPair(OzoneManager.java:635)
> ozoneManager_1  | ... 4 more
> ozoneManager_1  | 2019-02-14 23:21:51 ERROR OzoneManager:772 - Failed to 
> start the OzoneManager.
> ozoneManager_1  | java.lang.RuntimeException: 
> org.apache.hadoop.ozone.security.OzoneSecurityException: Error reading 
> private file for OzoneManager
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.startSecretManager(OzoneManager.java:597)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.startSecretManagerIfNecessary(OzoneManager.java:1216)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.start(OzoneManager.java:1007)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:768)
> ozoneManager_1  | Caused by: 
> org.apache.hadoop.ozone.security.OzoneSecurityException: Error reading 
> private file for OzoneManager
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.readKeyPair(OzoneManager.java:638)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.startSecretManager(OzoneManager.java:594)
> ozoneManager_1  | ... 3 more
> ozoneManager_1  | Caused by: java.lang.NullPointerException
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.readKeyPair(OzoneManager.java:635)
> ozoneManager_1  | ... 4 more
> ozoneManager_1  | 2019-02-14 23:21:51 INFO  ExitUtil:210 - Exiting with 
> status 1: java.lang.RuntimeException: 
> org.apache.hadoop.ozone.security.OzoneSecurityException: Error reading 
> private file for OzoneManager
> ozoneManager_1  | 2019-02-14 23:21:51 INFO  OzoneManager:51 - SHUTDOWN_MSG: 
> {code}






[jira] [Updated] (HDDS-1110) OzoneManager need to login during init when security is enabled.

2019-02-14 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1110:
-
Status: Patch Available  (was: Open)

Reproduced and tested with the ozonesecure docker-compose. The NPE issue is 
tracked by HDDS-.

> OzoneManager need to login during init when security is enabled.
> 
>
> Key: HDDS-1110
> URL: https://issues.apache.org/jira/browse/HDDS-1110
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1110.01.patch
>
>
> HDDS-776/HDDS-972 changed when the OM login happens.
> Now OM#init() may invoke SCM#getScmInfo() without a login, which fails with 
> the following. This ticket is opened to fix it.
>  
> {code}
> ozoneManager_1  | java.io.IOException: DestHost:destPort scm:9863 , 
> LocalHost:localPort om/172.19.0.4:0. Failed on local exception: 
> java.io.IOException: org.apache.hadoop.security.AccessControlException: 
> Client cannot authenticate via:[KERBEROS]
> ozoneManager_1  | at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> ozoneManager_1  | at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> ozoneManager_1  | at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> ozoneManager_1  | at 
> java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> ozoneManager_1  | at 
> org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
> ozoneManager_1  | at 
> org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
> ozoneManager_1  | at 
> org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
> ozoneManager_1  | at org.apache.hadoop.ipc.Client.call(Client.java:1457)
> ozoneManager_1  | at org.apache.hadoop.ipc.Client.call(Client.java:1367)
> ozoneManager_1  | at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> ozoneManager_1  | at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> ozoneManager_1  | at com.sun.proxy.$Proxy28.getScmInfo(Unknown Source)
> ozoneManager_1  | at 
> org.apache.hadoop.hdds.scm.protocolPB.ScmBlockLocationProtocolClientSideTranslatorPB.getScmInfo(ScmBlockLocationProtocolClientSideTranslatorPB.java:154)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.lambda$getScmInfo$1(OzoneManager.java:910)
> ozoneManager_1  | at 
> org.apache.hadoop.utils.RetriableTask.call(RetriableTask.java:56)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.getScmInfo(OzoneManager.java:911)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.omInit(OzoneManager.java:873)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:842)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:771)
> {code}






[jira] [Updated] (HDDS-1110) OzoneManager need to login during init when security is enabled.

2019-02-14 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1110:
-
Attachment: HDDS-1110.01.patch

> OzoneManager need to login during init when security is enabled.
> 
>
> Key: HDDS-1110
> URL: https://issues.apache.org/jira/browse/HDDS-1110
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1110.01.patch
>
>
> HDDS-776/HDDS-972 changed when the OM login happens.
> Now OM#init() may invoke SCM#getScmInfo() without a login, which fails with 
> the following. This ticket is opened to fix it.
>  
> {code}
> ozoneManager_1  | java.io.IOException: DestHost:destPort scm:9863 , 
> LocalHost:localPort om/172.19.0.4:0. Failed on local exception: 
> java.io.IOException: org.apache.hadoop.security.AccessControlException: 
> Client cannot authenticate via:[KERBEROS]
> ozoneManager_1  | at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> ozoneManager_1  | at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> ozoneManager_1  | at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> ozoneManager_1  | at 
> java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> ozoneManager_1  | at 
> org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
> ozoneManager_1  | at 
> org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
> ozoneManager_1  | at 
> org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
> ozoneManager_1  | at org.apache.hadoop.ipc.Client.call(Client.java:1457)
> ozoneManager_1  | at org.apache.hadoop.ipc.Client.call(Client.java:1367)
> ozoneManager_1  | at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> ozoneManager_1  | at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> ozoneManager_1  | at com.sun.proxy.$Proxy28.getScmInfo(Unknown Source)
> ozoneManager_1  | at 
> org.apache.hadoop.hdds.scm.protocolPB.ScmBlockLocationProtocolClientSideTranslatorPB.getScmInfo(ScmBlockLocationProtocolClientSideTranslatorPB.java:154)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.lambda$getScmInfo$1(OzoneManager.java:910)
> ozoneManager_1  | at 
> org.apache.hadoop.utils.RetriableTask.call(RetriableTask.java:56)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.getScmInfo(OzoneManager.java:911)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.omInit(OzoneManager.java:873)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:842)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:771)
> {code}






[jira] [Created] (HDDS-1111) OzoneManager NPE reading private key file.

2019-02-14 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-:


 Summary: OzoneManager NPE reading private key file.
 Key: HDDS-
 URL: https://issues.apache.org/jira/browse/HDDS-
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


{code}

ozoneManager_1  | 2019-02-14 23:21:51 ERROR OzoneManager:596 - Unable to read 
key pair for OM.

ozoneManager_1  | org.apache.hadoop.ozone.security.OzoneSecurityException: 
Error reading private file for OzoneManager

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.readKeyPair(OzoneManager.java:638)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.startSecretManager(OzoneManager.java:594)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.startSecretManagerIfNecessary(OzoneManager.java:1216)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.start(OzoneManager.java:1007)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:768)

ozoneManager_1  | Caused by: java.lang.NullPointerException

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.readKeyPair(OzoneManager.java:635)

ozoneManager_1  | ... 4 more

ozoneManager_1  | 2019-02-14 23:21:51 ERROR OzoneManager:772 - Failed to start 
the OzoneManager.

ozoneManager_1  | java.lang.RuntimeException: 
org.apache.hadoop.ozone.security.OzoneSecurityException: Error reading private 
file for OzoneManager

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.startSecretManager(OzoneManager.java:597)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.startSecretManagerIfNecessary(OzoneManager.java:1216)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.start(OzoneManager.java:1007)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:768)

ozoneManager_1  | Caused by: 
org.apache.hadoop.ozone.security.OzoneSecurityException: Error reading private 
file for OzoneManager

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.readKeyPair(OzoneManager.java:638)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.startSecretManager(OzoneManager.java:594)

ozoneManager_1  | ... 3 more

ozoneManager_1  | Caused by: java.lang.NullPointerException

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.readKeyPair(OzoneManager.java:635)

ozoneManager_1  | ... 4 more

ozoneManager_1  | 2019-02-14 23:21:51 INFO  ExitUtil:210 - Exiting with status 
1: java.lang.RuntimeException: 
org.apache.hadoop.ozone.security.OzoneSecurityException: Error reading private 
file for OzoneManager

ozoneManager_1  | 2019-02-14 23:21:51 INFO  OzoneManager:51 - SHUTDOWN_MSG: 

{code}
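The NPE at OzoneManager.readKeyPair suggests the private key lookup returned null before the key was parsed. The following is a minimal sketch of a fail-fast guard when loading a PKCS#8 private key from disk; the class and method names are hypothetical, not the actual OzoneManager code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.KeyFactory;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.spec.PKCS8EncodedKeySpec;
import java.util.Arrays;

public class KeyFileGuard {

  // Fail fast with a descriptive error instead of letting a missing or
  // unreadable key file surface later as a bare NullPointerException.
  static PrivateKey readPrivateKey(Path keyFile) throws Exception {
    if (keyFile == null || !Files.isReadable(keyFile)) {
      throw new IOException("Private key file missing or unreadable: " + keyFile);
    }
    byte[] encoded = Files.readAllBytes(keyFile);  // PKCS#8 DER bytes
    return KeyFactory.getInstance("RSA")
        .generatePrivate(new PKCS8EncodedKeySpec(encoded));
  }

  public static void main(String[] args) throws Exception {
    // Round-trip: generate a key pair, persist the private key, read it back.
    KeyPair pair = KeyPairGenerator.getInstance("RSA").generateKeyPair();
    Path tmp = Files.createTempFile("om-private", ".der");
    Files.write(tmp, pair.getPrivate().getEncoded());
    PrivateKey read = readPrivateKey(tmp);
    if (!Arrays.equals(read.getEncoded(), pair.getPrivate().getEncoded())) {
      throw new AssertionError("round-trip mismatch");
    }
    // A missing file now produces an IOException with context, not an NPE.
    boolean threw = false;
    try {
      readPrivateKey(tmp.resolveSibling("does-not-exist.der"));
    } catch (IOException expected) {
      threw = true;
    }
    if (!threw) {
      throw new AssertionError("expected IOException for missing file");
    }
    System.out.println("ok");
  }
}
```

The main method doubles as a usage example: it exercises both the happy path and the missing-file path.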






[jira] [Created] (HDDS-1110) OzoneManager need to login during init when security is enabled.

2019-02-14 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-1110:


 Summary: OzoneManager need to login during init when security is 
enabled.
 Key: HDDS-1110
 URL: https://issues.apache.org/jira/browse/HDDS-1110
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


HDDS-776/HDDS-972 changed when the OM login happens.

Now OM#init() may invoke SCM#getScmInfo() without a login, which fails with 
the following. This ticket is opened to fix it.

 

{code}

ozoneManager_1  | java.io.IOException: DestHost:destPort scm:9863 , 
LocalHost:localPort om/172.19.0.4:0. Failed on local exception: 
java.io.IOException: org.apache.hadoop.security.AccessControlException: Client 
cannot authenticate via:[KERBEROS]

ozoneManager_1  | at 
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

ozoneManager_1  | at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)

ozoneManager_1  | at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

ozoneManager_1  | at 
java.lang.reflect.Constructor.newInstance(Constructor.java:423)

ozoneManager_1  | at 
org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)

ozoneManager_1  | at 
org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)

ozoneManager_1  | at 
org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)

ozoneManager_1  | at org.apache.hadoop.ipc.Client.call(Client.java:1457)

ozoneManager_1  | at org.apache.hadoop.ipc.Client.call(Client.java:1367)

ozoneManager_1  | at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)

ozoneManager_1  | at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)

ozoneManager_1  | at com.sun.proxy.$Proxy28.getScmInfo(Unknown Source)

ozoneManager_1  | at 
org.apache.hadoop.hdds.scm.protocolPB.ScmBlockLocationProtocolClientSideTranslatorPB.getScmInfo(ScmBlockLocationProtocolClientSideTranslatorPB.java:154)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.lambda$getScmInfo$1(OzoneManager.java:910)

ozoneManager_1  | at 
org.apache.hadoop.utils.RetriableTask.call(RetriableTask.java:56)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.getScmInfo(OzoneManager.java:911)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.omInit(OzoneManager.java:873)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:842)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:771)

{code}
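The fix amounts to an ordering guarantee: perform the Kerberos login before omInit() issues any RPC to SCM. Below is a toy model of that ordering with stand-in names (the real code would use a UserGroupInformation keytab login, and the stub return value is invented); a sketch of the ordering constraint, not the actual patch:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SecureInitOrder {
  private final AtomicBoolean loggedIn = new AtomicBoolean(false);

  // Stand-in for the Kerberos keytab login.
  void login() {
    loggedIn.set(true);
  }

  // Stand-in for the SCM#getScmInfo() RPC: without a prior login the real
  // call fails with "Client cannot authenticate via:[KERBEROS]".
  String getScmInfo() {
    if (!loggedIn.get()) {
      throw new IllegalStateException("RPC attempted before Kerberos login");
    }
    return "scm-cluster-id";
  }

  // The fix: omInit() logs in first when security is enabled, then talks to SCM.
  String omInit() {
    login();
    return getScmInfo();
  }

  public static void main(String[] args) {
    SecureInitOrder om = new SecureInitOrder();
    boolean threw = false;
    try {
      om.getScmInfo(); // pre-fix ordering: RPC before login fails
    } catch (IllegalStateException expected) {
      threw = true;
    }
    if (!threw) {
      throw new AssertionError("expected failure before login");
    }
    if (!new SecureInitOrder().omInit().equals("scm-cluster-id")) {
      throw new AssertionError("omInit should succeed after login");
    }
    System.out.println("ok");
  }
}
```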






[jira] [Resolved] (HDDS-1107) Fix findbugs issues in DefaultCertificateClient#handleCase

2019-02-14 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDDS-1107.
--
Resolution: Fixed

> Fix findbugs issues in DefaultCertificateClient#handleCase
> --
>
> Key: HDDS-1107
> URL: https://issues.apache.org/jira/browse/HDDS-1107
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
>
> {code}
> FindBugs :
>  
>    module:hadoop-hdds/common
>    Incompatible bit masks in (e & 0x2 == 0x1) yields a constant result in 
> org.apache.hadoop.hdds.security.x509.certificate.client.DefaultCertificateClient.handleCase(DefaultCertificateClient$InitCase)
>  At DefaultCertificateClient.java:& 0x2 == 0x1) yields a constant result in 
> org.apache.hadoop.hdds.security.x509.certificate.client.DefaultCertificateClient.handleCase(DefaultCertificateClient$InitCase)
>  At DefaultCertificateClient.java:[line 529]
>    Incompatible bit masks in (e & 0x4 == 0x1) yields a constant result in 
> org.apache.hadoop.hdds.security.x509.certificate.client.DefaultCertificateClient.handleCase(DefaultCertificateClient$InitCase)
>  At DefaultCertificateClient.java:& 0x4 == 0x1) yields a constant result in 
> org.apache.hadoop.hdds.security.x509.certificate.client.DefaultCertificateClient.handleCase(DefaultCertificateClient$InitCase)
>  At DefaultCertificateClient.java:[line 529]
>    Found reliance on default encoding in 
> org.apache.hadoop.hdds.security.x509.certificate.client.DefaultCertificateClient.validateKeyPair(PublicKey):in
>  
> org.apache.hadoop.hdds.security.x509.certificate.client.DefaultCertificateClient.validateKeyPair(PublicKey):
>  String.getBytes() At DefaultCertificateClient.java:[line 587]
>    Incompatible bit masks in (e & 0x2 == 0x1) yields a constant result in 
> org.apache.hadoop.hdds.security.x509.certificate.client.OMCertificateClient.handleCase(DefaultCertificateClient$InitCase)
>  At OMCertificateClient.java:& 0x2 == 0x1) yields a constant result in 
> org.apache.hadoop.hdds.security.x509.certificate.client.OMCertificateClient.handleCase(DefaultCertificateClient$InitCase)
>  At OMCertificateClient.java:[line 95]
>    Incompatible bit masks in (e & 0x4 == 0x1) yields a constant result in 
> org.apache.hadoop.hdds.security.x509.certificate.client.OMCertificateClient.handleCase(DefaultCertificateClient$InitCase)
>  At OMCertificateClient.java:& 0x4 == 0x1) yields a constant result in 
> org.apache.hadoop.hdds.security.x509.certificate.client.OMCertificateClient.handleCase(DefaultCertificateClient$InitCase)
>  At OMCertificateClient.java:[line 95]
> {code}






[jira] [Created] (HDDS-1107) Fix findbugs issues in DefaultCertificateClient#handleCase

2019-02-14 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-1107:


 Summary: Fix findbugs issues in DefaultCertificateClient#handleCase
 Key: HDDS-1107
 URL: https://issues.apache.org/jira/browse/HDDS-1107
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Ajay Kumar


{code}
FindBugs :
 
   module:hadoop-hdds/common
   Incompatible bit masks in (e & 0x2 == 0x1) yields a constant result in 
org.apache.hadoop.hdds.security.x509.certificate.client.DefaultCertificateClient.handleCase(DefaultCertificateClient$InitCase)
 At DefaultCertificateClient.java:& 0x2 == 0x1) yields a constant result in 
org.apache.hadoop.hdds.security.x509.certificate.client.DefaultCertificateClient.handleCase(DefaultCertificateClient$InitCase)
 At DefaultCertificateClient.java:[line 529]
   Incompatible bit masks in (e & 0x4 == 0x1) yields a constant result in 
org.apache.hadoop.hdds.security.x509.certificate.client.DefaultCertificateClient.handleCase(DefaultCertificateClient$InitCase)
 At DefaultCertificateClient.java:& 0x4 == 0x1) yields a constant result in 
org.apache.hadoop.hdds.security.x509.certificate.client.DefaultCertificateClient.handleCase(DefaultCertificateClient$InitCase)
 At DefaultCertificateClient.java:[line 529]
   Found reliance on default encoding in 
org.apache.hadoop.hdds.security.x509.certificate.client.DefaultCertificateClient.validateKeyPair(PublicKey):in
 
org.apache.hadoop.hdds.security.x509.certificate.client.DefaultCertificateClient.validateKeyPair(PublicKey):
 String.getBytes() At DefaultCertificateClient.java:[line 587]
   Incompatible bit masks in (e & 0x2 == 0x1) yields a constant result in 
org.apache.hadoop.hdds.security.x509.certificate.client.OMCertificateClient.handleCase(DefaultCertificateClient$InitCase)
 At OMCertificateClient.java:& 0x2 == 0x1) yields a constant result in 
org.apache.hadoop.hdds.security.x509.certificate.client.OMCertificateClient.handleCase(DefaultCertificateClient$InitCase)
 At OMCertificateClient.java:[line 95]
   Incompatible bit masks in (e & 0x4 == 0x1) yields a constant result in 
org.apache.hadoop.hdds.security.x509.certificate.client.OMCertificateClient.handleCase(DefaultCertificateClient$InitCase)
 At OMCertificateClient.java:& 0x4 == 0x1) yields a constant result in 
org.apache.hadoop.hdds.security.x509.certificate.client.OMCertificateClient.handleCase(DefaultCertificateClient$InitCase)
 At OMCertificateClient.java:[line 95]
{code}
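Both FindBugs complaints have textbook fixes: a check shaped like (e & 0x2) == 0x1 can never be true, because masking with 0x2 yields only 0 or 0x2, so the masked value must be compared against the mask itself (or tested against 0); and String.getBytes() should name an explicit charset. A minimal sketch, where the flag names are assumptions for illustration, not the actual InitCase constants:

```java
import java.nio.charset.StandardCharsets;

public class InitCaseChecks {
  // Hypothetical bit flags standing in for the certificate client init state.
  static final int HAS_PRIVATE_KEY = 0x1;
  static final int HAS_PUBLIC_KEY = 0x2;
  static final int HAS_CERT = 0x4;

  // Buggy shape FindBugs flagged: (state & 0x2) == 0x1 is constant false,
  // since masking with 0x2 can only yield 0 or 0x2, never 0x1.
  static boolean hasPublicKeyBuggy(int state) {
    return (state & HAS_PUBLIC_KEY) == 0x1;
  }

  // Fixed: compare the masked value against the mask itself.
  static boolean hasPublicKey(int state) {
    return (state & HAS_PUBLIC_KEY) == HAS_PUBLIC_KEY;
  }

  // Fixed encoding issue: always name the charset instead of relying on the
  // platform default, which varies between JVMs and locales.
  static byte[] challengeBytes(String challenge) {
    return challenge.getBytes(StandardCharsets.UTF_8);
  }

  public static void main(String[] args) {
    int state = HAS_PRIVATE_KEY | HAS_PUBLIC_KEY; // 0x3
    if (hasPublicKeyBuggy(state)) {
      throw new AssertionError("buggy mask check should be constant false");
    }
    if (!hasPublicKey(state) || hasPublicKey(HAS_PRIVATE_KEY | HAS_CERT)) {
      throw new AssertionError("fixed mask check wrong");
    }
    if (challengeBytes("abc").length != 3) {
      throw new AssertionError("utf-8 bytes of ascii string");
    }
    System.out.println("ok");
  }
}
```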






[jira] [Commented] (HDDS-1038) Datanode fails to connect with secure SCM

2019-02-13 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767468#comment-16767468
 ] 

Xiaoyu Yao commented on HDDS-1038:
--

Thanks for the update, [~ajayydv]. One more comment:

 

SCMDatanodeProtocolServer.java

line 182: Do we need to call refreshServiceAcl for the other two protocol 
servers, SCMSecurityProtocolServer.java and SCMBlockProtocolServer.java, in 
their constructors?

 

> Datanode fails to connect with secure SCM
> -
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1038.00.patch, HDDS-1038.01.patch, 
> HDDS-1038.02.patch, HDDS-1038.03.patch
>
>
> In a secure Ozone cluster. Datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 






[jira] [Resolved] (HDDS-1063) Implement OM init in secure cluster

2019-02-12 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDDS-1063.
--
Resolution: Duplicate

> Implement OM init in secure cluster
> ---
>
> Key: HDDS-1063
> URL: https://issues.apache.org/jira/browse/HDDS-1063
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
> Implement OM init in secure cluster.





