[jira] [Comment Edited] (HDDS-891) Create customized yetus personality for ozone

2019-02-15 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16770031#comment-16770031
 ] 

Allen Wittenauer edited comment on HDDS-891 at 2/16/19 7:14 AM:


From Precommit-HDDS-Admin:

{code}
https://gist.githubusercontent.com/elek/315f251b71bfb8d5f66e99eafbca7808/raw/a184384a5e13c345362fd15661584e5984886f51/ozone.sh
{code}

Why did you make this change despite the -1? 


was (Author: aw):

{code}
https://gist.githubusercontent.com/elek/315f251b71bfb8d5f66e99eafbca7808/raw/a184384a5e13c345362fd15661584e5984886f51/ozone.sh
{code}

Why did you make this change despite the -1? 

> Create customized yetus personality for ozone
> -
>
> Key: HDDS-891
> URL: https://issues.apache.org/jira/browse/HDDS-891
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> Ozone pre-commit builds (such as 
> https://builds.apache.org/job/PreCommit-HDDS-Build/) use the official hadoop 
> personality shipped with Yetus.
> Yetus personalities are bash scripts which contain the customization for 
> specific builds.
> The hadoop personality tries to identify which project should be built and 
> uses a partial build to build only the required subprojects, because the full 
> build is very time consuming.
> But in Ozone:
> 1.) The build + unit tests are very fast
> 2.) We don't need all the checks (for example the hadoop specific shading 
> test)
> 3.) We prefer to do a full build and full unit test for the hadoop-ozone and 
> hadoop-hdds subprojects (for example the hadoop-ozone integration test always 
> should be executed as it contains many generic unit tests)






[jira] [Commented] (HDDS-891) Create customized yetus personality for ozone

2019-02-15 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16770031#comment-16770031
 ] 

Allen Wittenauer commented on HDDS-891:
---


{code}
https://gist.githubusercontent.com/elek/315f251b71bfb8d5f66e99eafbca7808/raw/a184384a5e13c345362fd15661584e5984886f51/ozone.sh
{code}

Why did you make this change despite the -1? 

> Create customized yetus personality for ozone
> -
>
> Key: HDDS-891
> URL: https://issues.apache.org/jira/browse/HDDS-891
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> Ozone pre-commit builds (such as 
> https://builds.apache.org/job/PreCommit-HDDS-Build/) use the official hadoop 
> personality shipped with Yetus.
> Yetus personalities are bash scripts which contain the customization for 
> specific builds.
> The hadoop personality tries to identify which project should be built and 
> uses a partial build to build only the required subprojects, because the full 
> build is very time consuming.
> But in Ozone:
> 1.) The build + unit tests are very fast
> 2.) We don't need all the checks (for example the hadoop specific shading 
> test)
> 3.) We prefer to do a full build and full unit test for the hadoop-ozone and 
> hadoop-hdds subprojects (for example the hadoop-ozone integration test always 
> should be executed as it contains many generic unit tests)






[jira] [Updated] (HDDS-1085) Create an OM API to serve snapshots to Recon server

2019-02-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1085:
-
Labels: pull-request-available  (was: )

> Create an OM API to serve snapshots to Recon server
> ---
>
> Key: HDDS-1085
> URL: https://issues.apache.org/jira/browse/HDDS-1085
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siddharth Wagle
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1085-000.patch, HDDS-1085-001.patch, 
> HDDS-1085-002.patch
>
>
> We need to add an API to OM so that we can serve snapshots from the OM server.
>  - The snapshot should be streamed to fsck server with the ability to 
> throttle network utilization (like TransferFsImage)
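
For context, a minimal sketch of the kind of throttled streaming the description 
asks for, assuming a fixed bytes-per-second budget (TransferFsImage does this 
with a DataTransferThrottler); the class and names below are illustrative, not 
the actual OM API:

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

/** Illustrative throttled copy; not the actual OM snapshot servlet. */
public final class ThrottledStreamCopy {

  /**
   * Copies in to out, sleeping whenever the transfer would exceed
   * bytesPerSecond (a crude budget over wall-clock time).
   */
  public static void copy(InputStream in, OutputStream out, long bytesPerSecond)
      throws IOException, InterruptedException {
    byte[] buf = new byte[64 * 1024];
    long startNanos = System.nanoTime();
    long sent = 0;
    int n;
    while ((n = in.read(buf)) != -1) {
      out.write(buf, 0, n);
      sent += n;
      // Millis the budget allows for 'sent' bytes vs. millis actually elapsed.
      long expectedMillis = sent * 1000L / bytesPerSecond;
      long actualMillis = (System.nanoTime() - startNanos) / 1_000_000L;
      if (expectedMillis > actualMillis) {
        Thread.sleep(expectedMillis - actualMillis);
      }
    }
    out.flush();
  }
}
{code}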






[jira] [Work logged] (HDDS-1085) Create an OM API to serve snapshots to Recon server

2019-02-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1085?focusedWorklogId=199551=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-199551
 ]

ASF GitHub Bot logged work on HDDS-1085:


Author: ASF GitHub Bot
Created on: 16/Feb/19 06:46
Start Date: 16/Feb/19 06:46
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #494: HDDS-1085 : 
Create an OM API to serve snapshots to Recon server.
URL: https://github.com/apache/hadoop/pull/494
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 199551)
Time Spent: 10m
Remaining Estimate: 0h

> Create an OM API to serve snapshots to Recon server
> ---
>
> Key: HDDS-1085
> URL: https://issues.apache.org/jira/browse/HDDS-1085
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siddharth Wagle
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1085-000.patch, HDDS-1085-001.patch, 
> HDDS-1085-002.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We need to add an API to OM so that we can serve snapshots from the OM server.
>  - The snapshot should be streamed to fsck server with the ability to 
> throttle network utilization (like TransferFsImage)






[jira] [Commented] (HDDS-1041) Support TDE(Transparent Data Encryption) for Ozone

2019-02-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16770014#comment-16770014
 ] 

Hadoop QA commented on HDDS-1041:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 50s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 31s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
|   | hadoop.hdds.scm.node.TestDeadNodeHandler |
|   | hadoop.hdds.scm.chillmode.TestSCMChillModeManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1041 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958957/HDDS-1041.004.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  
shellcheck  |
| uname | Linux 44a33008cb81 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / dde0ab5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| shellcheck | v0.4.6 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2293/artifact/out/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2293/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2293/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2293/testReport/ |
| Max. process+thread count | 191 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/client hadoop-ozone/common hadoop-ozone/dist 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager hadoop-ozone/s3gateway 
U: hadoop-ozone |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2293/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HDDS-1085) Create an OM API to serve snapshots to Recon server

2019-02-15 Thread Aravindan Vijayan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16770009#comment-16770009
 ] 

Aravindan Vijayan commented on HDDS-1085:
-

[~anu] Thank you for the detailed review.

_DBCheckPointSnapShot#getCheckpointLocation – Return a path ?_
Yes, I agree that it is better to return a Path. The only reason I kept it as a 
String was that the RocksDB checkpoint API needs a String. I can change this in 
this JIRA. 

_OMDbSnapshotServlet.java#doGet_ 
Yes, I agree this may be a problem in the long term, especially for large DB 
sizes. I can add the metric counters in a later patch, which will give us a good 
understanding of any bottlenecks. 

_RDBCheckpointManager#createCheckpointSnapshot - I see we are reading the temp 
directory for the JVM env. but doesn't the checkpoint of RocksDB need/or is 
fast if it is on the same disk since it is able to hard link the SST and WAL 
files?_
 Yes, this is a good catch. I can go ahead and change the checkpointing 
location to the same directory/disk as the OM RocksDB directory in the current 
JIRA itself. 
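
For reference, a minimal sketch of that checkpoint call against the org.rocksdb 
Java API (path layout is illustrative); RocksDB hard-links the SST files when 
the checkpoint directory is on the same filesystem as the DB, which is why 
co-locating them is cheap:

{code:java}
import org.rocksdb.Checkpoint;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public final class CheckpointSketch {

  /**
   * Creates a point-in-time checkpoint of db under parentDir. The RocksDB
   * Java API takes the target path as a String, and the target directory
   * must not already exist.
   */
  public static String createCheckpoint(RocksDB db, String parentDir)
      throws RocksDBException {
    String checkpointPath =
        parentDir + "/om.checkpoint." + System.currentTimeMillis();
    try (Checkpoint checkpoint = Checkpoint.create(db)) {
      checkpoint.createCheckpoint(checkpointPath);
    }
    return checkpointPath;
  }
}
{code}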

> Create an OM API to serve snapshots to Recon server
> ---
>
> Key: HDDS-1085
> URL: https://issues.apache.org/jira/browse/HDDS-1085
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siddharth Wagle
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1085-000.patch, HDDS-1085-001.patch, 
> HDDS-1085-002.patch
>
>
> We need to add an API to OM so that we can serve snapshots from the OM server.
>  - The snapshot should be streamed to fsck server with the ability to 
> throttle network utilization (like TransferFsImage)






[jira] [Updated] (HDDS-1041) Support TDE(Transparent Data Encryption) for Ozone

2019-02-15 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1041:
-
Attachment: HDDS-1041.004.patch

> Support TDE(Transparent Data Encryption) for Ozone
> --
>
> Key: HDDS-1041
> URL: https://issues.apache.org/jira/browse/HDDS-1041
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1041.001.patch, HDDS-1041.002.patch, 
> HDDS-1041.003.patch, HDDS-1041.004.patch, Ozone Encryption At-Rest - 
> V2019.2.7.pdf, Ozone Encryption At-Rest v2019.2.1.pdf
>
>
> Currently Ozone saves data unencrypted on datanodes. This ticket is opened to 
> support TDE (Transparent Data Encryption) for Ozone to meet the requirements of 
> use cases that need protection of sensitive data.
> The table below summarizes the comparison of HDFS TDE and Ozone TDE: 
>  
> |*HDFS*|*Ozone*|
> |Encryption zone created at directory level.
>  All files created within the encryption zone will be encrypted.|Encryption 
> enabled at Bucket level.
>  All objects created within the encrypted bucket will be encrypted.|
> |Encryption zone created with ZK(Zone Key)|Encrypted Bucket created with 
> BEK(Bucket Encryption Key)|
> |Per File Encryption  
>  * File encrypted with DEK(Data Encryption Key)
>  * DEK is encrypted with ZK as EDEK by KMS and persisted as extended 
> attributes.|Per Object Encryption
>  * Object encrypted with DEK(Data Encryption Key)
>  * DEK is encrypted with BEK as EDEK by KMS and persisted as object metadata.|
>  
>  
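
A hedged sketch of the envelope-encryption flow the table describes, using 
Hadoop's KeyProviderCryptoExtension (the KMS client API HDFS TDE uses); whether 
Ozone wires it up exactly this way is an assumption, and the key name is 
illustrative:

{code:java}
import java.io.IOException;
import java.security.GeneralSecurityException;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.EncryptedKeyVersion;

public final class EnvelopeEncryptionSketch {

  /**
   * Write path: KMS generates a fresh DEK and returns it wrapped
   * (encrypted) with the bucket encryption key (BEK). Only the wrapped
   * EDEK is persisted as object metadata; the plaintext DEK is used to
   * encrypt the object data and is never stored.
   */
  public static EncryptedKeyVersion generateEdek(
      KeyProviderCryptoExtension kms, String bucketKeyName)
      throws IOException, GeneralSecurityException {
    return kms.generateEncryptedKey(bucketKeyName);
  }

  /** Read path: the EDEK from object metadata is unwrapped back to the DEK. */
  public static byte[] decryptDek(
      KeyProviderCryptoExtension kms, EncryptedKeyVersion edek)
      throws IOException, GeneralSecurityException {
    return kms.decryptEncryptedKey(edek).getMaterial();
  }
}
{code}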






[jira] [Commented] (HDDS-1041) Support TDE(Transparent Data Encryption) for Ozone

2019-02-15 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16770007#comment-16770007
 ] 

Xiaoyu Yao commented on HDDS-1041:
--

Uploaded patch v4, which fixes the checkstyle issues and the failure in 
TestResultCodes.codeMapping.

 

The other three failures seem to come from HDDS-981. Will open a separate ticket for the fix.

> Support TDE(Transparent Data Encryption) for Ozone
> --
>
> Key: HDDS-1041
> URL: https://issues.apache.org/jira/browse/HDDS-1041
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1041.001.patch, HDDS-1041.002.patch, 
> HDDS-1041.003.patch, HDDS-1041.004.patch, Ozone Encryption At-Rest - 
> V2019.2.7.pdf, Ozone Encryption At-Rest v2019.2.1.pdf
>
>
> Currently Ozone saves data unencrypted on datanodes. This ticket is opened to 
> support TDE (Transparent Data Encryption) for Ozone to meet the requirements of 
> use cases that need protection of sensitive data.
> The table below summarizes the comparison of HDFS TDE and Ozone TDE: 
>  
> |*HDFS*|*Ozone*|
> |Encryption zone created at directory level.
>  All files created within the encryption zone will be encrypted.|Encryption 
> enabled at Bucket level.
>  All objects created within the encrypted bucket will be encrypted.|
> |Encryption zone created with ZK(Zone Key)|Encrypted Bucket created with 
> BEK(Bucket Encryption Key)|
> |Per File Encryption  
>  * File encrypted with DEK(Data Encryption Key)
>  * DEK is encrypted with ZK as EDEK by KMS and persisted as extended 
> attributes.|Per Object Encryption
>  * Object encrypted with DEK(Data Encryption Key)
>  * DEK is encrypted with BEK as EDEK by KMS and persisted as object metadata.|
>  
>  






[jira] [Commented] (HDDS-1121) Key read failure when data is written parallel in to Ozone

2019-02-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769998#comment-16769998
 ] 

Hadoop QA commented on HDDS-1121:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  2m 
29s{color} | {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} root: The patch generated 6 new + 0 unchanged - 
0 fixed = 6 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 17s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 24s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
|   | hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1121 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958955/HDDS-1121.00.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 08fb95a8b850 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / dde0ab5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2292/artifact/out/patch-mvninstall-root.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2292/artifact/out/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2292/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2292/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2292/testReport/ |
| Max. process+thread count | 116 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/client hadoop-hdds/container-service 
hadoop-ozone/client hadoop-ozone/integration-test 
hadoop-ozone/objectstore-service U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2292/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Key read failure when data is written parallel in to Ozone

[jira] [Updated] (HDDS-1121) Key read failure when data is written parallel in to Ozone

2019-02-15 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1121:
-
Status: Patch Available  (was: In Progress)

> Key read failure when data is written parallel in to Ozone
> --
>
> Key: HDDS-1121
> URL: https://issues.apache.org/jira/browse/HDDS-1121
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-1121.00.patch
>
>
> When Hive is run with multiple threads for data ingestion into Ozone, reads 
> after ingestion fail with the error below.
> This issue was found during Hive testing.
> {code:java}
> caused by: org.apache.hadoop.ozone.common.OzoneChecksumException: Checksum 
> mismatch at index 0
>  at 
> org.apache.hadoop.ozone.common.ChecksumData.verifyChecksumDataMatches(ChecksumData.java:143)
>  at org.apache.hadoop.ozone.common.Checksum.verifyChecksum(Checksum.java:239)
>  at org.apache.hadoop.ozone.common.Checksum.verifyChecksum(Checksum.java:217)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockInputStream.readChunkFromContainer(BlockInputStream.java:227)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockInputStream.seek(BlockInputStream.java:259)
>  at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.seek(KeyInputStream.java:249)
>  at 
> org.apache.hadoop.ozone.client.io.KeyInputStream.seek(KeyInputStream.java:180)
>  at 
> org.apache.hadoop.fs.ozone.OzoneFSInputStream.seek(OzoneFSInputStream.java:62)
>  at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:82)
>  at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:121)
>  at 
> org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:111)
>  at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:555)
>  at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
>  at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:61)
>  at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:105)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.populateAndCacheStripeDetails(OrcInputFormat.java:1647)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.callInternal(OrcInputFormat.java:1533)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.access$2700(OrcInputFormat.java:1329)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:1513)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:1510)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:1510)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:1329)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266){code}
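
To make the failure concrete: "Checksum mismatch at index 0" means the checksum 
recomputed over a chunk at read time differs from the one stored at write time. 
A generic per-chunk illustration follows (CRC32 is chosen only for the sketch; 
Ozone's Checksum class supports several algorithms):

{code:java}
import java.util.zip.CRC32;

public final class ChunkChecksumSketch {

  /** Write time: compute one CRC32 per fixed-size chunk. */
  public static long[] checksumChunks(byte[] data, int chunkSize) {
    int chunks = (data.length + chunkSize - 1) / chunkSize;
    long[] sums = new long[chunks];
    for (int i = 0; i < chunks; i++) {
      CRC32 crc = new CRC32();
      int off = i * chunkSize;
      crc.update(data, off, Math.min(chunkSize, data.length - off));
      sums[i] = crc.getValue();
    }
    return sums;
  }

  /** Read time: any index where stored != recomputed fails verification. */
  public static void verify(long[] stored, long[] recomputed) {
    for (int i = 0; i < stored.length; i++) {
      if (stored[i] != recomputed[i]) {
        throw new IllegalStateException("Checksum mismatch at index " + i);
      }
    }
  }
}
{code}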






[jira] [Work started] (HDDS-1121) Key read failure when data is written parallel in to Ozone

2019-02-15 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1121 started by Bharat Viswanadham.

> Key read failure when data is written parallel in to Ozone
> --
>
> Key: HDDS-1121
> URL: https://issues.apache.org/jira/browse/HDDS-1121
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> When Hive is run with multiple threads for data ingestion into Ozone, reads 
> after ingestion fail with the error below.
> This issue was found during Hive testing.
> {code:java}
> caused by: org.apache.hadoop.ozone.common.OzoneChecksumException: Checksum 
> mismatch at index 0
>  at 
> org.apache.hadoop.ozone.common.ChecksumData.verifyChecksumDataMatches(ChecksumData.java:143)
>  at org.apache.hadoop.ozone.common.Checksum.verifyChecksum(Checksum.java:239)
>  at org.apache.hadoop.ozone.common.Checksum.verifyChecksum(Checksum.java:217)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockInputStream.readChunkFromContainer(BlockInputStream.java:227)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockInputStream.seek(BlockInputStream.java:259)
>  at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.seek(KeyInputStream.java:249)
>  at 
> org.apache.hadoop.ozone.client.io.KeyInputStream.seek(KeyInputStream.java:180)
>  at 
> org.apache.hadoop.fs.ozone.OzoneFSInputStream.seek(OzoneFSInputStream.java:62)
>  at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:82)
>  at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:121)
>  at 
> org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:111)
>  at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:555)
>  at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
>  at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:61)
>  at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:105)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.populateAndCacheStripeDetails(OrcInputFormat.java:1647)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.callInternal(OrcInputFormat.java:1533)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.access$2700(OrcInputFormat.java:1329)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:1513)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:1510)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:1510)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:1329)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266){code}






[jira] [Updated] (HDDS-1121) Key read failure when data is written parallel in to Ozone

2019-02-15 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1121:
-
Description: 
When Hive is run with multiple threads for data ingestion into Ozone, reads 
after ingestion fail with the error below.

This issue was found during Hive testing by [~t3rmin4t0r].
{code:java}
caused by: org.apache.hadoop.ozone.common.OzoneChecksumException: Checksum 
mismatch at index 0
 at 
org.apache.hadoop.ozone.common.ChecksumData.verifyChecksumDataMatches(ChecksumData.java:143)
 at org.apache.hadoop.ozone.common.Checksum.verifyChecksum(Checksum.java:239)
 at org.apache.hadoop.ozone.common.Checksum.verifyChecksum(Checksum.java:217)
 at 
org.apache.hadoop.hdds.scm.storage.BlockInputStream.readChunkFromContainer(BlockInputStream.java:227)
 at 
org.apache.hadoop.hdds.scm.storage.BlockInputStream.seek(BlockInputStream.java:259)
 at 
org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.seek(KeyInputStream.java:249)
 at 
org.apache.hadoop.ozone.client.io.KeyInputStream.seek(KeyInputStream.java:180)
 at 
org.apache.hadoop.fs.ozone.OzoneFSInputStream.seek(OzoneFSInputStream.java:62)
 at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:82)
 at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:121)
 at org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:111)
 at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:555)
 at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
 at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:61)
 at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:105)
 at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.populateAndCacheStripeDetails(OrcInputFormat.java:1647)
 at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.callInternal(OrcInputFormat.java:1533)
 at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.access$2700(OrcInputFormat.java:1329)
 at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:1513)
 at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:1510)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
 at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:1510)
 at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:1329)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266){code}

  was:
When Hive is run with multiple threads for data ingestion into Ozone, reads 
after ingestion fail with the error below.

This issue was found during Hive testing.
{code:java}
caused by: org.apache.hadoop.ozone.common.OzoneChecksumException: Checksum 
mismatch at index 0
 at 
org.apache.hadoop.ozone.common.ChecksumData.verifyChecksumDataMatches(ChecksumData.java:143)
 at org.apache.hadoop.ozone.common.Checksum.verifyChecksum(Checksum.java:239)
 at org.apache.hadoop.ozone.common.Checksum.verifyChecksum(Checksum.java:217)
 at 
org.apache.hadoop.hdds.scm.storage.BlockInputStream.readChunkFromContainer(BlockInputStream.java:227)
 at 
org.apache.hadoop.hdds.scm.storage.BlockInputStream.seek(BlockInputStream.java:259)
 at 
org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.seek(KeyInputStream.java:249)
 at 
org.apache.hadoop.ozone.client.io.KeyInputStream.seek(KeyInputStream.java:180)
 at 
org.apache.hadoop.fs.ozone.OzoneFSInputStream.seek(OzoneFSInputStream.java:62)
 at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:82)
 at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:121)
 at org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:111)
 at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:555)
 at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
 at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:61)
 at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:105)
 at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.populateAndCacheStripeDetails(OrcInputFormat.java:1647)
 at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.callInternal(OrcInputFormat.java:1533)
 at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.access$2700(OrcInputFormat.java:1329)
 at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:1513)
 at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:1510)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 

[jira] [Updated] (HDDS-1121) Key read failure when data is written parallel in to Ozone

2019-02-15 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1121:
-
Attachment: HDDS-1121.00.patch

> Key read failure when data is written parallel in to Ozone
> --
>
> Key: HDDS-1121
> URL: https://issues.apache.org/jira/browse/HDDS-1121
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-1121.00.patch
>
>
> When Hive is run with multiple threads for data ingestion into Ozone, reads 
> after ingestion fail with the error below.
> This issue was found during Hive testing.
> {code:java}
> caused by: org.apache.hadoop.ozone.common.OzoneChecksumException: Checksum 
> mismatch at index 0
>  at 
> org.apache.hadoop.ozone.common.ChecksumData.verifyChecksumDataMatches(ChecksumData.java:143)
>  at org.apache.hadoop.ozone.common.Checksum.verifyChecksum(Checksum.java:239)
>  at org.apache.hadoop.ozone.common.Checksum.verifyChecksum(Checksum.java:217)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockInputStream.readChunkFromContainer(BlockInputStream.java:227)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockInputStream.seek(BlockInputStream.java:259)
>  at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.seek(KeyInputStream.java:249)
>  at 
> org.apache.hadoop.ozone.client.io.KeyInputStream.seek(KeyInputStream.java:180)
>  at 
> org.apache.hadoop.fs.ozone.OzoneFSInputStream.seek(OzoneFSInputStream.java:62)
>  at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:82)
>  at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:121)
>  at 
> org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:111)
>  at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:555)
>  at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
>  at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:61)
>  at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:105)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.populateAndCacheStripeDetails(OrcInputFormat.java:1647)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.callInternal(OrcInputFormat.java:1533)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.access$2700(OrcInputFormat.java:1329)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:1513)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:1510)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:1510)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:1329)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266){code}






[jira] [Updated] (HDDS-1121) Key read failure when data is written parallel in to Ozone

2019-02-15 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1121:
-
Target Version/s: 0.4.0

> Key read failure when data is written parallel in to Ozone
> --
>
> Key: HDDS-1121
> URL: https://issues.apache.org/jira/browse/HDDS-1121
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-1121.00.patch
>
>
> When Hive is run with multiple threads for data ingestion into Ozone, reads 
> after ingestion fail with the error below.
> This issue was found during Hive testing.
> {code:java}
> caused by: org.apache.hadoop.ozone.common.OzoneChecksumException: Checksum 
> mismatch at index 0
>  at 
> org.apache.hadoop.ozone.common.ChecksumData.verifyChecksumDataMatches(ChecksumData.java:143)
>  at org.apache.hadoop.ozone.common.Checksum.verifyChecksum(Checksum.java:239)
>  at org.apache.hadoop.ozone.common.Checksum.verifyChecksum(Checksum.java:217)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockInputStream.readChunkFromContainer(BlockInputStream.java:227)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockInputStream.seek(BlockInputStream.java:259)
>  at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.seek(KeyInputStream.java:249)
>  at 
> org.apache.hadoop.ozone.client.io.KeyInputStream.seek(KeyInputStream.java:180)
>  at 
> org.apache.hadoop.fs.ozone.OzoneFSInputStream.seek(OzoneFSInputStream.java:62)
>  at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:82)
>  at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:121)
>  at 
> org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:111)
>  at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:555)
>  at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
>  at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:61)
>  at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:105)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.populateAndCacheStripeDetails(OrcInputFormat.java:1647)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.callInternal(OrcInputFormat.java:1533)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.access$2700(OrcInputFormat.java:1329)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:1513)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:1510)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:1510)
>  at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:1329)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266){code}






[jira] [Created] (HDDS-1121) Key read failure when data is written parallel in to Ozone

2019-02-15 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1121:


 Summary: Key read failure when data is written parallel in to Ozone
 Key: HDDS-1121
 URL: https://issues.apache.org/jira/browse/HDDS-1121
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


When Hive is run with multiple threads for data ingestion into Ozone, reads 
after ingestion fail with the error below.

This issue was found during Hive testing.
{code:java}
caused by: org.apache.hadoop.ozone.common.OzoneChecksumException: Checksum 
mismatch at index 0
 at 
org.apache.hadoop.ozone.common.ChecksumData.verifyChecksumDataMatches(ChecksumData.java:143)
 at org.apache.hadoop.ozone.common.Checksum.verifyChecksum(Checksum.java:239)
 at org.apache.hadoop.ozone.common.Checksum.verifyChecksum(Checksum.java:217)
 at 
org.apache.hadoop.hdds.scm.storage.BlockInputStream.readChunkFromContainer(BlockInputStream.java:227)
 at 
org.apache.hadoop.hdds.scm.storage.BlockInputStream.seek(BlockInputStream.java:259)
 at 
org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.seek(KeyInputStream.java:249)
 at 
org.apache.hadoop.ozone.client.io.KeyInputStream.seek(KeyInputStream.java:180)
 at 
org.apache.hadoop.fs.ozone.OzoneFSInputStream.seek(OzoneFSInputStream.java:62)
 at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:82)
 at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:121)
 at org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:111)
 at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:555)
 at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
 at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:61)
 at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:105)
 at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.populateAndCacheStripeDetails(OrcInputFormat.java:1647)
 at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.callInternal(OrcInputFormat.java:1533)
 at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.access$2700(OrcInputFormat.java:1329)
 at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:1513)
 at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:1510)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
 at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:1510)
 at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:1329)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266){code}






[jira] [Commented] (HDDS-594) SCM CA: DN sends CSR and uses certificate issued by SCM

2019-02-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769974#comment-16769974
 ] 

Hadoop QA commented on HDDS-594:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 53s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m  7s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
|   | hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-594 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958951/HDDS-594.01.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 125a1dcfe929 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / dde0ab5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2291/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2291/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2291/testReport/ |
| Max. process+thread count | 115 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-hdds/container-service U: hadoop-hdds |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2291/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> SCM CA: DN sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-594
> URL: https://issues.apache.org/jira/browse/HDDS-594
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: 

[jira] [Commented] (HDDS-1053) Generate RaftGroupId from OMServiceID

2019-02-15 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769963#comment-16769963
 ] 

Hanisha Koneru commented on HDDS-1053:
--

[~avijayan], I don't think there is a JIRA to track the unit test failure in 
TestOzoneManagerRatisServer. We can open one for that.

> Generate RaftGroupId from OMServiceID
> -
>
> Key: HDDS-1053
> URL: https://issues.apache.org/jira/browse/HDDS-1053
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1053-000.patch, HDDS-1053-001.patch, 
> HDDS-1053-002.patch, HDDS-1053-003.patch, HDDS-1053-004.patch
>
>
> Ratis requires {{RaftGroupId}} to be a UUID. We need to generate this ID from 
> the {{OMServiceID}} so that it is consistent across all the OM nodes in an HA 
> service.
> Currently, we expect {{OMServiceId}} to be a 16 character string so that it 
> can be converted to a UUID. But {{OMServiceID}} is a user configurable 
> setting. Hence we cannot force users to input a 16 character string.
> One option is to hash the {{OMServiceID}} string, then truncate it to UUID 
> length and use that to generate the UUID.
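
The hash-and-truncate option maps directly onto java.util.UUID's name-based 
(version 3, MD5) form, which is deterministic for the same input on every node; 
a minimal sketch:

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.UUID;

public final class RaftGroupIdSketch {

  /**
   * Derives a deterministic UUID from an arbitrary-length OMServiceID.
   * nameUUIDFromBytes hashes the input with MD5 and shapes the digest
   * into a version-3 UUID, so every OM node configured with the same
   * service ID computes the same RaftGroupId.
   */
  public static UUID raftGroupIdFor(String omServiceId) {
    return UUID.nameUUIDFromBytes(
        omServiceId.getBytes(StandardCharsets.UTF_8));
  }
}
{code}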






[jira] [Commented] (HDDS-594) SCM CA: DN sends CSR and uses certificate issued by SCM

2019-02-15 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769961#comment-16769961
 ] 

Ajay Kumar commented on HDDS-594:
-

Patch v1 adds all valid hostnames.

> SCM CA: DN sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-594
> URL: https://issues.apache.org/jira/browse/HDDS-594
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-594.00.patch, HDDS-594.01.patch
>
>







[jira] [Updated] (HDDS-594) SCM CA: DN sends CSR and uses certificate issued by SCM

2019-02-15 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-594:

Attachment: HDDS-594.01.patch

> SCM CA: DN sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-594
> URL: https://issues.apache.org/jira/browse/HDDS-594
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-594.00.patch, HDDS-594.01.patch
>
>







[jira] [Commented] (HDDS-1085) Create an OM API to serve snapshots to Recon server

2019-02-15 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769933#comment-16769933
 ] 

Anu Engineer commented on HDDS-1085:


[~avijayan] It is a very good patch, well written and very easy to understand. 
I have some very minor comments.
 # *DBCheckPointSnapShot#getCheckpointLocation* – Return a Path?
 # *OMDbSnapshotServlet.java#doGet* - I understand that doing this inline is 
perhaps simpler than anything else. But we seem to be doing, one, 
checkpointing, and two, tarring, before we start the transfer. For DB sizes in 
the GBs it might be ok, but in the long run I am worried that we might start 
seeing client timeouts.
 ## To understand what is happening, it might be interesting to have 3 counters 
– or a map of counters.
 ### How much time are we taking for each CheckPoint
 ### How much time are we taking for each Tar operation – along with sizes
 ### How much time are we taking for the transfer.
 ## You don't have to do this in this patch; feel free to add that in a 
different patch. In the long run, if we have issues like client timeouts, these 
numbers will help us tune the client params. Also, at some point, we will have 
to do this in a background thread and just return when we are ready, and not 
sync like this. But this is a great start. So let us go ahead and see what we 
can get out of this.
 # *OMDbSnapshotServlet.java#doGet* - Since we are using the TransferImage 
class, are we going to carry the hadoop-hdfs jar too? Should we even consider 
moving this to hadoop-common? [~xyao], [~elek], [~bharatviswa]
 # *OmUtils.java* - check if we already have this tar-file code in Ozone. I 
think we have something like this already, [~elek]?
 # *OmUtils.java#addFilesToArchive* – In the recursive call we seem to pass 
_cFile.getAbsolutePath_, is that expected? Or should the archive contain 
relative paths? (See the sketch after this list.)
 # *RDBCheckpointManager#createCheckpointSnapshot* - I see we are reading the 
temp directory from the JVM env, but isn't the RocksDB checkpoint faster if it 
is on the same disk, since it is able to hard link the SST and WAL files? Just 
wanted to make sure that my understanding is not busted.
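
On the addFilesToArchive question above: entries added under absolute names 
unpack into the source's absolute layout, so archive entry names usually should 
be relative to the snapshot root. A hedged sketch with Apache Commons Compress 
(whether OmUtils uses this library is an assumption; names are illustrative):

{code:java}
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.file.Path;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;

public final class TarSketch {

  /**
   * Adds file to the archive under a name relative to root, so the
   * tarball unpacks into its own directory rather than recreating the
   * absolute checkpoint path of the OM host. Directories would recurse
   * through this same method.
   */
  static void addFileRelative(TarArchiveOutputStream tar, Path root, File file)
      throws IOException {
    String entryName = root.relativize(file.toPath()).toString();
    tar.putArchiveEntry(new TarArchiveEntry(file, entryName));
    try (FileInputStream in = new FileInputStream(file)) {
      in.transferTo(tar);
    }
    tar.closeArchiveEntry();
  }
}
{code}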

 

 

> Create an OM API to serve snapshots to Recon server
> ---
>
> Key: HDDS-1085
> URL: https://issues.apache.org/jira/browse/HDDS-1085
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siddharth Wagle
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1085-000.patch, HDDS-1085-001.patch, 
> HDDS-1085-002.patch
>
>
> We need to add an API to OM so that we can serve snapshots from the OM server.
>  - The snapshot should be streamed to fsck server with the ability to 
> throttle network utilization (like TransferFsImage)






[jira] [Commented] (HDFS-14130) Make ZKFC ObserverNode aware

2019-02-15 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769937#comment-16769937
 ] 

Konstantin Shvachko commented on HDFS-14130:


Looked at the {{TestDFSZKFailoverController}} failure. The problem is that, with 
{{"-forcemanual"}}, it waits for manual confirmation to be typed on the console 
but never receives it. You might want to follow the pattern of {{TestDFSHAAdmin}}, 
which also uses {{"-forcemanual"}}.
If I type "Y" in the debugger, the test passes.

> Make ZKFC ObserverNode aware
> 
>
> Key: HDFS-14130
> URL: https://issues.apache.org/jira/browse/HDFS-14130
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Assignee: xiangheng
>Priority: Major
> Attachments: HDFS-14130-HDFS-12943.001.patch, 
> HDFS-14130-HDFS-12943.003.patch, HDFS-14130-HDFS-12943.004.patch, 
> HDFS-14130-HDFS-12943.005.patch, HDFS-14130-HDFS-12943.006.patch, 
> HDFS-14130-HDFS-12943.007.patch
>
>
> Need to fix automatic failover with ZKFC. Currently it does not know about 
> ObserverNodes trying to convert them to SBNs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1084) Ozone Recon server

2019-02-15 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1084:
---
Summary: Ozone Recon server  (was: Ozone FSCK server)

> Ozone Recon server
> --
>
> Key: HDDS-1084
> URL: https://issues.apache.org/jira/browse/HDDS-1084
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: fsck
>Affects Versions: 0.4.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
>
> Fsck Server at a high level will maintain a global view of Ozone that is not 
> available from SCM or OM, and will answer queries like: how many volumes 
> exist; how many buckets exist per volume; which volume has the most buckets; 
> which buckets have not been accessed for a year; which blocks are corrupt; 
> and which blocks on datanodes are unused.
> I will work on a design document and attach it in a few days.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1084) Ozone Recon server

2019-02-15 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1084:
---
Description: 
Recon Server at a high level will maintain a global view of Ozone that is not 
available from SCM or OM, and will answer queries like: how many volumes exist; 
how many buckets exist per volume; which volume has the most buckets; which 
buckets have not been accessed for a year; which blocks are corrupt; and which 
blocks on datanodes are unused.

I will work on a design document and attach it in a few days.

  was:
Fsck Server at a high level will maintain a global view of Ozone that is not 
available from SCM or OM, and will answer queries like: how many volumes exist; 
how many buckets exist per volume; which volume has the most buckets; which 
buckets have not been accessed for a year; which blocks are corrupt; and which 
blocks on datanodes are unused.

I will work on a design document and attach it in a few days.


> Ozone Recon server
> --
>
> Key: HDDS-1084
> URL: https://issues.apache.org/jira/browse/HDDS-1084
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: fsck
>Affects Versions: 0.4.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
>
> Recon Server at a high level will maintain a global view of Ozone that is not 
> available from SCM or OM, and will answer queries like: how many volumes 
> exist; how many buckets exist per volume; which volume has the most buckets; 
> which buckets have not been accessed for a year; which blocks are corrupt; 
> and which blocks on datanodes are unused.
> I will work on a design document and attach it in a few days.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1085) Create an OM API to serve snapshots to Recon server

2019-02-15 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769933#comment-16769933
 ] 

Anu Engineer edited comment on HDDS-1085 at 2/16/19 1:12 AM:
-

[~avijayan] It is a very good patch, well written and very easy to understand. 
I have some very minor comments.
 # *DBCheckPointSnapShot#getCheckpointLocation* – Should this return a Path?
 # *OMDbSnapshotServlet.java#doGet* - I understand that doing this inline is 
perhaps simpler than anything else. But we seem to be doing two steps – one, 
checkpointing; two, tarring – before we start the transfer. For DB sizes in the 
GBs it might be OK, but in the long run I am worried that we might start seeing 
client timeouts.
 ## To understand what is happening, it might be interesting to have 3 counters 
– or a map of counters.
 ### How much time are we taking for each checkpoint?
 ### How much time are we taking for each tar operation – along with sizes?
 ### How much time are we taking for the transfer?
 ## You don't have to do this in this patch; feel free to add that in a 
different patch. In the long run, if we have issues like client timeouts, these 
numbers will help us tune the client params. Also, at some point, we will have 
to do this in a background thread and just return when we are ready, rather 
than synchronously like this. But this is a great start. So let us go ahead and 
see what we can get out of this.
 # *OMDbSnapshotServlet.java#doGet* - Since we are using the TransferImage 
class, are we going to carry the hadoop-hdfs JAR too? Should we even consider 
moving this to hadoop-common? [~xyao], [~elek], [~bharatviswa]
 # OmUtils.java – check whether we already have this tar-file code in Ozone. I 
think we have something like this already, [~elek]?
 # *OmUtils.java#addFilesToArchive* – In the recursive call we seem to pass 
_cFile.getAbsolutePath_; is that expected, or should the archive contain 
relative paths?
 # *RDBCheckpointManager#createCheckpointSnapshot* - I see we are reading the 
temp directory from the JVM environment, but doesn't the RocksDB checkpoint 
need to be (or isn't it only fast) on the same disk, since it can hard-link the 
SST and WAL files? Just wanted to make sure that my understanding is not busted.

 

 


was (Author: anu):
[~avijayan] It is a very good patch, well written and very easy to understand. 
I have some very minor comments.
 # *DBCheckPointSnapShot#getCheckpointLocation* – Should this return a Path?
 # *OMDbSnapshotServlet.java#doGe*t - I understand that doing this inline is 
perhaps simpler than anything else. But we seem to be doing two steps – one, 
checkpointing; two, tarring – before we start the transfer. For DB sizes in the 
GBs it might be OK, but in the long run I am worried that we might start seeing 
client timeouts.
 ## To understand what is happening, it might be interesting to have 3 counters 
– or a map of counters.
 ### How much time are we taking for each checkpoint?
 ### How much time are we taking for each tar operation – along with sizes?
 ### How much time are we taking for the transfer?
 ## You don't have to do this in this patch; feel free to add that in a 
different patch. In the long run, if we have issues like client timeouts, these 
numbers will help us tune the client params. Also, at some point, we will have 
to do this in a background thread and just return when we are ready, rather 
than synchronously like this. But this is a great start. So let us go ahead and 
see what we can get out of this.
 # *OMDbSnapshotServlet.java#doGet* - Since we are using the TransferImage 
class, are we going to carry the hadoop-hdfs JAR too? Should we even consider 
moving this to hadoop-common? [~xyao], [~elek], [~bharatviswa]
 # OmUtils.java – check whether we already have this tar-file code in Ozone. I 
think we have something like this already, [~elek]?
 # *OmUtils.java#addFilesToArchive* – In the recursive call we seem to pass 
_cFile.getAbsolutePath_; is that expected, or should the archive contain 
relative paths?
 # *RDBCheckpointManager#createCheckpointSnapshot* - I see we are reading the 
temp directory from the JVM environment, but doesn't the RocksDB checkpoint 
need to be (or isn't it only fast) on the same disk, since it can hard-link the 
SST and WAL files? Just wanted to make sure that my understanding is not busted.

 

 

> Create an OM API to serve snapshots to Recon server
> ---
>
> Key: HDDS-1085
> URL: https://issues.apache.org/jira/browse/HDDS-1085
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siddharth Wagle
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1085-000.patch, HDDS-1085-001.patch, 
> HDDS-1085-002.patch
>
>
> We need to add an API to OM so that we can serve snapshots from the OM server.
>  - The snapshot should be streamed to the fsck server with the ability to 
> throttle network 

[jira] [Commented] (HDFS-14258) Introduce Java Concurrent Package To DataXceiverServer Class

2019-02-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769911#comment-16769911
 ] 

Hudson commented on HDFS-14258:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15981 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15981/])
HDFS-14258. Introduce Java Concurrent Package To DataXceiverServer (inigoiri: 
rev dde0ab55aadcf7c9cf71dbe36d90e97da6bc9498)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeReconfiguration.java


> Introduce Java Concurrent Package To DataXceiverServer Class
> 
>
> Key: HDFS-14258
> URL: https://issues.apache.org/jira/browse/HDFS-14258
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14258.1.patch, HDFS-14258.2.patch, 
> HDFS-14258.3.patch, HDFS-14258.4.patch, HDFS-14258.5.patch, 
> HDFS-14258.6.patch, HDFS-14258.7.patch, HDFS-14258.8.patch, HDFS-14258.9.patch
>
>
> * Use Java concurrent package to replace current facilities in 
> {{DataXceiverServer}}.
> * A little bit of extra clean up



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14258) Introduce Java Concurrent Package To DataXceiverServer Class

2019-02-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769903#comment-16769903
 ] 

Íñigo Goiri commented on HDFS-14258:


Thanks [~belugabehr] for the patch and for dealing with the review.
Committed to trunk.

> Introduce Java Concurrent Package To DataXceiverServer Class
> 
>
> Key: HDFS-14258
> URL: https://issues.apache.org/jira/browse/HDFS-14258
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14258.1.patch, HDFS-14258.2.patch, 
> HDFS-14258.3.patch, HDFS-14258.4.patch, HDFS-14258.5.patch, 
> HDFS-14258.6.patch, HDFS-14258.7.patch, HDFS-14258.8.patch, HDFS-14258.9.patch
>
>
> * Use Java concurrent package to replace current facilities in 
> {{DataXceiverServer}}.
> * A little bit of extra clean up



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14258) Introduce Java Concurrent Package To DataXceiverServer Class

2019-02-15 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14258:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> Introduce Java Concurrent Package To DataXceiverServer Class
> 
>
> Key: HDFS-14258
> URL: https://issues.apache.org/jira/browse/HDFS-14258
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14258.1.patch, HDFS-14258.2.patch, 
> HDFS-14258.3.patch, HDFS-14258.4.patch, HDFS-14258.5.patch, 
> HDFS-14258.6.patch, HDFS-14258.7.patch, HDFS-14258.8.patch, HDFS-14258.9.patch
>
>
> * Use Java concurrent package to replace current facilities in 
> {{DataXceiverServer}}.
> * A little bit of extra clean up



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1041) Support TDE(Transparent Data Encryption) for Ozone

2019-02-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769882#comment-16769882
 ] 

Hadoop QA commented on HDDS-1041:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 32s{color} | {color:orange} hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 44s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m  3s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.exceptions.TestResultCodes |
|   | hadoop.hdds.scm.chillmode.TestSCMChillModeManager |
|   | hadoop.hdds.scm.node.TestDeadNodeHandler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1041 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958944/HDDS-1041.003.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  
shellcheck  |
| uname | Linux 725a41211413 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / afe126d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| shellcheck | v0.4.6 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2290/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2290/artifact/out/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2290/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2290/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2290/testReport/ |
| Max. process+thread count | 215 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/client hadoop-ozone/common hadoop-ozone/dist 
hadoop-ozone/integration-test 

[jira] [Commented] (HDFS-14258) Introduce Java Concurrent Package To DataXceiverServer Class

2019-02-15 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769879#comment-16769879
 ] 

BELUGA BEHR commented on HDFS-14258:


It took me several tries, even locally, but I did eventually get a successful run:

{code}
user@apache-dev:~/hadoop/hadoop$ mvn clean test 
-Dtest=TestNameNodeMetadataConsistency,TestDataNodeLifeline,TestDataNodeVolumeFailure,TestUnderReplicatedBlocks,TestJournalNodeSync,TestDFSClientRetries,TestBPOfferService

[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 13:39 min
[INFO] Finished at: 2019-02-15T18:50:50-05:00
[INFO] Final Memory: 398M/1516M
[INFO] 
{code}

> Introduce Java Concurrent Package To DataXceiverServer Class
> 
>
> Key: HDFS-14258
> URL: https://issues.apache.org/jira/browse/HDFS-14258
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14258.1.patch, HDFS-14258.2.patch, 
> HDFS-14258.3.patch, HDFS-14258.4.patch, HDFS-14258.5.patch, 
> HDFS-14258.6.patch, HDFS-14258.7.patch, HDFS-14258.8.patch, HDFS-14258.9.patch
>
>
> * Use Java concurrent package to replace current facilities in 
> {{DataXceiverServer}}.
> * A little bit of extra clean up



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1041) Support TDE(Transparent Data Encryption) for Ozone

2019-02-15 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769867#comment-16769867
 ] 

Xiaoyu Yao commented on HDDS-1041:
--

Rebased the patch to trunk.

> Support TDE(Transparent Data Encryption) for Ozone
> --
>
> Key: HDDS-1041
> URL: https://issues.apache.org/jira/browse/HDDS-1041
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1041.001.patch, HDDS-1041.002.patch, 
> HDDS-1041.003.patch, Ozone Encryption At-Rest - V2019.2.7.pdf, Ozone 
> Encryption At-Rest v2019.2.1.pdf
>
>
> Currently Ozone saves data unencrypted on datanodes. This ticket is opened to 
> support TDE (Transparent Data Encryption) for Ozone to meet the requirements 
> of use cases that need protection of sensitive data.
> The table below summarizes the comparison of HDFS TDE and Ozone TDE: 
>  
> |*HDFS*|*Ozone*|
> |Encryption zone created at directory level.
>  All files created within the encryption zone will be encrypted.|Encryption 
> enabled at Bucket level.
>  All objects created within the encrypted bucket will be encrypted.|
> |Encryption zone created with ZK (Zone Key)|Encrypted Bucket created with 
> BEK (Bucket Encryption Key)|
> |Per File Encryption  
>  * File encrypted with DEK (Data Encryption Key)
>  * DEK is encrypted with ZK as EDEK by KMS and persisted as extended 
> attributes.|Per Object Encryption
>  * Object encrypted with DEK (Data Encryption Key)
>  * DEK is encrypted with BEK as EDEK by KMS and persisted as object metadata.|
>  
>  
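
As a rough sketch of the per-object flow in the table above (using Hadoop's KMS 
client extension; the {{kms}} handle, class name, and key name are assumptions, 
not Ozone code):

{code:java}
import java.io.IOException;
import java.security.GeneralSecurityException;
import org.apache.hadoop.crypto.key.KeyProvider.KeyVersion;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.EncryptedKeyVersion;

public class TdeEnvelopeSketch {
  // Sketch: generate a per-object DEK wrapped under the bucket encryption key
  // (BEK), persist the wrapped EDEK with the object metadata, and unwrap it
  // again on read. "bucketEncryptionKey" is an assumed key name.
  static EncryptedKeyVersion wrapDek(KeyProviderCryptoExtension kms)
      throws IOException, GeneralSecurityException {
    return kms.generateEncryptedKey("bucketEncryptionKey");
  }

  static KeyVersion unwrapDek(KeyProviderCryptoExtension kms,
      EncryptedKeyVersion edek) throws IOException, GeneralSecurityException {
    return kms.decryptEncryptedKey(edek);
  }
}
{code}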



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1041) Support TDE(Transparent Data Encryption) for Ozone

2019-02-15 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1041:
-
Attachment: HDDS-1041.003.patch

> Support TDE(Transparent Data Encryption) for Ozone
> --
>
> Key: HDDS-1041
> URL: https://issues.apache.org/jira/browse/HDDS-1041
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1041.001.patch, HDDS-1041.002.patch, 
> HDDS-1041.003.patch, Ozone Encryption At-Rest - V2019.2.7.pdf, Ozone 
> Encryption At-Rest v2019.2.1.pdf
>
>
> Currently Ozone saves data unencrypted on datanodes. This ticket is opened to 
> support TDE (Transparent Data Encryption) for Ozone to meet the requirements 
> of use cases that need protection of sensitive data.
> The table below summarizes the comparison of HDFS TDE and Ozone TDE: 
>  
> |*HDFS*|*Ozone*|
> |Encryption zone created at directory level.
>  All files created within the encryption zone will be encrypted.|Encryption 
> enabled at Bucket level.
>  All objects created within the encrypted bucket will be encrypted.|
> |Encryption zone created with ZK (Zone Key)|Encrypted Bucket created with 
> BEK (Bucket Encryption Key)|
> |Per File Encryption  
>  * File encrypted with DEK (Data Encryption Key)
>  * DEK is encrypted with ZK as EDEK by KMS and persisted as extended 
> attributes.|Per Object Encryption
>  * Object encrypted with DEK (Data Encryption Key)
>  * DEK is encrypted with BEK as EDEK by KMS and persisted as object metadata.|
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14111) hdfsOpenFile on HDFS causes unnecessary IO from file offset 0

2019-02-15 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769858#comment-16769858
 ] 

Sahil Takiar commented on HDFS-14111:
-

I can't attach patches to this JIRA since it's not assigned to me, so I put the 
patch in a gist link: 
https://gist.github.com/sahilTakiar/4e8cbd47beb7324501c371e61e1eb8d7

The patch extends the {{StreamCapabilities}} interface to support 
{{ByteBufferReadable}} and replaces the {{readDirect}} call in {{hdfsOpenFile}} 
with a JNI call to {{StreamCapabilities#hasCapability}}.

If we are OK with this approach, I would like to wait until HDFS-14267 has been 
merged before submitting this patch, so that we can get some test coverage for 
this change.
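
On the Java side, the capability probe presumably looks something like this (a 
sketch; the capability key string is an assumption, not a committed constant):

{code:java}
import org.apache.hadoop.fs.FSDataInputStream;

public class ByteBufferCapabilitySketch {
  // Sketch: instead of issuing a zero-length readDirect() to probe support,
  // ask the stream directly. "in:readbytebuffer" is an assumed name for the
  // new StreamCapabilities entry.
  static boolean supportsByteBufferReads(FSDataInputStream in) {
    return in.hasCapability("in:readbytebuffer");
  }
}
{code}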

> hdfsOpenFile on HDFS causes unnecessary IO from file offset 0
> -
>
> Key: HDFS-14111
> URL: https://issues.apache.org/jira/browse/HDFS-14111
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, libhdfs
>Affects Versions: 3.2.0
>Reporter: Todd Lipcon
>Priority: Major
>
> hdfsOpenFile() calls readDirect() with a 0-length argument in order to check 
> whether the underlying stream supports bytebuffer reads. With DFSInputStream, 
> the read(0) isn't short circuited, and results in the DFSClient opening a 
> block reader. In the case of a remote block, the block reader will actually 
> issue a read of the whole block, causing the datanode to perform unnecessary 
> IO and network transfers in order to fill up the client's TCP buffers. This 
> causes performance degradation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1053) Generate RaftGroupId from OMServiceID

2019-02-15 Thread Aravindan Vijayan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769835#comment-16769835
 ] 

Aravindan Vijayan commented on HDDS-1053:
-

Checkstyle issues are fixed. 

Although there is a unit test failure in 
TestOzoneManagerRatisServer.testSubmitRatisRequest, it is present in the base 
branch. [~hanishakoneru] Do you happen to know if there is a JIRA to track that 
unit test failure? 

> Generate RaftGroupId from OMServiceID
> -
>
> Key: HDDS-1053
> URL: https://issues.apache.org/jira/browse/HDDS-1053
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1053-000.patch, HDDS-1053-001.patch, 
> HDDS-1053-002.patch, HDDS-1053-003.patch, HDDS-1053-004.patch
>
>
> Ratis requires {{RaftGroupId}} to be a UUID. We need to generate this ID from 
> the {{OMServiceID}} so that it is consistent across all the OM nodes in an HA 
> service.
> Currently, we expect {{OMServiceId}} to be a 16-character string so that it 
> can be converted to a UUID. But {{OMServiceID}} is a user-configurable 
> setting; hence we cannot force users to input a 16-character string.
> One option is to hash the {{OMServiceID}} string, then truncate the hash to 
> UUID length and use that to generate the UUID.
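
A minimal sketch of that option, assuming a name-based UUID is acceptable 
({{UUID.nameUUIDFromBytes}} hashes its input with MD5, so every OM node derives 
the same ID from the same {{OMServiceID}}; the class and method names are 
illustrative):

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class RaftGroupIdSketch {
  // Illustration only: derive a deterministic UUID from the configured
  // service ID; the actual patch may differ.
  static UUID fromOmServiceId(String omServiceId) {
    return UUID.nameUUIDFromBytes(omServiceId.getBytes(StandardCharsets.UTF_8));
  }
}
{code}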



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1101) SCM CA: Write Certificate information to SCM Metadata

2019-02-15 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769831#comment-16769831
 ] 

Ajay Kumar commented on HDDS-1101:
--

[~anu] can we add an API to get a stored certificate based on its serial ID? 
[HDDS-1060] will use it to fetch the certificate.

> SCM CA: Write Certificate information to SCM Metadata
> -
>
> Key: HDDS-1101
> URL: https://issues.apache.org/jira/browse/HDDS-1101
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-1101.000.patch, HDDS-1101.001.patch
>
>
> Make SCM CA write to the Metadata layer of SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14284) RBF: Log Router identifier when reporting exceptions

2019-02-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769806#comment-16769806
 ] 

Íñigo Goiri commented on HDFS-14284:


We are getting the remote exception like this:
{code}
Exception in thread "main" 
org.apache.hadoop.ipc.RemoteException(java.io.IOException): No namenode 
available under nameservice BN2
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.shouldRetry(RouterRpcClient.java:309)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invoke(RouterRpcClient.java:464)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invoke(RouterRpcClient.java:471)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:367)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:734)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.getFileInfo(RouterClientProtocol.java:699)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:731)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:881)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2621)
Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): 
Operation category READ is not supported in state standby. Visit 
https://s.apache.org/sbnn-error
at 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:2040)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1449)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3076)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1127)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:881)
{code}

Maybe we can extend RemoteException to include the source (e.g., IP) of the 
exception.
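
Purely as an illustration of that idea (the subclass and field below are 
hypothetical, not existing Hadoop API):

{code:java}
import org.apache.hadoop.ipc.RemoteException;

// Hypothetical: carry the address of the Router/Namenode that produced the
// exception alongside the wrapped class name and message.
public class SourcedRemoteException extends RemoteException {
  private final String sourceAddress; // e.g. "router-host:8888"

  public SourcedRemoteException(String className, String msg,
      String sourceAddress) {
    super(className, msg);
    this.sourceAddress = sourceAddress;
  }

  public String getSourceAddress() {
    return sourceAddress;
  }
}
{code}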

> RBF: Log Router identifier when reporting exceptions
> 
>
> Key: HDFS-14284
> URL: https://issues.apache.org/jira/browse/HDFS-14284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Priority: Major
>
> The typical setup is to use multiple Routers through 
> ConfiguredFailoverProxyProvider.
> In a regular HA Namenode setup, it is easy to know which NN was used.
> However, in RBF, any Router can be the one reporting the exception and it is 
> hard to know which was the one.
> We should have a way to identify which Router/Namenode was the one triggering 
> the exception.
> This would also apply with Observer Namenodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14284) RBF: Log Router identifier when reporting exceptions

2019-02-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769806#comment-16769806
 ] 

Íñigo Goiri edited comment on HDFS-14284 at 2/15/19 10:33 PM:
--

We are getting the remote exception like this:
{code}
Exception in thread "main" 
org.apache.hadoop.ipc.RemoteException(java.io.IOException): No namenode 
available under nameservice ns0
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.shouldRetry(RouterRpcClient.java:309)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invoke(RouterRpcClient.java:464)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invoke(RouterRpcClient.java:471)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:367)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:734)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.getFileInfo(RouterClientProtocol.java:699)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:731)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:881)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2621)
Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): 
Operation category READ is not supported in state standby. Visit 
https://s.apache.org/sbnn-error
at 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:2040)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1449)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3076)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1127)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:881)
{code}

Maybe we can extend RemoteException to include the source (e.g., IP) of the 
exception.


was (Author: elgoiri):
We are getting the remote exception like this:
{code}
Exception in thread "main" 
org.apache.hadoop.ipc.RemoteException(java.io.IOException): No namenode 
available under nameservice BN2
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.shouldRetry(RouterRpcClient.java:309)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invoke(RouterRpcClient.java:464)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invoke(RouterRpcClient.java:471)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:367)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:734)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.getFileInfo(RouterClientProtocol.java:699)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:731)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:881)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2621)
Caused by: 

[jira] [Commented] (HDFS-14111) hdfsOpenFile on HDFS causes unnecessary IO from file offset 0

2019-02-15 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769821#comment-16769821
 ] 

Sahil Takiar commented on HDFS-14111:
-

For Todd's first suggestion: I think something like that already exists - 
{{o.a.h.fs.StreamCapabilities}} seems to provide this functionality (someone 
correct me if I am wrong). Essentially, it defines what interfaces a stream 
implements. Right now it has support for {{Syncable}}, {{CanUnbuffer}}, 
{{CanSetReadahead}}, and {{CanSetDropBehind}}. We would just need to add 
support for {{ByteBufferReadable}} and then libhdfs can call 
{{StreamCapabilities#hasCapability}} to determine if the underlying stream 
supports {{readDirect}}.

If my approach makes sense, I can start working on a patch.

> hdfsOpenFile on HDFS causes unnecessary IO from file offset 0
> -
>
> Key: HDFS-14111
> URL: https://issues.apache.org/jira/browse/HDFS-14111
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, libhdfs
>Affects Versions: 3.2.0
>Reporter: Todd Lipcon
>Priority: Major
>
> hdfsOpenFile() calls readDirect() with a 0-length argument in order to check 
> whether the underlying stream supports bytebuffer reads. With DFSInputStream, 
> the read(0) isn't short circuited, and results in the DFSClient opening a 
> block reader. In the case of a remote block, the block reader will actually 
> issue a read of the whole block, causing the datanode to perform unnecessary 
> IO and network transfers in order to fill up the client's TCP buffers. This 
> causes performance degradation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14258) Introduce Java Concurrent Package To DataXceiverServer Class

2019-02-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769817#comment-16769817
 ] 

Íñigo Goiri commented on HDFS-14258:


We will ignore the checkstyle issue, and I believe the failed unit tests are 
unrelated.
To be safe, [~belugabehr], do you mind verifying the unit tests?
Previous runs don't show the issue, and I don't see anything that would trigger 
this.

> Introduce Java Concurrent Package To DataXceiverServer Class
> 
>
> Key: HDFS-14258
> URL: https://issues.apache.org/jira/browse/HDFS-14258
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14258.1.patch, HDFS-14258.2.patch, 
> HDFS-14258.3.patch, HDFS-14258.4.patch, HDFS-14258.5.patch, 
> HDFS-14258.6.patch, HDFS-14258.7.patch, HDFS-14258.8.patch, HDFS-14258.9.patch
>
>
> * Use Java concurrent package to replace current facilities in 
> {{DataXceiverServer}}.
> * A little bit of extra clean up



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1101) SCM CA: Write Certificate information to SCM Metadata

2019-02-15 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769815#comment-16769815
 ] 

Ajay Kumar commented on HDDS-1101:
--

Had an offline discussion with [~anu]; we can skip the first two comments as 
the SCM ID is already added to the cert DN.

> SCM CA: Write Certificate information to SCM Metadata
> -
>
> Key: HDDS-1101
> URL: https://issues.apache.org/jira/browse/HDDS-1101
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-1101.000.patch, HDDS-1101.001.patch
>
>
> Make SCM CA write to the Metadata layer of SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14284) RBF: Log Router identifier when reporting exceptions

2019-02-15 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769816#comment-16769816
 ] 

Erik Krogen commented on HDFS-14284:


Makes sense to me. IIRC that information may not be available where the 
{{RemoteException}} is created, but if it is, +1.

> RBF: Log Router identifier when reporting exceptions
> 
>
> Key: HDFS-14284
> URL: https://issues.apache.org/jira/browse/HDFS-14284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Priority: Major
>
> The typical setup is to use multiple Routers through 
> ConfiguredFailoverProxyProvider.
> In a regular HA Namenode setup, it is easy to know which NN was used.
> However, in RBF, any Router can be the one reporting the exception and it is 
> hard to know which was the one.
> We should have a way to identify which Router/Namenode was the one triggering 
> the exception.
> This would also apply with Observer Namenodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14285) libhdfs hdfsRead copies entire array even if its only partially filled

2019-02-15 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769801#comment-16769801
 ] 

Sahil Takiar commented on HDFS-14285:
-

Attaching a fix, which is essentially to use the return value of 
{{#read(byte[])}} as the input for {{GetByteArrayRegion}}. Note that 
{{hdfsPread}} (which is very similar to {{hdfsRead}}) already follows the 
behavior defined in this patch.

I would like to wait until HDFS-14267 and HDFS-3246 are merged before 
proceeding with this patch so we can take advantage of the additional pread 
tests added in the aforementioned JIRAs.

> libhdfs hdfsRead copies entire array even if its only partially filled
> --
>
> Key: HDFS-14285
> URL: https://issues.apache.org/jira/browse/HDFS-14285
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, libhdfs, native
>Reporter: Sahil Takiar
>Priority: Major
> Attachments: HDFS-14285.001.patch
>
>
> There is a bug in libhdfs {{hdfsRead}}
> {code:java}
> jthr = invokeMethod(env, &jVal, INSTANCE, jInputStream, HADOOP_ISTRM,
>"read", "([B)I", jbRarray);
> if (jthr) {
> destroyLocalReference(env, jbRarray);
> errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
> "hdfsRead: FSDataInputStream#read");
> return -1;
> }
> if (jVal.i < 0) {
> // EOF
> destroyLocalReference(env, jbRarray);
> return 0;
> } else if (jVal.i == 0) {
> destroyLocalReference(env, jbRarray);
> errno = EINTR;
> return -1;
> }
> (*env)->GetByteArrayRegion(env, jbRarray, 0, noReadBytes, buffer);
> {code}
> The method makes a call to {{FSInputStream#read(byte[])}} to fill in the Java 
> byte array; however, {{#read(byte[])}} is not guaranteed to fill up the 
> entire array; instead it returns the number of bytes written to the array 
> (which could be less than the size of the array). Yet {{GetByteArrayRegion}} 
> decides to copy the entire contents of the {{jbRarray}} into the buffer 
> ({{noReadBytes}} is initialized to the length of the buffer and is never 
> updated). So if {{FSInputStream#read(byte[])}} decides to read less data than 
> the size of the byte array, the call to {{GetByteArrayRegion}} will 
> essentially copy more bytes than necessary.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14285) libhdfs hdfsRead copies entire array even if its only partially filled

2019-02-15 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HDFS-14285:

Attachment: HDFS-14285.001.patch

> libhdfs hdfsRead copies entire array even if its only partially filled
> --
>
> Key: HDFS-14285
> URL: https://issues.apache.org/jira/browse/HDFS-14285
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, libhdfs, native
>Reporter: Sahil Takiar
>Priority: Major
> Attachments: HDFS-14285.001.patch
>
>
> There is a bug in libhdfs {{hdfsRead}}
> {code:java}
> jthr = invokeMethod(env, &jVal, INSTANCE, jInputStream, HADOOP_ISTRM,
>"read", "([B)I", jbRarray);
> if (jthr) {
> destroyLocalReference(env, jbRarray);
> errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
> "hdfsRead: FSDataInputStream#read");
> return -1;
> }
> if (jVal.i < 0) {
> // EOF
> destroyLocalReference(env, jbRarray);
> return 0;
> } else if (jVal.i == 0) {
> destroyLocalReference(env, jbRarray);
> errno = EINTR;
> return -1;
> }
> (*env)->GetByteArrayRegion(env, jbRarray, 0, noReadBytes, buffer);
> {code}
> The method makes a call to {{FSInputStream#read(byte[])}} to fill in the Java 
> byte array; however, {{#read(byte[])}} is not guaranteed to fill up the 
> entire array; instead it returns the number of bytes written to the array 
> (which could be less than the size of the array). Yet {{GetByteArrayRegion}} 
> decides to copy the entire contents of the {{jbRarray}} into the buffer 
> ({{noReadBytes}} is initialized to the length of the buffer and is never 
> updated). So if {{FSInputStream#read(byte[])}} decides to read less data than 
> the size of the byte array, the call to {{GetByteArrayRegion}} will 
> essentially copy more bytes than necessary.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14285) libhdfs hdfsRead copies entire array even if its only partially filled

2019-02-15 Thread Sahil Takiar (JIRA)
Sahil Takiar created HDFS-14285:
---

 Summary: libhdfs hdfsRead copies entire array even if its only 
partially filled
 Key: HDFS-14285
 URL: https://issues.apache.org/jira/browse/HDFS-14285
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client, libhdfs, native
Reporter: Sahil Takiar


There is a bug in libhdfs {{hdfsRead}}
{code:java}
jthr = invokeMethod(env, &jVal, INSTANCE, jInputStream, HADOOP_ISTRM,
   "read", "([B)I", jbRarray);
if (jthr) {
destroyLocalReference(env, jbRarray);
errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
"hdfsRead: FSDataInputStream#read");
return -1;
}
if (jVal.i < 0) {
// EOF
destroyLocalReference(env, jbRarray);
return 0;
} else if (jVal.i == 0) {
destroyLocalReference(env, jbRarray);
errno = EINTR;
return -1;
}
(*env)->GetByteArrayRegion(env, jbRarray, 0, noReadBytes, buffer);
{code}
The method makes a call to {{FSInputStream#read(byte[])}} to fill in the Java 
byte array; however, {{#read(byte[])}} is not guaranteed to fill up the entire 
array; instead it returns the number of bytes written to the array (which could 
be less than the size of the array). Yet {{GetByteArrayRegion}} decides to 
copy the entire contents of the {{jbRarray}} into the buffer ({{noReadBytes}} is 
initialized to the length of the buffer and is never updated). So if 
{{FSInputStream#read(byte[])}} decides to read less data than the size of the 
byte array, the call to {{GetByteArrayRegion}} will essentially copy more bytes 
than necessary.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1101) SCM CA: Write Certificate information to SCM Metadata

2019-02-15 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769790#comment-16769790
 ] 

Ajay Kumar commented on HDDS-1101:
--

[~anu] thanks for the patch. LGTM. A few minor comments:

SCM
* L523 Shall we add the SCM ID as a suffix, i.e. "scm-@hostname"?
* Also, do we need any validation for the hostname used in the subject? 
Ideally, in most cases it will work fine, but it may cause an error in some 
cases when the hostname is not configured properly.
* L538 Shall we add another result code for CA initialization failure? 
Something like "SCM_CA_INITIALIZATION".


Since this seems to be an initial patch, are we planning to add JIRA-specific 
unit tests in a follow-up patch?


> SCM CA: Write Certificate information to SCM Metadata
> -
>
> Key: HDDS-1101
> URL: https://issues.apache.org/jira/browse/HDDS-1101
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-1101.000.patch, HDDS-1101.001.patch
>
>
> Make SCM CA write to the Metadata layer of SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14258) Introduce Java Concurrent Package To DataXceiverServer Class

2019-02-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769795#comment-16769795
 ] 

Hadoop QA commented on HDFS-14258:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs generated 0 new + 
478 unchanged - 4 fixed = 478 total (was 482) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 168 unchanged - 7 fixed = 172 total (was 175) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}114m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}173m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.datanode.TestDataNodeLifeline |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14258 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958924/HDFS-14258.9.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 52b184780f68 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d10444e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | 

[jira] [Updated] (HDDS-1012) Add Default CertificateClient implementation

2019-02-15 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-1012:
---
Fix Version/s: 0.4.0

> Add Default CertificateClient implementation
> 
>
> Key: HDDS-1012
> URL: https://issues.apache.org/jira/browse/HDDS-1012
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Blocker
> Fix For: 0.4.0
>
> Attachments: HDDS-1012.01.patch, HDDS-1012.02.patch, 
> HDDS-1012.03.patch, HDDS-1012.04.patch, HDDS-1012.05.patch, 
> HDDS-1012.06.patch, HDDS-1012.07.patch, HDDS-1012.08.patch, HDDS-1012.09.patch
>
>
> Add Default CertificateClient implementation



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-3246) pRead equivalent for direct read path

2019-02-15 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769788#comment-16769788
 ] 

Sahil Takiar commented on HDFS-3246:


I guess I don't have permissions to attach files to this JIRA, so I've put the 
patch into a gist for reference: 
[https://gist.github.com/sahilTakiar/4c24fb2980e9fa2787f519ea04d1c6d9]

A summary of the changes:
 * Adds a new interface called {{ByteBufferPositionedReadable}} with a single 
method: {{int read(long position, ByteBuffer buf)}} (see the sketch after 
this list)
 * Adds the interface to {{FSDataInputStream}} and {{DFSInputStream}} - 
integration with {{DFSInputStream}} is pretty straightforward because the 
{{pread}} method already uses {{ByteBuffer}}
 * Adds Java unit tests to make sure the new interface works correctly with 
{{DFSInputStream}}
 * Adds integration to libhdfs via a new helper method called {{preadDirect}}, 
which works in a similar manner to {{readDirect}}
 * Adds new unit tests for the changes to libhdfs; however, these changes 
depend on getting HDFS-14267 merged first
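
For reference, a minimal sketch of what the new interface could look like, 
based purely on the summary above (the exact code in the attached patch may 
differ):

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;

/**
 * Sketch of the positioned ByteBuffer read API described above.
 */
public interface ByteBufferPositionedReadable {
  /**
   * Reads up to {@code buf.remaining()} bytes into {@code buf} from the
   * given file position, without moving the stream's current offset.
   *
   * @return the number of bytes read, or -1 at end of stream
   */
  int read(long position, ByteBuffer buf) throws IOException;
}
{code}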

> pRead equivalent for direct read path
> -
>
> Key: HDFS-3246
> URL: https://issues.apache.org/jira/browse/HDFS-3246
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, performance
>Affects Versions: 3.0.0-alpha1
>Reporter: Henry Robinson
>Assignee: Chen Zhang
>Priority: Major
>
> There is no pread equivalent in ByteBufferReadable. We should consider adding 
> one. It would be relatively easy to implement for the distributed case 
> (certainly compared to HDFS-2834), since DFSInputStream does most of the 
> heavy lifting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14268) RBF: Fix the location of the DNs in getDatanodeReport()

2019-02-15 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HDFS-14268:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> RBF: Fix the location of the DNs in getDatanodeReport()
> ---
>
> Key: HDFS-14268
> URL: https://issues.apache.org/jira/browse/HDFS-14268
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14268-HDFS-13891.000.patch, 
> HDFS-14268-HDFS-13891.001.patch, HDFS-14268-HDFS-13891.002.patch, 
> HDFS-14268-HDFS-13891.003.patch, HDFS-14268-HDFS-13891.004.patch
>
>
> When getting all the DNs in the federation, the Router queries each of the 
> subclusters and aggregates the results, assigning the subcluster id to the 
> location. This query uses a {{HashSet}}, which provides a "random" order for 
> the results.
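
A tiny standalone illustration of the ordering issue (plain JDK collections, 
not the Router code itself; a stable order, e.g. via a sorted set, is what a 
deterministic report needs):

{code:java}
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class OrderSketch {
  public static void main(String[] args) {
    List<String> subclusters = Arrays.asList("ns2", "ns0", "ns1");
    // HashSet iterates in hash order, which looks "random" to callers:
    Set<String> unordered = new HashSet<>(subclusters);
    // TreeSet iterates in sorted order, giving a deterministic result:
    Set<String> ordered = new TreeSet<>(subclusters);
    System.out.println(unordered + " vs " + ordered);
  }
}
{code}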



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14268) RBF: Fix the location of the DNs in getDatanodeReport()

2019-02-15 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769589#comment-16769589
 ] 

Giovanni Matteo Fumarola commented on HDFS-14268:
-

Thanks [~elgoiri] for working on this and [~tasanuma0829] for the review.
Committed to the branch.

> RBF: Fix the location of the DNs in getDatanodeReport()
> ---
>
> Key: HDFS-14268
> URL: https://issues.apache.org/jira/browse/HDFS-14268
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14268-HDFS-13891.000.patch, 
> HDFS-14268-HDFS-13891.001.patch, HDFS-14268-HDFS-13891.002.patch, 
> HDFS-14268-HDFS-13891.003.patch, HDFS-14268-HDFS-13891.004.patch
>
>
> When getting all the DNs in the federation, the Router queries each of the 
> subclusters and aggregates the results, assigning the subcluster id to the 
> location. This query uses a {{HashSet}}, which provides a "random" order for 
> the results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13853) RBF: RouterAdmin update cmd is overwriting the entry not updating the existing

2019-02-15 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769785#comment-16769785
 ] 

Ayush Saxena commented on HDFS-13853:
-

Interestingly, if we now start updating rather than overwriting, and we change 
the order while keeping the destinations the same, wouldn't that introduce 
inconsistencies?

> RBF: RouterAdmin update cmd is overwriting the entry not updating the existing
> --
>
> Key: HDFS-13853
> URL: https://issues.apache.org/jira/browse/HDFS-13853
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Major
>
> {code:java}
> // Create a new entry
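> // LinkedHashMap keeps the destinations in the caller's insertion order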
> Map<String, String> destMap = new LinkedHashMap<>();
> for (String ns : nss) {
>   destMap.put(ns, dest);
> }
> MountTable newEntry = MountTable.newInstance(mount, destMap);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-3246) pRead equivalent for direct read path

2019-02-15 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769783#comment-16769783
 ] 

Sahil Takiar commented on HDFS-3246:


[~zhangchen] I actually started working on this last week and have a patch 
tested and ready to go. Will attach it to this JIRA. Are you actively working 
on this as well?

> pRead equivalent for direct read path
> -
>
> Key: HDFS-3246
> URL: https://issues.apache.org/jira/browse/HDFS-3246
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, performance
>Affects Versions: 3.0.0-alpha1
>Reporter: Henry Robinson
>Assignee: Chen Zhang
>Priority: Major
>
> There is no pread equivalent in ByteBufferReadable. We should consider adding 
> one. It would be relatively easy to implement for the distributed case 
> (certainly compared to HDFS-2834), since DFSInputStream does most of the 
> heavy lifting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14249) RBF: Tooling to identify the subcluster location of a file

2019-02-15 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769781#comment-16769781
 ] 

Ayush Saxena commented on HDFS-14249:
-

Thanx [~elgoiri] for taking this up. Seems good to have such an API.

Some comments:
 * Please add the newly added command to the documentation.
 * A comment here explaining the reasoning would be good, i.e. the scenario 
this seems to be handling.

{code:java}
+} catch (Exception e) {
+  LOG.error("Cannot get create location for {}", src);
+  List<RemoteLocation> locs =
+  this.router.getRpcServer().getLocationsForPath(src, false);
+  if (locs != null && !locs.isEmpty()) {
+loc = locs.get(0);
+  }
{code}
Barring file.txt (the first in the test), everything lands in the exception 
path only.
 * A minor doubt: will this work for directories too? I feel it should. If 
that is intended, then rather than sending only one destination we should 
send all of them; and if it is not intended to work for directories, we 
should add a check so that we do not return a single result for them, unless 
such a check is already there and I am missing something.

> RBF: Tooling to identify the subcluster location of a file
> --
>
> Key: HDFS-14249
> URL: https://issues.apache.org/jira/browse/HDFS-14249
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14249-HDFS-13891.000.patch, 
> HDFS-14249-HDFS-13891.001.patch
>
>
> Mount points can spread files across multiple subclusters depending on a 
> policy (e.g., HASH, HASH_ALL). Administrators would need a way to identify 
> the location.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-594) SCM CA: DN sends CSR and uses certificate issued by SCM

2019-02-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769757#comment-16769757
 ] 

Hadoop QA commented on HDDS-594:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m  1s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 59s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
|   | hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-594 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958935/HDDS-594.00.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux c070c85f3821 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / afe126d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2289/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2289/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2289/testReport/ |
| Max. process+thread count | 131 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-hdds/container-service U: hadoop-hdds |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2289/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> SCM CA: DN sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-594
> URL: https://issues.apache.org/jira/browse/HDDS-594
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: 

[jira] [Work logged] (HDDS-1116) Add java profiler servlet to the Ozone web servers

2019-02-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1116?focusedWorklogId=199436=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-199436
 ]

ASF GitHub Bot logged work on HDDS-1116:


Author: ASF GitHub Bot
Created on: 15/Feb/19 21:19
Start Date: 15/Feb/19 21:19
Worklog Time Spent: 10m 
  Work Description: prasanthj commented on pull request #491: HDDS-1116. 
Add java profiler servlet to the Ozone web servers
URL: https://github.com/apache/hadoop/pull/491#discussion_r257394705
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozone/docker-compose.yaml
 ##
 @@ -18,6 +18,7 @@ version: "3"
 services:
datanode:
   image: apache/hadoop-runner
+  privileged: true #required by the profiler
 
 Review comment:
   Sorry, I missed the part that it is running using docker-compose and not 
k8s. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 199436)
Time Spent: 0.5h  (was: 20m)

> Add java profiler servlet to the Ozone web servers
> --
>
> Key: HDDS-1116
> URL: https://issues.apache.org/jira/browse/HDDS-1116
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Thanks to [~gopalv] we learned that [~prasanth_j] implemented a helper 
> servlet in Hive to initialize new [async 
> profiler|https://github.com/jvm-profiling-tools/async-profiler] sessions and 
> provide the svg based flame graph over HTTP. (see HIVE-20202)
> It seems very useful, as with this approach profiling becomes very easy.
> This patch imports the servlet from the Hive code base to the Ozone code base 
> with minor modifications (to make it work with our servlet containers):
>  * The two servlets are unified to one
>  * Streaming the svg to the browser based on IOUtils.copy 
>  * Output message is improved
> By default the profile servlet is turned off, but you can enable it with the 
> 'hdds.profiler.endpoint.enabled=true' setting in ozone-site.xml. In that case 
> you can access the /prof endpoint from scm, om and s3g. 
> You should upload the async profiler first 
> (https://github.com/jvm-profiling-tools/async-profiler) and set the 
> ASYNC_PROFILER_HOME environment variable so the servlet can find it. 
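
For reference, the enabling switch spelled out as an ozone-site.xml entry 
(this simply restates the property named in the description above):

{code:xml}
<property>
  <name>hdds.profiler.endpoint.enabled</name>
  <value>true</value>
</property>
{code}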



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1116) Add java profiler servlet to the Ozone web servers

2019-02-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1116?focusedWorklogId=199435=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-199435
 ]

ASF GitHub Bot logged work on HDDS-1116:


Author: ASF GitHub Bot
Created on: 15/Feb/19 21:18
Start Date: 15/Feb/19 21:18
Worklog Time Spent: 10m 
  Work Description: prasanthj commented on pull request #491: HDDS-1116. 
Add java profiler servlet to the Ozone web servers
URL: https://github.com/apache/hadoop/pull/491#discussion_r257394453
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozone/docker-compose.yaml
 ##
 @@ -18,6 +18,7 @@ version: "3"
 services:
datanode:
   image: apache/hadoop-runner
+  privileged: true #required by the profiler
 
 Review comment:
  This can be avoided with an initContainer running in privileged mode that 
updates the following:
  ```
  sudo bash -c 'echo 1 > /proc/sys/kernel/perf_event_paranoid'
  sudo bash -c 'echo 0 > /proc/sys/kernel/kptr_restrict'
  ```
  With this, the initContainer will apply the required changes and complete, 
and the main container can still run in non-privileged mode. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 199435)
Time Spent: 20m  (was: 10m)

> Add java profiler servlet to the Ozone web servers
> --
>
> Key: HDDS-1116
> URL: https://issues.apache.org/jira/browse/HDDS-1116
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Thanks to [~gopalv] we learned that [~prasanth_j] implemented a helper 
> servlet in Hive to initialize new [async 
> profiler|https://github.com/jvm-profiling-tools/async-profiler] sessions and 
> provide the svg based flame graph over HTTP. (see HIVE-20202)
> It seems very useful, as with this approach profiling becomes very easy.
> This patch imports the servlet from the Hive code base to the Ozone code base 
> with minor modifications (to make it work with our servlet containers):
>  * The two servlets are unified to one
>  * Streaming the svg to the browser based on IOUtils.copy 
>  * Output message is improved
> By default the profile servlet is turned off, but you can enable it with the 
> 'hdds.profiler.endpoint.enabled=true' setting in ozone-site.xml. In that case 
> you can access the /prof endpoint from scm, om and s3g. 
> You should upload the async profiler first 
> (https://github.com/jvm-profiling-tools/async-profiler) and set the 
> ASYNC_PROFILER_HOME environment variable so the servlet can find it. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14284) RBF: Log Router identifier when reporting exceptions

2019-02-15 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769714#comment-16769714
 ] 

Erik Krogen commented on HDFS-14284:


Interesting problem. Can you maybe post a sample stack trace now, and what you 
hope for it to look like in the future? In particular, I am curious if you're 
thinking specifically of a {{RemoteException}}, or something else. The 
{{RemoteException}} may be a good place to store such information. It wouldn't 
catch IO exceptions, but I think these typically log their destination address 
anyhow.

Agreed that something like this can be useful for Observer Nodes as well.

> RBF: Log Router identifier when reporting exceptions
> 
>
> Key: HDFS-14284
> URL: https://issues.apache.org/jira/browse/HDFS-14284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Priority: Major
>
> The typical setup is to use multiple Routers through 
> ConfiguredFailoverProxyProvider.
> In a regular HA Namenode setup, it is easy to know which NN was used.
> However, in RBF, any Router can be the one reporting the exception and it is 
> hard to know which one it was.
> We should have a way to identify which Router/Namenode was the one triggering 
> the exception.
> This would also apply to Observer Namenodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-594) SCM CA: DN sends CSR and uses certificate issued by SCM

2019-02-15 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-594:

Status: Patch Available  (was: Open)

> SCM CA: DN sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-594
> URL: https://issues.apache.org/jira/browse/HDDS-594
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-594.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-594) SCM CA: DN sends CSR and uses certificate issued by SCM

2019-02-15 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-594:

Attachment: HDDS-594.00.patch

> SCM CA: DN sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-594
> URL: https://issues.apache.org/jira/browse/HDDS-594
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-594.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1041) Support TDE(Transparent Data Encryption) for Ozone

2019-02-15 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769719#comment-16769719
 ] 

Anu Engineer commented on HDDS-1041:


Sorry, I got to this so late. This does not apply any more; can you please 
rebase it when you get a chance?

 

> Support TDE(Transparent Data Encryption) for Ozone
> --
>
> Key: HDDS-1041
> URL: https://issues.apache.org/jira/browse/HDDS-1041
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1041.001.patch, HDDS-1041.002.patch, Ozone 
> Encryption At-Rest - V2019.2.7.pdf, Ozone Encryption At-Rest v2019.2.1.pdf
>
>
> Currently Ozone saves data unencrypted on the datanodes; this ticket is opened 
> to support TDE (Transparent Data Encryption) for Ozone to meet the requirements 
> of use cases that need protection of sensitive data.
> The table below summarizes the comparison of HDFS TDE and Ozone TDE: 
>  
> |*HDFS*|*Ozone*|
> |Encryption zone created at directory level.
>  All files created within the encryption zone will be encrypted.|Encryption 
> enabled at Bucket level.
>  All objects created within the encrypted bucket will be encrypted.|
> |Encryption zone created with ZK(Zone Key)|Encrypted Bucket created with 
> BEK(Bucket Encryption Key)|
> |Per File Encryption  
>  * File encrypted with DEK(Data Encryption Key)
>  * DEK is encrypted with ZK as EDEK by KMS and persisted as extended 
> attributes.|Per Object Encryption
>  * Object encrypted with DEK(Data Encryption Key)
>  * DEK is encrypted with BEK as EDEK by KMS and persisted as object metadata.|
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1114) Fix findbugs/checkstyle/acceptance errors in Ozone

2019-02-15 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1114:
---
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

[~bharatviswa] Thanks for the commit. [~elek] Thanks for the contribution. This 
patch has been committed to the trunk.

> Fix findbugs/checkstyle/acceptance errors in Ozone
> ---
>
> Key: HDDS-1114
> URL: https://issues.apache.org/jira/browse/HDDS-1114
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Unfortunately, as the previous two big commits (error handling HDDS-1068, 
> checkstyle HDDS-1103) were committed at the same time, a few new errors were 
> introduced during the rebase.
> This patch will fix the remaining 5 issues (+ a typo in the acceptance test 
> executor) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1085) Create an OM API to serve snapshots to Recon server

2019-02-15 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1085:
---
Summary: Create an OM API to serve snapshots to Recon server  (was: Create 
an OM API to serve snapshots to FSCK server)

> Create an OM API to serve snapshots to Recon server
> ---
>
> Key: HDDS-1085
> URL: https://issues.apache.org/jira/browse/HDDS-1085
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siddharth Wagle
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1085-000.patch, HDDS-1085-001.patch, 
> HDDS-1085-002.patch
>
>
> We need to add an API to OM so that we can serve snapshots from the OM server.
>  - The snapshot should be streamed to fsck server with the ability to 
> throttle network utilization (like TransferFsImage)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1060) Token: Add api to get OM certificate from SCM

2019-02-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769701#comment-16769701
 ] 

Hadoop QA commented on HDDS-1060:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} root: The patch generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 14s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 16s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
|   | hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1060 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958932/HDDS-1060.03.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 819f5475242b 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / afe126d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2288/artifact/out/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2288/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2288/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2288/testReport/ |
| Max. process+thread count | 113 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/client hadoop-hdds/common hadoop-hdds/server-scm 
hadoop-ozone/integration-test U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2288/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Token: Add api to get OM certificate from SCM
> -
>
> Key: HDDS-1060
> URL: https://issues.apache.org/jira/browse/HDDS-1060
> 

[jira] [Commented] (HDFS-14284) RBF: Log Router identifier when reporting exceptions

2019-02-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769697#comment-16769697
 ] 

Íñigo Goiri commented on HDFS-14284:


One of the Routers is triggering some exceptions and it is very hard to know 
which of the Routers (currently 16) to check for more detailed logs.
The easy solution is to add the Router identifier to the exception that we 
throw from the Router.
However, this might be a common scenario in general and we may want to also 
change the ConfiguredFailoverProxyProvider to identify the source of the 
exception.
I think this might be similar for the Observer Namenodes if we have multiple of 
them.
[~xkrogen], [~csun], any thoughts on doing this generically?

In any case, I think we should add this to the Router side too.

> RBF: Log Router identifier when reporting exceptions
> 
>
> Key: HDFS-14284
> URL: https://issues.apache.org/jira/browse/HDFS-14284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Priority: Major
>
> The typical setup is to use multiple Routers through 
> ConfiguredFailoverProxyProvider.
> In a regular HA Namenode setup, it is easy to know which NN was used.
> However, in RBF, any Router can be the one reporting the exception and it is 
> hard to know which one it was.
> We should have a way to identify which Router/Namenode was the one triggering 
> the exception.
> This would also apply to Observer Namenodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1053) Generate RaftGroupId from OMServiceID

2019-02-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769695#comment-16769695
 ] 

Hadoop QA commented on HDDS-1053:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 51s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 51s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
|   | hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1053 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958933/HDDS-1053-004.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux e2e674c1ad5c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / afe126d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2287/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2287/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2287/testReport/ |
| Max. process+thread count | 133 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-ozone/ozone-manager U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2287/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Generate RaftGroupId from OMServiceID
> -
>
> Key: HDDS-1053
> URL: https://issues.apache.org/jira/browse/HDDS-1053
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1053-000.patch, HDDS-1053-001.patch, 

[jira] [Created] (HDFS-14284) RBF: Log Router identifier when reporting exceptions

2019-02-15 Thread JIRA
Íñigo Goiri created HDFS-14284:
--

 Summary: RBF: Log Router identifier when reporting exceptions
 Key: HDFS-14284
 URL: https://issues.apache.org/jira/browse/HDFS-14284
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Íñigo Goiri


The typical setup is to use multiple Routers through 
ConfiguredFailoverProxyProvider.
In a regular HA Namenode setup, it is easy to know which NN was used.
However, in RBF, any Router can be the one reporting the exception and it is 
hard to know which one it was.
We should have a way to identify which Router/Namenode was the one triggering 
the exception.
This would also apply to Observer Namenodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1116) Add java profiler servlet to the Ozone web servers

2019-02-15 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769683#comment-16769683
 ] 

Jitendra Nath Pandey commented on HDDS-1116:


Excellent!

> Add java profiler servlet to the Ozone web servers
> --
>
> Key: HDDS-1116
> URL: https://issues.apache.org/jira/browse/HDDS-1116
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Thanks to [~gopalv] we learned that [~prasanth_j] implemented a helper 
> servlet in Hive to initialize new [async 
> profiler|https://github.com/jvm-profiling-tools/async-profiler] sessions and 
> provide the svg based flame graph over HTTP. (see HIVE-20202)
> It seems very useful, as with this approach profiling becomes very easy.
> This patch imports the servlet from the Hive code base to the Ozone code base 
> with minor modifications (to make it work with our servlet containers):
>  * The two servlets are unified to one
>  * Streaming the svg to the browser based on IOUtils.copy 
>  * Output message is improved
> By default the profile servlet is turned off, but you can enable it with the 
> 'hdds.profiler.endpoint.enabled=true' setting in ozone-site.xml. In that case 
> you can access the /prof endpoint from scm, om and s3g. 
> You should upload the async profiler first 
> (https://github.com/jvm-profiling-tools/async-profiler) and set the 
> ASYNC_PROFILER_HOME environment variable so the servlet can find it. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14118) Use DNS to resolve Namenodes and Routers

2019-02-15 Thread Fengnan Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769687#comment-16769687
 ] 

Fengnan Li commented on HDFS-14118:
---

[~elgoiri] Can you find someone to take a look at this patch? Thanks a lot! I 
will keep trying to find someone as well.

> Use DNS to resolve Namenodes and Routers
> 
>
> Key: HDFS-14118
> URL: https://issues.apache.org/jira/browse/HDFS-14118
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: DNS testing log, HDFS-14118.001.patch, 
> HDFS-14118.002.patch, HDFS-14118.003.patch, HDFS-14118.004.patch, 
> HDFS-14118.005.patch, HDFS-14118.006.patch, HDFS-14118.007.patch, 
> HDFS-14118.008.patch, HDFS-14118.009.patch, HDFS-14118.010.patch, 
> HDFS-14118.011.patch, HDFS-14118.012.patch, HDFS-14118.013.patch, 
> HDFS-14118.014.patch, HDFS-14118.015.patch, HDFS-14118.016.patch, 
> HDFS-14118.017.patch, HDFS-14118.018.patch, HDFS-14118.019.patch, 
> HDFS-14118.patch
>
>
> Clients will need to know about the routers to talk to the HDFS cluster 
> (obviously), and updating the set of routers (adding/removing) would force a 
> change on every client, which is a painful process.
> DNS can be used here to resolve the single domain name clients know to the 
> list of routers in the current config. However, DNS alone cannot restrict 
> resolution to only the working routers based on certain health thresholds.
> There are a few ways this can be solved. One way is to have a separate script 
> regularly check the status of each router and update the DNS records when a 
> router fails the health thresholds; security would have to be carefully 
> considered for this approach. Another way is to have the client do the normal 
> connecting/failover after it gets the list of routers, which requires changing 
> the current failover proxy provider.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1053) Generate RaftGroupId from OMServiceID

2019-02-15 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1053:

Attachment: HDDS-1053-004.patch
Status: Patch Available  (was: Open)

> Generate RaftGroupId from OMServiceID
> -
>
> Key: HDDS-1053
> URL: https://issues.apache.org/jira/browse/HDDS-1053
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1053-000.patch, HDDS-1053-001.patch, 
> HDDS-1053-002.patch, HDDS-1053-003.patch, HDDS-1053-004.patch
>
>
> Ratis requires {{RaftGroupId}} to be a UUID. We need to generate this ID from 
> the {{OMServiceID}} so that it is consistent across all the OM nodes in a HA 
> service.
> Currently, we expect {{OMServiceId}} to be a 16 character string so that it 
> can be converted to a UUID. But {{OMServiceID}} is a user configurable 
> setting. Hence we cannot force users to input a 16 character string.
> One option is to hash the {{OMServiceID}} string, truncate it to UUID 
> length, and use that to generate the UUID.
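
One concrete way this could be done (a sketch only, assuming the JDK's 
built-in name-based UUID, which MD5-hashes its input; the committed patch 
may use a different scheme):

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.UUID;

public final class RaftGroupIdSketch {
  /**
   * Derives a deterministic UUID from an arbitrary-length service id, so
   * every OM node in the HA service computes the same RaftGroupId.
   */
  public static UUID fromOmServiceId(String omServiceId) {
    // nameUUIDFromBytes hashes the bytes and formats the 128-bit digest
    // as a version-3 (name-based) UUID.
    return UUID.nameUUIDFromBytes(
        omServiceId.getBytes(StandardCharsets.UTF_8));
  }
}
{code}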



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14258) Introduce Java Concurrent Package To DataXceiverServer Class

2019-02-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769678#comment-16769678
 ] 

Íñigo Goiri commented on HDFS-14258:


[^HDFS-14258.9.patch] LGTM.
+1
I'll commit once Yetus comes back.

> Introduce Java Concurrent Package To DataXceiverServer Class
> 
>
> Key: HDFS-14258
> URL: https://issues.apache.org/jira/browse/HDFS-14258
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14258.1.patch, HDFS-14258.2.patch, 
> HDFS-14258.3.patch, HDFS-14258.4.patch, HDFS-14258.5.patch, 
> HDFS-14258.6.patch, HDFS-14258.7.patch, HDFS-14258.8.patch, HDFS-14258.9.patch
>
>
> * Use Java concurrent package to replace current facilities in 
> {{DataXceiverServer}}.
> * A little bit of extra clean up



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1053) Generate RaftGroupId from OMServiceID

2019-02-15 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1053:

Status: Open  (was: Patch Available)

> Generate RaftGroupId from OMServiceID
> -
>
> Key: HDDS-1053
> URL: https://issues.apache.org/jira/browse/HDDS-1053
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1053-000.patch, HDDS-1053-001.patch, 
> HDDS-1053-002.patch, HDDS-1053-003.patch, HDDS-1053-004.patch
>
>
> Ratis requires {{RaftGroupId}} to be a UUID. We need to generate this ID from 
> the {{OMServiceID}} so that it is consistent across all the OM nodes in a HA 
> service.
> Currently, we expect {{OMServiceId}} to be a 16 character string so that it 
> can be converted to a UUID. But {{OMServiceID}} is a user configurable 
> setting. Hence we cannot force users to input a 16 character string.
> One option is to hash the {{OMServiceID}} string, truncate it to UUID 
> length, and use that to generate the UUID.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1114) Fix findbugs/checkstyle/acceptance errors in Ozone

2019-02-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769674#comment-16769674
 ] 

Hudson commented on HDDS-1114:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15980 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15980/])
HDDS-1114. Fix findbugs/checkstyle/acceptance errors in Ozone. (bharat: rev 
afe126d71f3f643d69626eb385b6d5491c1a3f86)
* (edit) hadoop-ozone/dist/src/main/smoketest/test.sh
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestRootList.java
* (edit) hadoop-ozone/dist/src/main/smoketest/ozonefs/ozonefs.robot
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocolPB/OzoneManagerProtocolClientSideTranslatorPB.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java


> Fix findbugs/checkstyle/acceptance errors in Ozone
> ---
>
> Key: HDDS-1114
> URL: https://issues.apache.org/jira/browse/HDDS-1114
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Unfortunately, as the previous two big commits (error handling HDDS-1068, 
> checkstyle HDDS-1103) were committed at the same time, a few new errors were 
> introduced during the rebase.
> This patch will fix the remaining 5 issues (+ a typo in the acceptance test 
> executor) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-594) SCM CA: DN sends CSR and uses certificate issued by SCM

2019-02-15 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reassigned HDDS-594:
---

Assignee: Ajay Kumar  (was: Xiaoyu Yao)

> SCM CA: DN sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-594
> URL: https://issues.apache.org/jira/browse/HDDS-594
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1120) Add a config to disable checksum verification during read even though checksum data is present in the persisted data

2019-02-15 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-1120:
--
Summary: Add a config to disable checksum verification during read even 
though checksum data is present in the persisted data  (was: Add a config to 
disable checksum verification during read if though checksum data is present in 
the persisted data)

> Add a config to disable checksum verification during read even though 
> checksum data is present in the persisted data
> 
>
> Key: HDDS-1120
> URL: https://issues.apache.org/jira/browse/HDDS-1120
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
>
> Currently, if the checksum is computed during data write and persisted in the 
> disk, we will always end up verifying it while reading. This Jira aims to 
> allow selectively disabling checksum verification during reads even though 
> checksum info is present in the stored data.
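
If this lands, a client read path could gate verification on the new setting 
roughly like this (a sketch only; the property name below is a placeholder, 
not the key the patch will define):

{code:java}
import org.apache.hadoop.conf.Configuration;

public final class ChecksumFlagSketch {
  /**
   * Returns whether the client should verify checksums on read.
   * Defaults to true: keep verifying unless explicitly disabled.
   */
  public static boolean shouldVerify(Configuration conf) {
    // "ozone.client.verify.checksum" is a hypothetical key for illustration.
    return conf.getBoolean("ozone.client.verify.checksum", true);
  }
}
{code}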



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1060) Token: Add api to get OM certificate from SCM

2019-02-15 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769669#comment-16769669
 ] 

Ajay Kumar commented on HDDS-1060:
--

Patch v3 attached, with a few minor fixes in the javadoc.

> Token: Add api to get OM certificate from SCM
> -
>
> Key: HDDS-1060
> URL: https://issues.apache.org/jira/browse/HDDS-1060
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Blocker, Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1060.00.patch, HDDS-1060.01.patch, 
> HDDS-1060.02.patch, HDDS-1060.03.patch
>
>
> Datanodes/OM need the OM certificate to validate block tokens and delegation 
> tokens. 
> Add API for:
> 1. getCertificate(String certSerialId): To get certificate from SCM based on 
> certificate serial id.
> 2. getCACertificate(): To get CA certificate.
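
A minimal sketch of the two calls as described above (the interface name and 
return/exception types here are assumptions for illustration; the actual SCM 
security protocol in the patch may differ):

{code:java}
import java.io.IOException;

/** Sketch of the certificate-lookup API described in this issue. */
public interface ScmCertificateApiSketch {
  /**
   * Returns the PEM-encoded certificate with the given serial id, e.g. the
   * OM certificate a datanode needs to validate block and delegation tokens.
   */
  String getCertificate(String certSerialId) throws IOException;

  /** Returns the PEM-encoded CA certificate of the SCM. */
  String getCACertificate() throws IOException;
}
{code}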



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1060) Token: Add api to get OM certificate from SCM

2019-02-15 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1060:
-
Attachment: HDDS-1060.03.patch

> Token: Add api to get OM certificate from SCM
> -
>
> Key: HDDS-1060
> URL: https://issues.apache.org/jira/browse/HDDS-1060
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Blocker, Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1060.00.patch, HDDS-1060.01.patch, 
> HDDS-1060.02.patch, HDDS-1060.03.patch
>
>
> Datanodes/OM need the OM certificate to validate block tokens and delegation 
> tokens. 
> Add API for:
> 1. getCertificate(String certSerialId): To get certificate from SCM based on 
> certificate serial id.
> 2. getCACertificate(): To get CA certificate.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1053) Generate RaftGroupId from OMServiceID

2019-02-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769665#comment-16769665
 ] 

Hadoop QA commented on HDDS-1053:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 42s{color} | {color:orange} root: The patch generated 3 new + 0 unchanged - 
0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 59s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m  2s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
|   | hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1053 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958928/HDDS-1053-003.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux d3374ed6fe18 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 217bdbd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2286/artifact/out/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2286/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2286/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2286/testReport/ |
| Max. process+thread count | 135 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-ozone/ozone-manager U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2286/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Generate RaftGroupId from OMServiceID
> -
>
> Key: HDDS-1053
> URL: https://issues.apache.org/jira/browse/HDDS-1053
> Project: Hadoop Distributed Data Store
>  Issue Type: 

[jira] [Created] (HDDS-1120) Add a config to disable checksum verification during read even though checksum data is present in the persisted data

2019-02-15 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-1120:
-

 Summary: Add a config to disable checksum verification during read 
even though checksum data is present in the persisted data
 Key: HDDS-1120
 URL: https://issues.apache.org/jira/browse/HDDS-1120
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Client
Affects Versions: 0.4.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.4.0


Currently, if the checksum is computed during a data write and persisted to 
disk, we always end up verifying it while reading. This Jira aims to add a 
config to selectively disable checksum verification during reads even though 
checksum info is present in the stored data.
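
As a rough sketch of the intent, the read path could consult a boolean key 
before recomputing checksums; the key and class names here are hypothetical 
placeholders, not the actual patch:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch: decide once, from config, whether read-side
// checksum verification should run even when the persisted chunks
// carry checksum data. The key name below is illustrative only.
public final class ChecksumReadPolicy {

  public static final String OZONE_CLIENT_VERIFY_CHECKSUM =
      "ozone.client.verify.checksum";

  private ChecksumReadPolicy() { }

  public static boolean shouldVerify(Configuration conf) {
    // Default true: keep the current behaviour of always verifying.
    return conf.getBoolean(OZONE_CLIENT_VERIFY_CHECKSUM, true);
  }
}
{code}

A reader would then call shouldVerify(conf) once per stream and skip the 
per-chunk verification when it returns false.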



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1116) Add java profiler servlet to the Ozone web servers

2019-02-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769662#comment-16769662
 ] 

Hudson commented on HDDS-1116:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15979 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15979/])
HDDS-1116.Add java profiler servlet to the Ozone web servers. (aengineer: rev 
217bdbd940a96986df3b96899b43caae2b5a9ed2)
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml
* (add) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ProfileServlet.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
* (edit) hadoop-ozone/dist/src/main/compose/ozone/docker-compose.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozone/docker-config
* (edit) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
* (edit) hadoop-ozone/common/src/main/bin/ozone


> Add java profiler servlet to the Ozone web servers
> --
>
> Key: HDDS-1116
> URL: https://issues.apache.org/jira/browse/HDDS-1116
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Thanks to [~gopalv] we learned that [~prasanth_j] implemented a helper 
> servlet in Hive to initialize new [async 
> profiler|https://github.com/jvm-profiling-tools/async-profiler] sessions and 
> provide the svg-based flame graph over HTTP. (see HIVE-20202)
> It seems very useful, as this approach makes profiling very easy.
> This patch imports the servlet from the Hive code base to the Ozone code base 
> with minor modifications (to make it work with our servlet containers):
>  * The two servlets are unified into one
>  * The svg is streamed to the browser based on IOUtils.copy 
>  * The output message is improved
> By default the profiler servlet is turned off, but you can enable it with 
> the 'hdds.profiler.endpoint.enabled=true' setting in ozone-site.xml. In that 
> case you can access the /prof endpoint on SCM, OM and S3G. 
> You should install the async profiler first 
> (https://github.com/jvm-profiling-tools/async-profiler) and set the 
> ASYNC_PROFILER_HOME environment variable so it can be found. 
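
As a sketch of how such a flag is typically consumed (the constant is the key 
named above; the helper class and the ASYNC_PROFILER_HOME check are 
assumptions for illustration, not the committed code):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Illustrative sketch only: gate the profiler servlet on the config
// key from this issue plus the environment variable it documents.
public final class ProfilerEndpointGate {

  static final String HDDS_PROFILER_ENDPOINT_ENABLED =
      "hdds.profiler.endpoint.enabled";

  private ProfilerEndpointGate() { }

  static boolean profilerEnabled(Configuration conf) {
    // Off by default; also require the async profiler install dir.
    return conf.getBoolean(HDDS_PROFILER_ENDPOINT_ENABLED, false)
        && System.getenv("ASYNC_PROFILER_HOME") != null;
  }
}
{code}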



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14230) RBF: Throw RetriableException instead of IOException when no namenodes available

2019-02-15 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14230:
---
Issue Type: Sub-task  (was: Bug)
Parent: HDFS-13891

> RBF: Throw RetriableException instead of IOException when no namenodes 
> available
> 
>
> Key: HDFS-14230
> URL: https://issues.apache.org/jira/browse/HDFS-14230
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.2.0, 3.1.1, 2.9.2, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14230-HDFS-13891.001.patch, 
> HDFS-14230-HDFS-13891.002.patch, HDFS-14230-HDFS-13891.003.patch, 
> HDFS-14230-HDFS-13891.004.patch, HDFS-14230-HDFS-13891.005.patch, 
> HDFS-14230-HDFS-13891.006.patch
>
>
> Failover usually happens when upgrading namenodes. For a few seconds there 
> are no active namenodes, and accessing HDFS through the router fails during 
> that window. This can make jobs fail or hang. Some hive job logs are as 
> follows:
> {code:java}
> 2019-01-03 16:12:08,337 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 
> 133.33 sec
> MapReduce Total cumulative CPU time: 2 minutes 13 seconds 330 msec
> Ended Job = job_1542178952162_24411913
> Launching Job 4 out of 6
> Exception in thread "Thread-86" java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): No namenode 
> available under nameservice Cluster3
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.shouldRetry(RouterRpcClient.java:328)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invoke(RouterRpcClient.java:488)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invoke(RouterRpcClient.java:495)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:385)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:760)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category READ is not supported in state standby
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1804)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1338)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3925)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1014)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
> {code}
> Digging into the code: maybe we can throw StandbyException when no namenodes 
> are available. The client will 
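
For context, the proposed change amounts to something like the following in 
the router's no-namenode path; org.apache.hadoop.ipc.RetriableException is a 
real class, but the surrounding method is a simplified sketch, not the 
attached patch:

{code:java}
import java.io.IOException;
import org.apache.hadoop.ipc.RetriableException;

// Sketch: tell clients to retry instead of failing hard while the
// router momentarily sees no active namenode during a failover.
public class NoNamenodeSketch {

  void checkNamenodeAvailable(String nsId, boolean anyActive)
      throws IOException {
    if (!anyActive) {
      // RetriableException extends IOException, so existing callers
      // still compile; retry-aware clients back off and try again.
      throw new RetriableException(
          "No namenode available under nameservice " + nsId);
    }
  }
}
{code}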

[jira] [Work logged] (HDDS-1114) Fix findbugs/checkstyle/acceptance errors in Ozone

2019-02-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1114?focusedWorklogId=199399=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-199399
 ]

ASF GitHub Bot logged work on HDDS-1114:


Author: ASF GitHub Bot
Created on: 15/Feb/19 19:49
Start Date: 15/Feb/19 19:49
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #488: 
HDDS-1114. Fix findbugs/checkstyle/acceptance errors in Ozone
URL: https://github.com/apache/hadoop/pull/488
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 199399)
Time Spent: 0.5h  (was: 20m)

> Fix findbugs/checkstyle/acceptance errors in Ozone
> ---
>
> Key: HDDS-1114
> URL: https://issues.apache.org/jira/browse/HDDS-1114
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Unfortunately, as the previous two big commits (error handling HDDS-1068, 
> checkstyle HDDS-1103) were committed at the same time, a few new errors were 
> introduced during the rebase.
> This patch will fix the remaining 5 issues (+ a typo in the acceptance test 
> executor) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1114) Fix findbugs/checkstyle/acceptance errors in Ozone

2019-02-15 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769654#comment-16769654
 ] 

Anu Engineer commented on HDDS-1114:


+1. Thanks for fixing this.

> Fix findbugs/checkstyle/acceptance errors in Ozone
> ---
>
> Key: HDDS-1114
> URL: https://issues.apache.org/jira/browse/HDDS-1114
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Unfortunately, as the previous two big commits (error handling HDDS-1068, 
> checkstyle HDDS-1103) were committed at the same time, a few new errors were 
> introduced during the rebase.
> This patch will fix the remaining 5 issues (+ a typo in the acceptance test 
> executor) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13158) Fix Spelling Mistakes - DECOMISSIONED

2019-02-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769649#comment-16769649
 ] 

Hadoop QA commented on HDFS-13158:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 25s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 15s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-fs2img in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
39s{color} | {color:green} hadoop-yarn-ui in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}273m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |

[jira] [Updated] (HDDS-1116) Add java profiler servlet to the Ozone web servers

2019-02-15 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1116:
---
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

[~elek] Thank you for the contribution. I have committed this to the trunk 
branch. Btw, it might be good to add this to the documentation on the 
developer page. Thanks

> Add java profiler servlet to the Ozone web servers
> --
>
> Key: HDDS-1116
> URL: https://issues.apache.org/jira/browse/HDDS-1116
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Thanks to [~gopalv] we learned that [~prasanth_j] implemented a helper 
> servlet in Hive to initialize new [async 
> profiler|https://github.com/jvm-profiling-tools/async-profiler] sessions and 
> provide the svg-based flame graph over HTTP. (see HIVE-20202)
> It seems very useful, as this approach makes profiling very easy.
> This patch imports the servlet from the Hive code base to the Ozone code base 
> with minor modifications (to make it work with our servlet containers):
>  * The two servlets are unified into one
>  * The svg is streamed to the browser based on IOUtils.copy 
>  * The output message is improved
> By default the profiler servlet is turned off, but you can enable it with 
> the 'hdds.profiler.endpoint.enabled=true' setting in ozone-site.xml. In that 
> case you can access the /prof endpoint on SCM, OM and S3G. 
> You should install the async profiler first 
> (https://github.com/jvm-profiling-tools/async-profiler) and set the 
> ASYNC_PROFILER_HOME environment variable so it can be found. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1053) Generate RaftGroupId from OMServiceID

2019-02-15 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769639#comment-16769639
 ] 

Arpit Agarwal commented on HDDS-1053:
-

Thanks [~avijayan]! +1 pending Jenkins for the v3 patch.

> Generate RaftGroupId from OMServiceID
> -
>
> Key: HDDS-1053
> URL: https://issues.apache.org/jira/browse/HDDS-1053
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1053-000.patch, HDDS-1053-001.patch, 
> HDDS-1053-002.patch, HDDS-1053-003.patch
>
>
> Ratis requires {{RaftGroupId}} to be a UUID. We need to generate this ID from 
> the {{OMServiceID}} so that it is consistent across all the OM nodes in an HA 
> service.
> Currently, we expect {{OMServiceId}} to be a 16-character string so that it 
> can be converted to a UUID. But {{OMServiceID}} is a user-configurable 
> setting, hence we cannot force users to input a 16-character string.
> One option is to hash the {{OMServiceID}} string, truncate it to UUID 
> length, and use that to generate the UUID.
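
One concrete way to realize the hash-and-truncate idea with a standard JDK 
API (a sketch, not necessarily what the attached patches do) is 
java.util.UUID#nameUUIDFromBytes, which already derives a deterministic, 
MD5-based UUID from arbitrary bytes:

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.UUID;

public final class RaftGroupIdSketch {

  private RaftGroupIdSketch() { }

  // Deterministic: every OM node configured with the same
  // OMServiceID computes the same UUID, whatever its length.
  public static UUID raftGroupUuid(String omServiceId) {
    return UUID.nameUUIDFromBytes(
        omServiceId.getBytes(StandardCharsets.UTF_8));
  }
}
{code}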



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1053) Generate RaftGroupId from OMServiceID

2019-02-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769640#comment-16769640
 ] 

Hadoop QA commented on HDDS-1053:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 15s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 29s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
|   | hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1053 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958921/HDDS-1053-002.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux e811bc892055 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / d10444e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2285/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2285/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2285/testReport/ |
| Max. process+thread count | 117 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-ozone/ozone-manager U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2285/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Generate RaftGroupId from OMServiceID
> -
>
> Key: HDDS-1053
> URL: https://issues.apache.org/jira/browse/HDDS-1053
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1053-000.patch, 

[jira] [Updated] (HDDS-1053) Generate RaftGroupId from OMServiceID

2019-02-15 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1053:

Attachment: HDDS-1053-003.patch
Status: Patch Available  (was: Open)

Added a unit test for a non-default OmServiceId.

> Generate RaftGroupId from OMServiceID
> -
>
> Key: HDDS-1053
> URL: https://issues.apache.org/jira/browse/HDDS-1053
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1053-000.patch, HDDS-1053-001.patch, 
> HDDS-1053-002.patch, HDDS-1053-003.patch
>
>
> Ratis requires {{RaftGroupId}} to be a UUID. We need to generate this ID from 
> the {{OMServiceID}} so that it is consistent across all the OM nodes in an HA 
> service.
> Currently, we expect {{OMServiceId}} to be a 16-character string so that it 
> can be converted to a UUID. But {{OMServiceID}} is a user-configurable 
> setting, hence we cannot force users to input a 16-character string.
> One option is to hash the {{OMServiceID}} string, truncate it to UUID 
> length, and use that to generate the UUID.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1053) Generate RaftGroupId from OMServiceID

2019-02-15 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769635#comment-16769635
 ] 

Arpit Agarwal commented on HDDS-1053:
-

The patch LGTM. Let's add a unit test for the non-default serviceID case.

> Generate RaftGroupId from OMServiceID
> -
>
> Key: HDDS-1053
> URL: https://issues.apache.org/jira/browse/HDDS-1053
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1053-000.patch, HDDS-1053-001.patch, 
> HDDS-1053-002.patch
>
>
> Ratis requires {{RaftGroupId}} to be a UUID. We need to generate this ID from 
> the {{OMServiceID}} so that it is consistent across all the OM nodes in an HA 
> service.
> Currently, we expect {{OMServiceId}} to be a 16-character string so that it 
> can be converted to a UUID. But {{OMServiceID}} is a user-configurable 
> setting, hence we cannot force users to input a 16-character string.
> One option is to hash the {{OMServiceID}} string, truncate it to UUID 
> length, and use that to generate the UUID.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1053) Generate RaftGroupId from OMServiceID

2019-02-15 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1053:

Status: Open  (was: Patch Available)

> Generate RaftGroupId from OMServiceID
> -
>
> Key: HDDS-1053
> URL: https://issues.apache.org/jira/browse/HDDS-1053
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1053-000.patch, HDDS-1053-001.patch, 
> HDDS-1053-002.patch
>
>
> Ratis requires {{RaftGroupId}} to be a UUID. We need to generate this ID from 
> the {{OMServiceID}} so that it is consistent across all the OM nodes in an HA 
> service.
> Currently, we expect {{OMServiceId}} to be a 16-character string so that it 
> can be converted to a UUID. But {{OMServiceID}} is a user-configurable 
> setting, hence we cannot force users to input a 16-character string.
> One option is to hash the {{OMServiceID}} string, truncate it to UUID 
> length, and use that to generate the UUID.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1053) Generate RaftGroupId from OMServiceID

2019-02-15 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769621#comment-16769621
 ] 

Hanisha Koneru commented on HDDS-1053:
--

Thanks for the patch [~avijayan].
LGTM. +1 pending Jenkins.

> Generate RaftGroupId from OMServiceID
> -
>
> Key: HDDS-1053
> URL: https://issues.apache.org/jira/browse/HDDS-1053
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1053-000.patch, HDDS-1053-001.patch, 
> HDDS-1053-002.patch
>
>
> Ratis requires {{RaftGroupId}} to be a UUID. We need to generate this ID from 
> the {{OMServiceID}} so that it is consistent across all the OM nodes in an HA 
> service.
> Currently, we expect {{OMServiceId}} to be a 16-character string so that it 
> can be converted to a UUID. But {{OMServiceID}} is a user-configurable 
> setting, hence we cannot force users to input a 16-character string.
> One option is to hash the {{OMServiceID}} string, truncate it to UUID 
> length, and use that to generate the UUID.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1092) Use Java 11 JRE to run Ozone in containers

2019-02-15 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769619#comment-16769619
 ] 

Anu Engineer commented on HDDS-1092:


[~ajisakaa] San, in case you did not see it: I will keep you posted on how 
this experiment goes; we are going to run a full test pass using JRE 11. 
Thought it might be interesting to you.

> Use Java 11 JRE to run Ozone in containers
> --
>
> Key: HDDS-1092
> URL: https://issues.apache.org/jira/browse/HDDS-1092
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1092-docker-hadoop-runner.002.patch, 
> HDDS-1092.001.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> As of now we use openjdk 1.8.0 in the Ozone containers.
> Java 9 and Java 10 introduced advanced support for the resource management 
> of containers, and not all of it is available in the latest release of 
> 1.8.0. (see this blog for more details: 
> https://medium.com/adorsys/jvm-memory-settings-in-a-container-environment-64b0840e1d9e)
> I propose switching to Java 11 in the containers and testing everything with 
> Java 11 at runtime.
> Note: this issue is just about the runtime JDK, not about the compile-time JDK.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1117) Add async profiler to the hadoop-runner base container image

2019-02-15 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1117:
---
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

Thank you for the contribution. I have committed this to the 
docker-hadoop-runner branch.

> Add async profiler to the hadoop-runner base container image
> 
>
> Key: HDDS-1117
> URL: https://issues.apache.org/jira/browse/HDDS-1117
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDDS-1116 provides a simple servlet to execute the async profiler 
> (https://github.com/jvm-profiling-tools/async-profiler), thanks to the Hive 
> developers.
> To run it in the docker-compose based example environments, we should add it 
> to the apache/hadoop-runner base image. 
> Note: the size is not significant; the downloadable package is 102k.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1117) Add async profiler to the hadoop-runner base container image

2019-02-15 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769610#comment-16769610
 ] 

Anu Engineer commented on HDDS-1117:


+1. I will commit this now.

> Add async profiler to the hadoop-runner base container image
> 
>
> Key: HDDS-1117
> URL: https://issues.apache.org/jira/browse/HDDS-1117
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDDS-1116 provides a simple servlet to execute the async profiler 
> (https://github.com/jvm-profiling-tools/async-profiler), thanks to the Hive 
> developers.
> To run it in the docker-compose based example environments, we should add it 
> to the apache/hadoop-runner base image. 
> Note: the size is not significant; the downloadable package is 102k.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1053) Generate RaftGroupId from OMServiceID

2019-02-15 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1053:

Status: Patch Available  (was: Open)

> Generate RaftGroupId from OMServiceID
> -
>
> Key: HDDS-1053
> URL: https://issues.apache.org/jira/browse/HDDS-1053
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1053-000.patch, HDDS-1053-001.patch, 
> HDDS-1053-002.patch
>
>
> Ratis requires {{RaftGroupId}} to be a UUID. We need to generate this ID from 
> the {{OMServiceID}} so that it is consistent across all the OM nodes in an HA 
> service.
> Currently, we expect {{OMServiceId}} to be a 16-character string so that it 
> can be converted to a UUID. But {{OMServiceID}} is a user-configurable 
> setting, hence we cannot force users to input a 16-character string.
> One option is to hash the {{OMServiceID}} string, truncate it to UUID 
> length, and use that to generate the UUID.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14258) Introduce Java Concurrent Package To DataXceiverServer Class

2019-02-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14258:
---
Status: Open  (was: Patch Available)

> Introduce Java Concurrent Package To DataXceiverServer Class
> 
>
> Key: HDFS-14258
> URL: https://issues.apache.org/jira/browse/HDFS-14258
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14258.1.patch, HDFS-14258.2.patch, 
> HDFS-14258.3.patch, HDFS-14258.4.patch, HDFS-14258.5.patch, 
> HDFS-14258.6.patch, HDFS-14258.7.patch, HDFS-14258.8.patch, HDFS-14258.9.patch
>
>
> * Use the Java concurrent package to replace the current facilities in 
> {{DataXceiverServer}}.
> * A little bit of extra cleanup



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14258) Introduce Java Concurrent Package To DataXceiverServer Class

2019-02-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14258:
---
Attachment: HDFS-14258.9.patch

> Introduce Java Concurrent Package To DataXceiverServer Class
> 
>
> Key: HDFS-14258
> URL: https://issues.apache.org/jira/browse/HDFS-14258
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14258.1.patch, HDFS-14258.2.patch, 
> HDFS-14258.3.patch, HDFS-14258.4.patch, HDFS-14258.5.patch, 
> HDFS-14258.6.patch, HDFS-14258.7.patch, HDFS-14258.8.patch, HDFS-14258.9.patch
>
>
> * Use the Java concurrent package to replace the current facilities in 
> {{DataXceiverServer}}.
> * A little bit of extra cleanup



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14258) Introduce Java Concurrent Package To DataXceiverServer Class

2019-02-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14258:
---
Status: Patch Available  (was: Open)

Try this patch... I had to add a new local variable to the class, but it 
allows for a dynamic "max wait time".
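
A rough illustration of the pattern that comment describes; the field name, 
constructor and timeout source are assumptions for the sketch, not the 
attached patch:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: hold the maximum shutdown wait in a variable so it can be
// derived from configuration rather than a hard-coded constant.
public class XceiverShutdownSketch {

  private final ExecutorService pool;
  private final long maxWaitMs;  // the assumed new "max wait" value

  XceiverShutdownSketch(ExecutorService pool, long maxWaitMs) {
    this.pool = pool;
    this.maxWaitMs = maxWaitMs;
  }

  void stopAndWait() throws InterruptedException {
    pool.shutdown();
    // Wait up to the dynamic bound for in-flight transfers to drain.
    if (!pool.awaitTermination(maxWaitMs, TimeUnit.MILLISECONDS)) {
      pool.shutdownNow();
    }
  }
}
{code}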

> Introduce Java Concurrent Package To DataXceiverServer Class
> 
>
> Key: HDFS-14258
> URL: https://issues.apache.org/jira/browse/HDFS-14258
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14258.1.patch, HDFS-14258.2.patch, 
> HDFS-14258.3.patch, HDFS-14258.4.patch, HDFS-14258.5.patch, 
> HDFS-14258.6.patch, HDFS-14258.7.patch, HDFS-14258.8.patch, HDFS-14258.9.patch
>
>
> * Use the Java concurrent package to replace the current facilities in 
> {{DataXceiverServer}}.
> * A little bit of extra cleanup



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14268) RBF: Fix the location of the DNs in getDatanodeReport()

2019-02-15 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14268:
---
Fix Version/s: HDFS-13891

> RBF: Fix the location of the DNs in getDatanodeReport()
> ---
>
> Key: HDFS-14268
> URL: https://issues.apache.org/jira/browse/HDFS-14268
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14268-HDFS-13891.000.patch, 
> HDFS-14268-HDFS-13891.001.patch, HDFS-14268-HDFS-13891.002.patch, 
> HDFS-14268-HDFS-13891.003.patch, HDFS-14268-HDFS-13891.004.patch
>
>
> When getting all the DNs in the federation, the Router queries each of the 
> subclusters and aggregates the results, assigning the subcluster id to the 
> location. This query uses a {{HashSet}}, which yields a "random" order for 
> the results.
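
For reference, the pitfall is that a {{HashSet}} iterates in an effectively 
arbitrary order; an ordered collection gives a stable report. A tiny 
illustrative sketch (not the patch itself):

{code:java}
import java.util.Set;
import java.util.TreeSet;

public class StableDnOrderSketch {
  public static void main(String[] args) {
    // A TreeSet (unlike HashSet) iterates in sorted order, so the
    // aggregated locations come back the same way on every call.
    Set<String> locations = new TreeSet<>();
    locations.add("subcluster1/rack0/dn-b");
    locations.add("subcluster0/rack1/dn-a");
    locations.forEach(System.out::println);
  }
}
{code}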



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1119) DN get the certificate from SCM CA for token validation

2019-02-15 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-1119:


 Summary: DN get the certificate from SCM CA for token validation
 Key: HDDS-1119
 URL: https://issues.apache.org/jira/browse/HDDS-1119
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This is needed when the OM receives a delegation token signed by another OM 
instance and does not have the certificate for the foreign OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14259) RBF: Fix safemode message for Router

2019-02-15 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769577#comment-16769577
 ] 

Ranith Sardar commented on HDFS-14259:
--

[~elgoiri], I will update the patch very soon.

> RBF: Fix safemode message for Router
> 
>
> Key: HDFS-14259
> URL: https://issues.apache.org/jira/browse/HDFS-14259
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14259-HDFS-13891.000.patch
>
>
> Currently, the {{getSafemode()}} bean checks the state of the Router but 
> returns the safemode message when the state is anything other than SAFEMODE:
> {code}
>   public String getSafemode() {
>     try {
>       if (!getRouter().isRouterState(RouterServiceState.SAFEMODE)) {
>         return "Safe mode is ON. " + this.getSafeModeTip();
>       }
>     } catch (IOException e) {
>       return "Failed to get safemode status. Please check router"
>           + "log for more detail.";
>     }
>     return "";
>   }
> {code}
> The condition should be reversed.
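
Reversing the check (with the enclosing try restored) would presumably look 
like this sketch:

{code:java}
public String getSafemode() {
  try {
    if (getRouter().isRouterState(RouterServiceState.SAFEMODE)) {
      return "Safe mode is ON. " + this.getSafeModeTip();
    }
  } catch (IOException e) {
    return "Failed to get safemode status. Please check router "
        + "log for more detail.";
  }
  return "";
}
{code}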



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1119) DN get the certificate from SCM CA for token validation

2019-02-15 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1119:
-
Description: This is needed when the DN receives a block token signed by an 
OM and does not have the certificate of that OM.  (was: This is needed when 
the OM received delegation token signed by other OM instances and it does not 
have the certificate for foreign OM.)

> DN get the certificate from SCM CA for token validation
> ---
>
> Key: HDDS-1119
> URL: https://issues.apache.org/jira/browse/HDDS-1119
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> This is needed when the DN receives a block token signed by an OM and does 
> not have the certificate of that OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


