[jira] [Commented] (HDDS-805) Block token: Client api changes for block token

2018-11-29 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16704374#comment-16704374
 ] 

Yiqun Lin commented on HDDS-805:


Some minor comments from me:

* Can we remove the class {{OzoneBlockTokenSelector}}? As far as I can see, we 
can get the token from XceiverClient.
* Can we separate the inner class {{ChunkOutputStreamEntry}} out of 
{{ChunkGroupOutputStream}}? Since we have added the builder constructor 
functions, this class has become large enough.

BTW, which JIRA is tracking the work on ozone block token validation? I didn't 
find this logic, but maybe I missed something.

 

> Block token: Client api changes for block token
> ---
>
> Key: HDDS-805
> URL: https://issues.apache.org/jira/browse/HDDS-805
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-805-HDDS-4.00.patch
>
>







[jira] [Updated] (HDFS-14117) RBF:We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2018-11-29 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-14117:

Description: 
When we delete files or dirs in HDFS, the deleted files or dirs are moved to 
trash by default.

But in the global path we can only mount one trash dir, /user. So we mount 
trash dir /user of the subcluster ns1 to the global path /user. Then we can 
delete files or dirs of ns1, but when we delete files or dirs of another 
subcluster, such as hacluster, the delete fails.
h1. Mount Table
||Global path||Target nameservice||Target path||Order||Read 
only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/test|hacluster2|/test| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:42|2018/11/29 14:37:42|
|/tmp|hacluster1|/tmp| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:05|2018/11/29 14:37:05|
|/user|hacluster2,hacluster1|/user|HASH| |securedn|users|rwxr-xr-x|[NsQuota: 
-/-, SsQuota: -/-]|2018/11/29 14:42:37|2018/11/29 14:38:20|

commands: 
{noformat}
1./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /test/.
18/11/30 11:00:47 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r-- 3 securedn supergroup 8081 2018-11-30 10:56 /test/hdfs.cmd

2./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /tmp/mapred.cmd
18/11/30 11:01:02 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
rm: Failed to move to trash: hdfs://router/tmp/mapred.cmd: rename destination 
parent /user/securedn/.Trash/Current/tmp/mapred.cmd not found.

3./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /test/hdfs.cmd
18/11/30 11:01:20 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
18/11/30 11:01:22 INFO fs.TrashPolicyDefault: Moved: 
'hdfs://router/test/hdfs.cmd' to trash at: 
hdfs://router/user/securedn/.Trash/Current/test/hdfs.cmd
{noformat}

  was:
When we delete files or dirs in HDFS, the deleted files or dirs are moved to 
trash by default.

But in the global path we can only mount one trash dir, /user. So we mount 
trash dir /user of the subcluster ns1 to the global path /user. Then we can 
delete files or dirs of ns1, but when we delete files or dirs of another 
subcluster, such as hacluster, the delete fails.
h1. Mount Table
||Global path||Target nameservice||Target path||Order||Read 
only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/test|hacluster2|/test| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:42|2018/11/29 14:37:42|
|/tmp|hacluster1|/tmp| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:05|2018/11/29 14:37:05|
|/user|hacluster2,hacluster1|/user|HASH| |securedn|users|rwxr-xr-x|[NsQuota: 
-/-, SsQuota: -/-]|2018/11/29 14:42:37|2018/11/29 14:38:20|

 

commands: 
{noformat}
1./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /test/.
18/11/30 11:00:47 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r-- 3 securedn supergroup 8081 2018-11-30 10:56 /test/hdfs.cmd

2./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /tmp/mapred.cmd
18/11/30 11:01:02 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
rm: Failed to move to trash: hdfs://router/tmp/mapred.cmd: rename destination 
parent /user/securedn/.Trash/Current/tmp/mapred.cmd not found.

3./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /test/hdfs.cmd
18/11/30 11:01:20 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
18/11/30 11:01:22 INFO fs.TrashPolicyDefault: Moved: 
'hdfs://router/test/hdfs.cmd' to trash at: 
hdfs://router/user/securedn/.Trash/Current/test/hdfs.cmd
{noformat}
 

 

 


> RBF:We can only delete the files or dirs of one subcluster in a cluster with 
> multiple subclusters when trash is enabled
> ---
>
> Key: HDFS-14117
> URL: https://issues.apache.org/jira/browse/HDFS-14117
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Major
>
> When we delete files or dirs in HDFS, the deleted files or dirs are moved 
> to trash by default.
> But in the global path we can only mount one trash dir, /user. So we mount 
> trash dir 

[jira] [Commented] (HDDS-870) Avoid creating block sized buffer in ChunkGroupOutputStream

2018-11-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16704301#comment-16704301
 ] 

Hadoop QA commented on HDDS-870:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
12s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 34s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}117m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.TestMultipleContainerReadWrite |
|   | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.container.common.helpers.TestBlockData |
|   | hadoop.ozone.scm.TestContainerSmallFile |
|   | 

[jira] [Updated] (HDFS-14117) RBF:We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2018-11-29 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-14117:

Labels: RBF  (was: )

> RBF:We can only delete the files or dirs of one subcluster in a cluster with 
> multiple subclusters when trash is enabled
> ---
>
> Key: HDFS-14117
> URL: https://issues.apache.org/jira/browse/HDFS-14117
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: Surendra Singh Lilhore
>Priority: Major
>  Labels: RBF
>
> When we delete files or dirs in HDFS, the deleted files or dirs are moved 
> to trash by default.
> But in the global path we can only mount one trash dir, /user. So we mount 
> trash dir /user of the subcluster ns1 to the global path /user. Then we can 
> delete files or dirs of ns1, but when we delete files or dirs of another 
> subcluster, such as hacluster, the delete fails.
> h1. Mount Table
> ||Global path||Target nameservice||Target path||Order||Read 
> only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
> |/test|hacluster2|/test| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
> -/-]|2018/11/29 14:37:42|2018/11/29 14:37:42|
> |/tmp|hacluster1|/tmp| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
> -/-]|2018/11/29 14:37:05|2018/11/29 14:37:05|
> |/user|hacluster2,hacluster1|/user|HASH| |securedn|users|rwxr-xr-x|[NsQuota: 
> -/-, SsQuota: -/-]|2018/11/29 14:42:37|2018/11/29 14:38:20|
> commands: 
> {noformat}
> 1./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /test/.
> 18/11/30 11:00:47 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Found 1 items
> -rw-r--r-- 3 securedn supergroup 8081 2018-11-30 10:56 /test/hdfs.cmd
> 2./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /tmp/.
> 18/11/30 11:00:40 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Found 1 items
> -rw-r--r--   3 securedn supergroup   6311 2018-11-30 10:57 /tmp/mapred.cmd
> 3./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm 
> /tmp/mapred.cmd
> 18/11/30 11:01:02 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> rm: Failed to move to trash: hdfs://router/tmp/mapred.cmd: rename destination 
> parent /user/securedn/.Trash/Current/tmp/mapred.cmd not found.
> 4./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /test/hdfs.cmd
> 18/11/30 11:01:20 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 18/11/30 11:01:22 INFO fs.TrashPolicyDefault: Moved: 
> 'hdfs://router/test/hdfs.cmd' to trash at: 
> hdfs://router/user/securedn/.Trash/Current/test/hdfs.cmd
> {noformat}






[jira] [Updated] (HDFS-14117) RBF:We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2018-11-29 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-14117:

Description: 
When we delete files or dirs in HDFS, the deleted files or dirs are moved to 
trash by default.

But in the global path we can only mount one trash dir, /user. So we mount 
trash dir /user of the subcluster ns1 to the global path /user. Then we can 
delete files or dirs of ns1, but when we delete files or dirs of another 
subcluster, such as hacluster, the delete fails.
h1. Mount Table
||Global path||Target nameservice||Target path||Order||Read 
only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/test|hacluster2|/test| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:42|2018/11/29 14:37:42|
|/tmp|hacluster1|/tmp| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:05|2018/11/29 14:37:05|
|/user|hacluster2,hacluster1|/user|HASH| |securedn|users|rwxr-xr-x|[NsQuota: 
-/-, SsQuota: -/-]|2018/11/29 14:42:37|2018/11/29 14:38:20|

commands: 
{noformat}
1./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /test/.
18/11/30 11:00:47 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r-- 3 securedn supergroup 8081 2018-11-30 10:56 /test/hdfs.cmd

2./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /tmp/.
18/11/30 11:00:40 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r--   3 securedn supergroup   6311 2018-11-30 10:57 /tmp/mapred.cmd

3./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /tmp/mapred.cmd
18/11/30 11:01:02 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
rm: Failed to move to trash: hdfs://router/tmp/mapred.cmd: rename destination 
parent /user/securedn/.Trash/Current/tmp/mapred.cmd not found.

4./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /test/hdfs.cmd
18/11/30 11:01:20 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
18/11/30 11:01:22 INFO fs.TrashPolicyDefault: Moved: 
'hdfs://router/test/hdfs.cmd' to trash at: 
hdfs://router/user/securedn/.Trash/Current/test/hdfs.cmd
{noformat}

  was:
When we delete files or dirs in HDFS, the deleted files or dirs are moved to 
trash by default.

But in the global path we can only mount one trash dir, /user. So we mount 
trash dir /user of the subcluster ns1 to the global path /user. Then we can 
delete files or dirs of ns1, but when we delete files or dirs of another 
subcluster, such as hacluster, the delete fails.
h1. Mount Table
||Global path||Target nameservice||Target path||Order||Read 
only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/test|hacluster2|/test| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:42|2018/11/29 14:37:42|
|/tmp|hacluster1|/tmp| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:05|2018/11/29 14:37:05|
|/user|hacluster2,hacluster1|/user|HASH| |securedn|users|rwxr-xr-x|[NsQuota: 
-/-, SsQuota: -/-]|2018/11/29 14:42:37|2018/11/29 14:38:20|

commands: 
{noformat}
1./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /test/.
18/11/30 11:00:47 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r-- 3 securedn supergroup 8081 2018-11-30 10:56 /test/hdfs.cmd

2./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /tmp/mapred.cmd
18/11/30 11:01:02 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
rm: Failed to move to trash: hdfs://router/tmp/mapred.cmd: rename destination 
parent /user/securedn/.Trash/Current/tmp/mapred.cmd not found.

3./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /test/hdfs.cmd
18/11/30 11:01:20 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
18/11/30 11:01:22 INFO fs.TrashPolicyDefault: Moved: 
'hdfs://router/test/hdfs.cmd' to trash at: 
hdfs://router/user/securedn/.Trash/Current/test/hdfs.cmd
{noformat}


> RBF:We can only delete the files or dirs of one subcluster in a cluster with 
> multiple subclusters when trash is enabled
> ---
>
> Key: HDFS-14117
> URL: https://issues.apache.org/jira/browse/HDFS-14117
> Project: Hadoop HDFS
>  Issue Type: Bug
>   

[jira] [Commented] (HDFS-14114) RBF:MIN_ACTIVE_RATIO should be configurable

2018-11-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16704293#comment-16704293
 ] 

Hadoop QA commented on HDFS-14114:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
39s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14114 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950120/HDFS-14114.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 8e23434b34e1 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c9bfca2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25678/testReport/ |
| Max. process+thread count | 958 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25678/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.




[jira] [Assigned] (HDFS-14117) RBF:We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2018-11-29 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch reassigned HDFS-14117:
---

Assignee: Surendra Singh Lilhore  (was: venkata ram kumar ch)

> RBF:We can only delete the files or dirs of one subcluster in a cluster with 
> multiple subclusters when trash is enabled
> ---
>
> Key: HDFS-14117
> URL: https://issues.apache.org/jira/browse/HDFS-14117
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: Surendra Singh Lilhore
>Priority: Major
>
> When we delete files or dirs in HDFS, the deleted files or dirs are moved 
> to trash by default.
> But in the global path we can only mount one trash dir, /user. So we mount 
> trash dir /user of the subcluster ns1 to the global path /user. Then we can 
> delete files or dirs of ns1, but when we delete files or dirs of another 
> subcluster, such as hacluster, the delete fails.
> h1. Mount Table
> ||Global path||Target nameservice||Target path||Order||Read 
> only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
> |/test|hacluster2|/test| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
> -/-]|2018/11/29 14:37:42|2018/11/29 14:37:42|
> |/tmp|hacluster1|/tmp| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
> -/-]|2018/11/29 14:37:05|2018/11/29 14:37:05|
> |/user|hacluster2,hacluster1|/user|HASH| |securedn|users|rwxr-xr-x|[NsQuota: 
> -/-, SsQuota: -/-]|2018/11/29 14:42:37|2018/11/29 14:38:20|
> commands: 
> {noformat}
> 1./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /test/.
> 18/11/30 11:00:47 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Found 1 items
> -rw-r--r-- 3 securedn supergroup 8081 2018-11-30 10:56 /test/hdfs.cmd
> 2./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm 
> /tmp/mapred.cmd
> 18/11/30 11:01:02 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> rm: Failed to move to trash: hdfs://router/tmp/mapred.cmd: rename destination 
> parent /user/securedn/.Trash/Current/tmp/mapred.cmd not found.
> 3./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /test/hdfs.cmd
> 18/11/30 11:01:20 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 18/11/30 11:01:22 INFO fs.TrashPolicyDefault: Moved: 
> 'hdfs://router/test/hdfs.cmd' to trash at: 
> hdfs://router/user/securedn/.Trash/Current/test/hdfs.cmd
> {noformat}






[jira] [Updated] (HDFS-14117) RBF:We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2018-11-29 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-14117:

Description: 
When we delete files or dirs in HDFS, the deleted files or dirs are moved to 
trash by default.

But in the global path we can only mount one trash dir, /user. So we mount 
trash dir /user of the subcluster ns1 to the global path /user. Then we can 
delete files or dirs of ns1, but when we delete files or dirs of another 
subcluster, such as hacluster, the delete fails.
h1. Mount Table
||Global path||Target nameservice||Target path||Order||Read 
only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/test|hacluster2|/test| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:42|2018/11/29 14:37:42|
|/tmp|hacluster1|/tmp| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:05|2018/11/29 14:37:05|
|/user|hacluster2,hacluster1|/user|HASH| |securedn|users|rwxr-xr-x|[NsQuota: 
-/-, SsQuota: -/-]|2018/11/29 14:42:37|2018/11/29 14:38:20|

 

commands: 
{noformat}
1./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /test/.
18/11/30 11:00:47 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r-- 3 securedn supergroup 8081 2018-11-30 10:56 /test/hdfs.cmd

2./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /tmp/mapred.cmd
18/11/30 11:01:02 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
rm: Failed to move to trash: hdfs://router/tmp/mapred.cmd: rename destination 
parent /user/securedn/.Trash/Current/tmp/mapred.cmd not found.

3./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /test/hdfs.cmd
18/11/30 11:01:20 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
18/11/30 11:01:22 INFO fs.TrashPolicyDefault: Moved: 
'hdfs://router/test/hdfs.cmd' to trash at: 
hdfs://router/user/securedn/.Trash/Current/test/hdfs.cmd
{noformat}
 

 

 

  was:
When we delete files or dirs in HDFS, the deleted files or dirs are moved to 
trash by default.

But in the global path we can only mount one trash dir, /user. So we mount 
trash dir /user of the subcluster ns1 to the global path /user. Then we can 
delete files or dirs of ns1, but when we delete files or dirs of another 
subcluster, such as hacluster, the delete fails.
h1. Mount Table
||Global path||Target nameservice||Target path||Order||Read 
only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/test|hacluster2|/test| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:42|2018/11/29 14:37:42|
|/tmp|hacluster1|/tmp| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:05|2018/11/29 14:37:05|
|/user|hacluster2,hacluster1|/user|HASH| |securedn|users|rwxr-xr-x|[NsQuota: 
-/-, SsQuota: -/-]|2018/11/29 14:42:37|2018/11/29 14:38:20|

 

commands: 

1./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /test/.
18/11/30 11:00:47 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r-- 3 securedn supergroup 8081 2018-11-30 10:56 /test/hdfs.cmd

2./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /tmp/mapred.cmd
18/11/30 11:01:02 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
rm: Failed to move to trash: hdfs://router/tmp/mapred.cmd: rename destination 
parent /user/securedn/.Trash/Current/tmp/mapred.cmd not found.

3./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /test/hdfs.cmd
18/11/30 11:01:20 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
18/11/30 11:01:22 INFO fs.TrashPolicyDefault: Moved: 
'hdfs://router/test/hdfs.cmd' to trash at: 
hdfs://router/user/securedn/.Trash/Current/test/hdfs.cmd

 

 

 


> RBF:We can only delete the files or dirs of one subcluster in a cluster with 
> multiple subclusters when trash is enabled
> ---
>
> Key: HDFS-14117
> URL: https://issues.apache.org/jira/browse/HDFS-14117
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Major
>
> When we delete files or dirs in HDFS, the deleted files or dirs are moved 
> to trash by default.
> But in the global path we can only mount one trash dir, /user. So we mount 
> trash dir /user of 

[jira] [Updated] (HDFS-14117) RBF:We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2018-11-29 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-14117:

Description: 
When we delete files or dirs in HDFS, the deleted files or dirs are moved to 
trash by default.

But in the global path we can only mount one trash dir, /user. So we mount 
trash dir /user of the subcluster ns1 to the global path /user. Then we can 
delete files or dirs of ns1, but when we delete files or dirs of another 
subcluster, such as hacluster, the delete fails.
h1. Mount Table
||Global path||Target nameservice||Target path||Order||Read 
only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/test|hacluster2|/test| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:42|2018/11/29 14:37:42|
|/tmp|hacluster1|/tmp| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:05|2018/11/29 14:37:05|
|/user|hacluster2,hacluster1|/user|HASH| |securedn|users|rwxr-xr-x|[NsQuota: 
-/-, SsQuota: -/-]|2018/11/29 14:42:37|2018/11/29 14:38:20|

 

commands: 

1./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /test/.
18/11/30 11:00:47 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r-- 3 securedn supergroup 8081 2018-11-30 10:56 /test/hdfs.cmd

2./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /tmp/mapred.cmd
18/11/30 11:01:02 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
rm: Failed to move to trash: hdfs://router/tmp/mapred.cmd: rename destination 
parent /user/securedn/.Trash/Current/tmp/mapred.cmd not found.

3./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /test/hdfs.cmd
18/11/30 11:01:20 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
18/11/30 11:01:22 INFO fs.TrashPolicyDefault: Moved: 
'hdfs://router/test/hdfs.cmd' to trash at: 
hdfs://router/user/securedn/.Trash/Current/test/hdfs.cmd

 

 

 

  was:
When we delete files or dirs in HDFS, the deleted files or dirs are moved to 
trash by default.

But in the global path we can only mount one trash dir, /user. So we mount 
trash dir /user of the subcluster ns1 to the global path /user. Then we can 
delete files or dirs of ns1, but when we delete files or dirs of another 
subcluster, such as hacluster, the delete fails.


> RBF:We can only delete the files or dirs of one subcluster in a cluster with 
> multiple subclusters when trash is enabled
> ---
>
> Key: HDFS-14117
> URL: https://issues.apache.org/jira/browse/HDFS-14117
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Major
>
> When we delete files or dirs in HDFS, the deleted files or dirs are moved 
> to trash by default.
> But in the global path we can only mount one trash dir, /user. So we mount 
> trash dir /user of the subcluster ns1 to the global path /user. Then we can 
> delete files or dirs of ns1, but when we delete files or dirs of another 
> subcluster, such as hacluster, the delete fails.
> h1. Mount Table
> ||Global path||Target nameservice||Target path||Order||Read 
> only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
> |/test|hacluster2|/test| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
> -/-]|2018/11/29 14:37:42|2018/11/29 14:37:42|
> |/tmp|hacluster1|/tmp| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
> -/-]|2018/11/29 14:37:05|2018/11/29 14:37:05|
> |/user|hacluster2,hacluster1|/user|HASH| |securedn|users|rwxr-xr-x|[NsQuota: 
> -/-, SsQuota: -/-]|2018/11/29 14:42:37|2018/11/29 14:38:20|
>  
> commands: 
> 1./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /test/.
> 18/11/30 11:00:47 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Found 1 items
> -rw-r--r-- 3 securedn supergroup 8081 2018-11-30 10:56 /test/hdfs.cmd
> 2./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm 
> /tmp/mapred.cmd
> 18/11/30 11:01:02 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> rm: Failed to move to trash: hdfs://router/tmp/mapred.cmd: rename destination 
> parent /user/securedn/.Trash/Current/tmp/mapred.cmd not found.
> 3./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /test/hdfs.cmd
> 18/11/30 11:01:20 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your 

[jira] [Created] (HDFS-14117) RBF:We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2018-11-29 Thread venkata ram kumar ch (JIRA)
venkata ram kumar ch created HDFS-14117:
---

 Summary: RBF:We can only delete the files or dirs of one 
subcluster in a cluster with multiple subclusters when trash is enabled
 Key: HDFS-14117
 URL: https://issues.apache.org/jira/browse/HDFS-14117
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: venkata ram kumar ch


When we delete files or dirs in HDFS, the deleted files or dirs are moved to 
trash by default.

But in the global path we can only mount one trash dir, /user. So we mount 
trash dir /user of the subcluster ns1 to the global path /user. Then we can 
delete files or dirs of ns1, but when we delete files or dirs of another 
subcluster, such as hacluster, the delete fails.
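
A rough sketch of why the move fails, assuming simplified paths and the mount 
table shown in the other updates (where /tmp targets only hacluster1 while 
/user is HASH-mounted across both subclusters); class and variable names here 
are illustrative only:
{code:java}
import org.apache.hadoop.fs.Path;

public class TrashMoveSketch {
  public static void main(String[] args) {
    // TrashPolicyDefault renames a deleted file into the user's trash root,
    // which always lives under /user.
    Path deleted = new Path("/tmp/mapred.cmd");                    // /tmp resolves to hacluster1
    Path trashCurrent = new Path("/user/securedn/.Trash/Current"); // /user is HASH-mounted
    Path target = Path.mergePaths(trashCurrent, deleted);
    // target = /user/securedn/.Trash/Current/tmp/mapred.cmd. When the HASH
    // order resolves this path to hacluster2, the rename would have to cross
    // nameservices, so the router fails with "rename destination parent ...
    // not found", matching the rm output above.
    System.out.println(target);
  }
}
{code}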






[jira] [Assigned] (HDFS-14117) RBF:We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2018-11-29 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch reassigned HDFS-14117:
---

Assignee: venkata ram kumar ch

> RBF:We can only delete the files or dirs of one subcluster in a cluster with 
> multiple subclusters when trash is enabled
> ---
>
> Key: HDFS-14117
> URL: https://issues.apache.org/jira/browse/HDFS-14117
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Major
>
> When we delete files or dirs in HDFS, the deleted files or dirs are moved 
> to trash by default.
> But in the global path we can only mount one trash dir, /user. So we mount 
> trash dir /user of the subcluster ns1 to the global path /user. Then we can 
> delete files or dirs of ns1, but when we delete files or dirs of another 
> subcluster, such as hacluster, the delete fails.






[jira] [Commented] (HDFS-13870) WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT API doc

2018-11-29 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16704282#comment-16704282
 ] 

Siyao Meng commented on HDFS-13870:
---

[~jojochuang] [~linyiqun] Thanks for reviewing and committing!
[~brahmareddy] Thanks for closing the duplicate JIRA.

> WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT API doc
> 
>
> Key: HDFS-13870
> URL: https://issues.apache.org/jira/browse/HDFS-13870
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: documentation, webhdfs
>Affects Versions: 2.8.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
> Fix For: 2.10.0, 3.3.0
>
> Attachments: HDFS-13870.001.patch
>
>
> Adding ALLOWSNAPSHOT and DISALLOWSNAPSHOT (since 2.8.0, HDFS-9057) to WebHDFS 
> REST API 
> [doc|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html].
> Below are my examples of the APIs:
> {code:bash}
> # ALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://:/webhdfs/v1/snaptest/?op=ALLOWSNAPSHOT=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> {code:bash}
> # DISALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://:/webhdfs/v1/snaptest/?op=DISALLOWSNAPSHOT=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> Note: GETSNAPSHOTDIFF and GETSNAPSHOTTABLEDIRECTORYLIST are already 
> documented.
> {code:bash}
> # GETSNAPSHOTDIFF uses GET.
> curl 
> "http://<HOST>:<PORT>/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF&user.name=hdfs&oldsnapshotname=snap1&snapshotname=snap2"
> Response on success (example):
> HTTP/1.1 200 OK
> Content-Type: application/json
> {"SnapshotDiffReport":{"diffList":[{"sourcePath":"","type":"MODIFY"},{"sourcePath":"newfile.txt","type":"CREATE"}],"fromSnapshot":"snapOld","snapshotRoot":"/snaptest","toSnapshot":"snapNew"}}
> {code}
> {code:bash}
> # GETSNAPSHOTTABLEDIRECTORYLIST uses GET.
> curl 
> "http://<HOST>:<PORT>/webhdfs/v1/snaptest/?op=GETSNAPSHOTTABLEDIRECTORYLIST&user.name=hdfs"
> Response on success (example):
> HTTP/1.1 200 OK
> Content-Type: application/json
> {"SnapshottableDirectoryList":[{"dirStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16392,"group":"supergroup","length":0,"modificationTime":1535151813500,"owner":"hdfs","pathSuffix":"snaptest","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},"parentFullPath":"/","snapshotNumber":2,"snapshotQuota":65536}]}
> {code}






[jira] [Resolved] (HDFS-13154) Webhdfs : update the Document for allow/disallow snapshots

2018-11-29 Thread Brahma Reddy Battula (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula resolved HDFS-13154.
-
Resolution: Duplicate

Closing as a duplicate of HDFS-13870, as it's already committed.

> Webhdfs : update the Document for allow/disallow snapshots
> --
>
> Key: HDFS-13154
> URL: https://issues.apache.org/jira/browse/HDFS-13154
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 2.8.2
>Reporter: Harshakiran Reddy
>Assignee: usharani
>Priority: Minor
>
> There is no Document for Allow/Disallow snapshots.
> http://hadoop.apache.org/docs/r2.8.3/hadoop-project-dist/hadoop-hdfs/WebHDFS.html






[jira] [Commented] (HDDS-882) Provide a config to optionally turn on/off the sync flag during chunk writes

2018-11-29 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16704279#comment-16704279
 ] 

Jitendra Nath Pandey commented on HDDS-882:
---

The patch looks good to me. +1
However, we didn't get a clean Jenkins run.

> Provide a config to optionally turn on/off the sync flag during chunk writes
> 
>
> Key: HDDS-882
> URL: https://issues.apache.org/jira/browse/HDDS-882
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Minor
> Fix For: 0.4.0
>
> Attachments: HDDS-882.000.patch
>
>
> Currently, chunk writes happen with the sync flag on. We should add a config 
> to enable/disable this for performance benchmarks.






[jira] [Commented] (HDFS-14075) NPE while Edit Logging

2018-11-29 Thread Vinayakumar B (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16704271#comment-16704271
 ] 

Vinayakumar B commented on HDFS-14075:
--

+1

> NPE while Edit Logging
> --
>
> Key: HDFS-14075
> URL: https://issues.apache.org/jira/browse/HDFS-14075
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HDFS-14075-01.patch, HDFS-14075-02.patch, 
> HDFS-14075-03.patch, HDFS-14075-04.patch, HDFS-14075-04.patch, 
> HDFS-14075-04.patch, HDFS-14075-05.patch, HDFS-14075-06.patch, 
> HDFS-14075-07.patch
>
>
> {noformat}
> 2018-11-10 18:59:38,427 FATAL 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Exception while edit 
> logging: null
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:481)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:288)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:232)
>  at java.lang.Thread.run(Thread.java:745)
> 2018-11-10 18:59:38,532 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Exception while edit logging: null
> 2018-11-10 18:59:38,552 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG:
> {noformat}
> Before NPE Received the following Exception
> {noformat}
> INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 65110, call 
> Call#23241 Retry#0 
> org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 
> 
> java.io.IOException: Unable to start log segment 7964819: too few journals 
> successfully started.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1385)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegmentAndWriteHeaderTxn(FSEditLog.java:1395)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1319)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1352)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:4669)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1293)
>   at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:146)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12974)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> Caused by: java.io.IOException: starting log segment 7964819 failed for too 
> many journals
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:412)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:207)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1383)
>   ... 15 more
> {noformat}






[jira] [Commented] (HDFS-13870) WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT API doc

2018-11-29 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16704274#comment-16704274
 ] 

Brahma Reddy Battula commented on HDFS-13870:
-

There is an existing JIRA, HDFS-13154, for the same issue; I will close 
HDFS-13154 as a duplicate.

Thanks all.

> WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT API doc
> 
>
> Key: HDFS-13870
> URL: https://issues.apache.org/jira/browse/HDFS-13870
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: documentation, webhdfs
>Affects Versions: 2.8.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
> Fix For: 2.10.0, 3.3.0
>
> Attachments: HDFS-13870.001.patch
>
>
> Adding ALLOWSNAPSHOT and DISALLOWSNAPSHOT (since 2.8.0, HDFS-9057) to WebHDFS 
> REST API 
> [doc|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html].
> Below are my examples of the APIs:
> {code:bash}
> # ALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://:/webhdfs/v1/snaptest/?op=ALLOWSNAPSHOT=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> {code:bash}
> # DISALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://:/webhdfs/v1/snaptest/?op=DISALLOWSNAPSHOT=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> Note: GETSNAPSHOTDIFF and GETSNAPSHOTTABLEDIRECTORYLIST are already 
> documented.
> {code:bash}
> # GETSNAPSHOTDIFF uses GET.
> curl 
> "http://<HOST>:<PORT>/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF&user.name=hdfs&oldsnapshotname=snap1&snapshotname=snap2"
> Response on success (example):
> HTTP/1.1 200 OK
> Content-Type: application/json
> {"SnapshotDiffReport":{"diffList":[{"sourcePath":"","type":"MODIFY"},{"sourcePath":"newfile.txt","type":"CREATE"}],"fromSnapshot":"snapOld","snapshotRoot":"/snaptest","toSnapshot":"snapNew"}}
> {code}
> {code:bash}
> # GETSNAPSHOTTABLEDIRECTORYLIST uses GET.
> curl 
> "http://<HOST>:<PORT>/webhdfs/v1/snaptest/?op=GETSNAPSHOTTABLEDIRECTORYLIST&user.name=hdfs"
> Response on success (example):
> HTTP/1.1 200 OK
> Content-Type: application/json
> {"SnapshottableDirectoryList":[{"dirStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16392,"group":"supergroup","length":0,"modificationTime":1535151813500,"owner":"hdfs","pathSuffix":"snaptest","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},"parentFullPath":"/","snapshotNumber":2,"snapshotQuota":65536}]}
> {code}






[jira] [Commented] (HDFS-14114) RBF:MIN_ACTIVE_RATIO should be configurable

2018-11-29 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16704258#comment-16704258
 ] 

Fei Hui commented on HDFS-14114:


Uploaded the v004 patch to fix checkstyle.

> RBF:MIN_ACTIVE_RATIO should be configurable
> ---
>
> Key: HDFS-14114
> URL: https://issues.apache.org/jira/browse/HDFS-14114
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14114.001.patch, HDFS-14114.002.patch, 
> HDFS-14114.003.patch, HDFS-14114.004.patch
>
>
> The following code contains the MIN_ACTIVE_RATIO constant:
> {code:java}
>   if (timeSinceLastActive > connectionCleanupPeriodMs ||
>   active < MIN_ACTIVE_RATIO * total) {
> // Remove and close 1 connection
> List<ConnectionContext> conns = pool.removeConnections(1);
> for (ConnectionContext conn : conns) {
>   conn.close();
> }
> LOG.debug("Removed connection {} used {} seconds ago. " +
> "Pool has {}/{} connections", pool.getConnectionPoolId(),
> TimeUnit.MILLISECONDS.toSeconds(timeSinceLastActive),
> pool.getNumConnections(), pool.getMaxSize());
>   }
> ...
> if (pool.getNumConnections() < pool.getMaxSize() &&
> active >= MIN_ACTIVE_RATIO * total) {
>   ConnectionContext conn = pool.newConnection();
>   pool.addConnection(conn);
> } else {
>   LOG.debug("Cannot add more than {} connections to {}",
>   pool.getMaxSize(), pool);
> }
> {code}
> It affects cleaning up and creating connections. Maybe it should be 
> configurable so that we can tune it to improve performance.
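
One minimal way to make it configurable would be to read the ratio from 
Configuration once, keeping the current constant as the default; the key name 
below is hypothetical:
{code:java}
import org.apache.hadoop.conf.Configuration;

public class ConnectionManagerSketch {
  // Hypothetical key, modeled on existing RBF connection settings.
  static final String MIN_ACTIVE_RATIO_KEY =
      "dfs.federation.router.connection.min-active-ratio";
  static final float MIN_ACTIVE_RATIO_DEFAULT = 0.5f;

  private final float minActiveRatio;

  ConnectionManagerSketch(Configuration conf) {
    // Read once at construction instead of using a compiled-in constant,
    // so the cleanup and creation thresholds can be tuned per deployment.
    this.minActiveRatio =
        conf.getFloat(MIN_ACTIVE_RATIO_KEY, MIN_ACTIVE_RATIO_DEFAULT);
  }
}
{code}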






[jira] [Updated] (HDFS-14114) RBF:MIN_ACTIVE_RATIO should be configurable

2018-11-29 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-14114:
---
Attachment: HDFS-14114.004.patch

> RBF:MIN_ACTIVE_RATIO should be configurable
> ---
>
> Key: HDFS-14114
> URL: https://issues.apache.org/jira/browse/HDFS-14114
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14114.001.patch, HDFS-14114.002.patch, 
> HDFS-14114.003.patch, HDFS-14114.004.patch
>
>
> The following code contains the MIN_ACTIVE_RATIO constant:
> {code:java}
>   if (timeSinceLastActive > connectionCleanupPeriodMs ||
>   active < MIN_ACTIVE_RATIO * total) {
> // Remove and close 1 connection
> List<ConnectionContext> conns = pool.removeConnections(1);
> for (ConnectionContext conn : conns) {
>   conn.close();
> }
> LOG.debug("Removed connection {} used {} seconds ago. " +
> "Pool has {}/{} connections", pool.getConnectionPoolId(),
> TimeUnit.MILLISECONDS.toSeconds(timeSinceLastActive),
> pool.getNumConnections(), pool.getMaxSize());
>   }
> ...
> if (pool.getNumConnections() < pool.getMaxSize() &&
> active >= MIN_ACTIVE_RATIO * total) {
>   ConnectionContext conn = pool.newConnection();
>   pool.addConnection(conn);
> } else {
>   LOG.debug("Cannot add more than {} connections to {}",
>   pool.getMaxSize(), pool);
> }
> {code}
> It affects cleaning up and creating connections. Maybe it should be 
> configurable so that we can tune it to improve performance.






[jira] [Commented] (HDFS-14114) RBF:MIN_ACTIVE_RATIO should be configurable

2018-11-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16704234#comment-16704234
 ] 

Hadoop QA commented on HDFS-14114:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
26s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14114 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950113/HDFS-14114.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 8f8c62421fff 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / bad1203 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25677/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25677/testReport/ |
| Max. process+thread count | 1407 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 

[jira] [Commented] (HDFS-13870) WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT API doc

2018-11-29 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704229#comment-16704229
 ] 

Hudson commented on HDFS-13870:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15534 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15534/])
HDFS-13870. WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT API (yqlin: 
rev 0e36e935d909862401890d0a5410204504f48b31)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md


> WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT API doc
> 
>
> Key: HDFS-13870
> URL: https://issues.apache.org/jira/browse/HDFS-13870
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: documentation, webhdfs
>Affects Versions: 2.8.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
> Fix For: 2.10.0, 3.3.0
>
> Attachments: HDFS-13870.001.patch
>
>
> Adding ALLOWSNAPSHOT and DISALLOWSNAPSHOT (since 2.8.0, HDFS-9057) to WebHDFS 
> REST API 
> [doc|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html].
> Below are my examples of the APIs:
> {code:bash}
> # ALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://<HOST>:<PORT>/webhdfs/v1/snaptest/?op=ALLOWSNAPSHOT&user.name=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> {code:bash}
> # DISALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://<HOST>:<PORT>/webhdfs/v1/snaptest/?op=DISALLOWSNAPSHOT&user.name=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> Note: GETSNAPSHOTDIFF and GETSNAPSHOTTABLEDIRECTORYLIST are already 
> documented.
> {code:bash}
> # GETSNAPSHOTDIFF uses GET.
> curl 
> "http://<HOST>:<PORT>/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF&user.name=hdfs&oldsnapshotname=snap1&newsnapshotname=snap2"
> Response on success (example):
> HTTP/1.1 200 OK
> Content-Type: application/json
> {"SnapshotDiffReport":{"diffList":[{"sourcePath":"","type":"MODIFY"},{"sourcePath":"newfile.txt","type":"CREATE"}],"fromSnapshot":"snapOld","snapshotRoot":"/snaptest","toSnapshot":"snapNew"}}
> {code}
> {code:bash}
> # GETSNAPSHOTTABLEDIRECTORYLIST uses GET.
> curl 
> "http://<HOST>:<PORT>/webhdfs/v1/snaptest/?op=GETSNAPSHOTTABLEDIRECTORYLIST&user.name=hdfs"
> Response on success (example):
> HTTP/1.1 200 OK
> Content-Type: application/json
> {"SnapshottableDirectoryList":[{"dirStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16392,"group":"supergroup","length":0,"modificationTime":1535151813500,"owner":"hdfs","pathSuffix":"snaptest","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},"parentFullPath":"/","snapshotNumber":2,"snapshotQuota":65536}]}
> {code}
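
For comparison, the same two snapshot-admin operations can also be driven from 
the Java client rather than REST; a minimal sketch, assuming a NameNode 
reachable at hdfs://namenode:8020 (error handling omitted):
{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class SnapshotAdminExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // The URI is a placeholder; point it at your own NameNode.
    FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
    DistributedFileSystem dfs = (DistributedFileSystem) fs;

    Path dir = new Path("/snaptest");
    dfs.allowSnapshot(dir);     // same effect as op=ALLOWSNAPSHOT
    dfs.disallowSnapshot(dir);  // same effect as op=DISALLOWSNAPSHOT
  }
}
{code}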



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13870) WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT API doc

2018-11-29 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13870:
-
Affects Version/s: 2.8.0

> WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT API doc
> 
>
> Key: HDFS-13870
> URL: https://issues.apache.org/jira/browse/HDFS-13870
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: documentation, webhdfs
>Affects Versions: 2.8.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
> Fix For: 2.10.0, 3.3.0
>
> Attachments: HDFS-13870.001.patch
>
>
> Adding ALLOWSNAPSHOT and DISALLOWSNAPSHOT (since 2.8.0, HDFS-9057) to WebHDFS 
> REST API 
> [doc|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html].
> Below are my examples of the APIs:
> {code:bash}
> # ALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://<HOST>:<PORT>/webhdfs/v1/snaptest/?op=ALLOWSNAPSHOT&user.name=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> {code:bash}
> # DISALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://<HOST>:<PORT>/webhdfs/v1/snaptest/?op=DISALLOWSNAPSHOT&user.name=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> Note: GETSNAPSHOTDIFF and GETSNAPSHOTTABLEDIRECTORYLIST are already 
> documented.
> {code:bash}
> # GETSNAPSHOTDIFF uses GET.
> curl 
> "http://<HOST>:<PORT>/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF&user.name=hdfs&oldsnapshotname=snap1&newsnapshotname=snap2"
> Response on success (example):
> HTTP/1.1 200 OK
> Content-Type: application/json
> {"SnapshotDiffReport":{"diffList":[{"sourcePath":"","type":"MODIFY"},{"sourcePath":"newfile.txt","type":"CREATE"}],"fromSnapshot":"snapOld","snapshotRoot":"/snaptest","toSnapshot":"snapNew"}}
> {code}
> {code:bash}
> # GETSNAPSHOTTABLEDIRECTORYLIST uses GET.
> curl 
> "http://<HOST>:<PORT>/webhdfs/v1/snaptest/?op=GETSNAPSHOTTABLEDIRECTORYLIST&user.name=hdfs"
> Response on success (example):
> HTTP/1.1 200 OK
> Content-Type: application/json
> {"SnapshottableDirectoryList":[{"dirStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16392,"group":"supergroup","length":0,"modificationTime":1535151813500,"owner":"hdfs","pathSuffix":"snaptest","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},"parentFullPath":"/","snapshotNumber":2,"snapshotQuota":65536}]}
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13870) WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT API doc

2018-11-29 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13870:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   2.10.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.

Thanks [~smeng] for the contribution and thanks [~jojochuang] for the review.

> WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT API doc
> 
>
> Key: HDFS-13870
> URL: https://issues.apache.org/jira/browse/HDFS-13870
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: documentation, webhdfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
> Fix For: 2.10.0, 3.3.0
>
> Attachments: HDFS-13870.001.patch
>
>
> Adding ALLOWSNAPSHOT and DISALLOWSNAPSHOT (since 2.8.0, HDFS-9057) to WebHDFS 
> REST API 
> [doc|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html].
> Below are my examples of the APIs:
> {code:bash}
> # ALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://<HOST>:<PORT>/webhdfs/v1/snaptest/?op=ALLOWSNAPSHOT&user.name=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> {code:bash}
> # DISALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://<HOST>:<PORT>/webhdfs/v1/snaptest/?op=DISALLOWSNAPSHOT&user.name=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> Note: GETSNAPSHOTDIFF and GETSNAPSHOTTABLEDIRECTORYLIST are already 
> documented.
> {code:bash}
> # GETSNAPSHOTDIFF uses GET.
> curl 
> "http://<HOST>:<PORT>/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF&user.name=hdfs&oldsnapshotname=snap1&newsnapshotname=snap2"
> Response on success (example):
> HTTP/1.1 200 OK
> Content-Type: application/json
> {"SnapshotDiffReport":{"diffList":[{"sourcePath":"","type":"MODIFY"},{"sourcePath":"newfile.txt","type":"CREATE"}],"fromSnapshot":"snapOld","snapshotRoot":"/snaptest","toSnapshot":"snapNew"}}
> {code}
> {code:bash}
> # GETSNAPSHOTTABLEDIRECTORYLIST uses GET.
> curl 
> "http://<HOST>:<PORT>/webhdfs/v1/snaptest/?op=GETSNAPSHOTTABLEDIRECTORYLIST&user.name=hdfs"
> Response on success (example):
> HTTP/1.1 200 OK
> Content-Type: application/json
> {"SnapshottableDirectoryList":[{"dirStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16392,"group":"supergroup","length":0,"modificationTime":1535151813500,"owner":"hdfs","pathSuffix":"snaptest","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},"parentFullPath":"/","snapshotNumber":2,"snapshotQuota":65536}]}
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-870) Avoid creating block sized buffer in ChunkGroupOutputStream

2018-11-29 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704222#comment-16704222
 ] 

Shashikant Banerjee commented on HDDS-870:
--

Patch v3 fixes some checkstyle issues and some unintended changes introduced 
with patch v2.

> Avoid creating block sized buffer in ChunkGroupOutputStream
> ---
>
> Key: HDDS-870
> URL: https://issues.apache.org/jira/browse/HDDS-870
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-870.000.patch, HDDS-870.001.patch, 
> HDDS-870.002.patch, HDDS-870.003.patch
>
>
> Currently, for a key, we create a block-sized ByteBuffer for caching data. 
> This can be replaced with an array of buffers, each of the flush buffer size 
> that is already configured for handling 2-node failures.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-870) Avoid creating block sized buffer in ChunkGroupOutputStream

2018-11-29 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-870:
-
Attachment: HDDS-870.003.patch

> Avoid creating block sized buffer in ChunkGroupOutputStream
> ---
>
> Key: HDDS-870
> URL: https://issues.apache.org/jira/browse/HDDS-870
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-870.000.patch, HDDS-870.001.patch, 
> HDDS-870.002.patch, HDDS-870.003.patch
>
>
> Currently, for a key, we create a block-sized ByteBuffer for caching data. 
> This can be replaced with an array of buffers, each of the flush buffer size 
> that is already configured for handling 2-node failures.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-870) Avoid creating block sized buffer in ChunkGroupOutputStream

2018-11-29 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704220#comment-16704220
 ] 

Shashikant Banerjee edited comment on HDDS-870 at 11/30/18 3:38 AM:


Thanks [~jnp] for the review. 
{code:java}
It seems error prone to create a bufferList in ChunkGroupOutputStream and share 
it in various ChunkOutputStreams within. The two streams may start working on 
same buffer?
{code}
Once one ChunkOutputStream closes, we start writing to the next 
ChunkOutputStream, so there is no possibility of two underlying streams acting 
on the same buffers concurrently.

If an exception is encountered that needs to be handled, the data residing in 
the buffer has to be moved to the next stream in the list, which writes to a 
different block. In such cases the data has to be shared among the underlying 
streams, so it makes more sense to maintain it in ChunkGroupOutputStream 
rather than in each ChunkOutputStream.

The allocation of buffers has been moved to ChunkOutputStream so that buffers 
are allocated only when a write is requested; for an empty key, no buffers are 
allocated at all.


was (Author: shashikant):
Thanks [~jnp] for the review. 
{code:java}
It seems error prone to create a bufferList in ChunkGroupOutputStream and share 
it in various ChunkOutputStreams within. The two streams may start working on 
same buffer?
{code}
Once one ChunkOutputStream closes, we start writing to the next 
ChunkOutputStream, so there is no possibility of two underlying streams acting 
on the same buffers concurrently.

If an exception is encountered that needs to be handled, the data residing in 
the buffer has to be moved to the next stream in the list, which writes to a 
different block. In such cases the data has to be shared among the underlying 
streams, so it makes more sense to maintain it in ChunkGroupOutputStream 
rather than in each ChunkOutputStream.

The allocation of buffers has been moved to ChunkOutputStream so that buffers 
are allocated only when a write is requested; for an empty key, no buffers are 
allocated at all.

> Avoid creating block sized buffer in ChunkGroupOutputStream
> ---
>
> Key: HDDS-870
> URL: https://issues.apache.org/jira/browse/HDDS-870
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-870.000.patch, HDDS-870.001.patch, 
> HDDS-870.002.patch
>
>
> Currently, for a key, we create a block-sized ByteBuffer for caching data. 
> This can be replaced with an array of buffers, each of the flush buffer size 
> that is already configured for handling 2-node failures.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13870) WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT API doc

2018-11-29 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13870:
-
Summary: WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT API doc  
(was: WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT)

> WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT API doc
> 
>
> Key: HDFS-13870
> URL: https://issues.apache.org/jira/browse/HDFS-13870
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: documentation, webhdfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
> Attachments: HDFS-13870.001.patch
>
>
> Adding ALLOWSNAPSHOT and DISALLOWSNAPSHOT (since 2.8.0, HDFS-9057) to WebHDFS 
> REST API 
> [doc|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html].
> Below are my examples of the APIs:
> {code:bash}
> # ALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://<HOST>:<PORT>/webhdfs/v1/snaptest/?op=ALLOWSNAPSHOT&user.name=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> {code:bash}
> # DISALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://<HOST>:<PORT>/webhdfs/v1/snaptest/?op=DISALLOWSNAPSHOT&user.name=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> Note: GETSNAPSHOTDIFF and GETSNAPSHOTTABLEDIRECTORYLIST are already 
> documented.
> {code:bash}
> # GETSNAPSHOTDIFF uses GET.
> curl 
> "http://<HOST>:<PORT>/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF&user.name=hdfs&oldsnapshotname=snap1&newsnapshotname=snap2"
> Response on success (example):
> HTTP/1.1 200 OK
> Content-Type: application/json
> {"SnapshotDiffReport":{"diffList":[{"sourcePath":"","type":"MODIFY"},{"sourcePath":"newfile.txt","type":"CREATE"}],"fromSnapshot":"snapOld","snapshotRoot":"/snaptest","toSnapshot":"snapNew"}}
> {code}
> {code:bash}
> # GETSNAPSHOTTABLEDIRECTORYLIST uses GET.
> curl 
> "http://<HOST>:<PORT>/webhdfs/v1/snaptest/?op=GETSNAPSHOTTABLEDIRECTORYLIST&user.name=hdfs"
> Response on success (example):
> HTTP/1.1 200 OK
> Content-Type: application/json
> {"SnapshottableDirectoryList":[{"dirStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16392,"group":"supergroup","length":0,"modificationTime":1535151813500,"owner":"hdfs","pathSuffix":"snaptest","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},"parentFullPath":"/","snapshotNumber":2,"snapshotQuota":65536}]}
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-870) Avoid creating block sized buffer in ChunkGroupOutputStream

2018-11-29 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704220#comment-16704220
 ] 

Shashikant Banerjee commented on HDDS-870:
--

Thanks [~jnp] for the review. 
{code:java}
It seems error prone to create a bufferList in ChunkGroupOutputStream and share 
it in various ChunkOutputStreams within. The two streams may start working on 
same buffer?
{code}
Once one ChunkOutputStream closes, we start writing to the next 
ChunkOutputStream, so there is no possibility of two underlying streams acting 
on the same buffers concurrently.

If an exception is encountered that needs to be handled, the data residing in 
the buffer has to be moved to the next stream in the list, which writes to a 
different block. In such cases the data has to be shared among the underlying 
streams, so it makes more sense to maintain it in ChunkGroupOutputStream 
rather than in each ChunkOutputStream.

The allocation of buffers has been moved to ChunkOutputStream so that buffers 
are allocated only when a write is requested; for an empty key, no buffers are 
allocated at all.
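
As a rough illustration of this scheme (hypothetical names, not the actual 
patch): the group stream keeps a list of flush-sized buffers and allocates 
each one lazily on the first write that needs it, so an empty key allocates 
nothing:
{code:java}
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

class LazyBufferList {
  private final int flushBufferSize;  // size of each buffer in the list
  private final int maxBuffers;       // blockSize / flushBufferSize

  // Shared by the underlying streams; only one stream writes at a time.
  private final List<ByteBuffer> bufferList = new ArrayList<>();

  LazyBufferList(int flushBufferSize, int maxBuffers) {
    this.flushBufferSize = flushBufferSize;
    this.maxBuffers = maxBuffers;
  }

  /** Returns a buffer with free space, allocating only on demand. */
  ByteBuffer currentBuffer() {
    if (bufferList.isEmpty()
        || !bufferList.get(bufferList.size() - 1).hasRemaining()) {
      if (bufferList.size() >= maxBuffers) {
        throw new IllegalStateException("Block-sized buffer list is full");
      }
      bufferList.add(ByteBuffer.allocate(flushBufferSize));
    }
    return bufferList.get(bufferList.size() - 1);
  }
}
{code}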

> Avoid creating block sized buffer in ChunkGroupOutputStream
> ---
>
> Key: HDDS-870
> URL: https://issues.apache.org/jira/browse/HDDS-870
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-870.000.patch, HDDS-870.001.patch, 
> HDDS-870.002.patch
>
>
> Currently, for a key, we create a block-sized ByteBuffer for caching data. 
> This can be replaced with an array of buffers, each of the flush buffer size 
> that is already configured for handling 2-node failures.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14114) RBF:MIN_ACTIVE_RATIO should be configurable

2018-11-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704211#comment-16704211
 ] 

Hadoop QA commented on HDFS-14114:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
38s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14114 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950110/HDFS-14114.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 83ac45d7d181 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / bad1203 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25676/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25676/testReport/ |
| Max. process+thread count | 1353 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 

[jira] [Commented] (HDFS-13870) WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT

2018-11-29 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704212#comment-16704212
 ] 

Yiqun Lin commented on HDFS-13870:
--

Applied the change locally and it renders well. +1.

Committing this.

> WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT
> 
>
> Key: HDFS-13870
> URL: https://issues.apache.org/jira/browse/HDFS-13870
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: documentation, webhdfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
> Attachments: HDFS-13870.001.patch
>
>
> Adding ALLOWSNAPSHOT and DISALLOWSNAPSHOT (since 2.8.0, HDFS-9057) to WebHDFS 
> REST API 
> [doc|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html].
> Below are my examples of the APIs:
> {code:bash}
> # ALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://<HOST>:<PORT>/webhdfs/v1/snaptest/?op=ALLOWSNAPSHOT&user.name=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> {code:bash}
> # DISALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://<HOST>:<PORT>/webhdfs/v1/snaptest/?op=DISALLOWSNAPSHOT&user.name=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> Note: GETSNAPSHOTDIFF and GETSNAPSHOTTABLEDIRECTORYLIST are already 
> documented.
> {code:bash}
> # GETSNAPSHOTDIFF uses GET.
> curl 
> "http://<HOST>:<PORT>/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF&user.name=hdfs&oldsnapshotname=snap1&newsnapshotname=snap2"
> Response on success (example):
> HTTP/1.1 200 OK
> Content-Type: application/json
> {"SnapshotDiffReport":{"diffList":[{"sourcePath":"","type":"MODIFY"},{"sourcePath":"newfile.txt","type":"CREATE"}],"fromSnapshot":"snapOld","snapshotRoot":"/snaptest","toSnapshot":"snapNew"}}
> {code}
> {code:bash}
> # GETSNAPSHOTTABLEDIRECTORYLIST uses GET.
> curl 
> "http://<HOST>:<PORT>/webhdfs/v1/snaptest/?op=GETSNAPSHOTTABLEDIRECTORYLIST&user.name=hdfs"
> Response on success (example):
> HTTP/1.1 200 OK
> Content-Type: application/json
> {"SnapshottableDirectoryList":[{"dirStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16392,"group":"supergroup","length":0,"modificationTime":1535151813500,"owner":"hdfs","pathSuffix":"snaptest","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},"parentFullPath":"/","snapshotNumber":2,"snapshotQuota":65536}]}
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14084) Need for more stats in DFSClient

2018-11-29 Thread Pranay Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704188#comment-16704188
 ] 

Pranay Singh commented on HDFS-14084:
-

Thanks [~elgoiri], I will check RPCServer.

> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch
>
>
> The usage of HDFS has changed: from being a MapReduce filesystem, it is 
> becoming more like a general-purpose filesystem. In most cases the issues 
> are with the Namenode, so we have metrics to know the workload or stress on 
> the Namenode.
> However, there is a need to collect more statistics for the different 
> operations/RPCs in DFSClient, to know which RPC operations take longer or 
> how frequently each operation is called. These statistics can be exposed to 
> the users of DFSClient, who can periodically log them or apply some sort of 
> flow control if responses are slow. This will also help to isolate HDFS 
> issues in a mixed environment where, say, a node runs Spark, HBase and 
> Impala together. We can check the throughput of different operations across 
> clients and isolate problems caused by a noisy neighbor, network congestion, 
> or a shared JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence as to what caused the problem. If we had metrics or stats 
> in DFSClient we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538 HADOOP-15530 ( client side deadlock)
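
As a rough sketch of the kind of client-side statistics being asked for 
(illustrative names, not the attached patch): a per-operation call count plus 
total latency, cheap enough to update on every RPC:
{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.LongAdder;

class ClientOpStats {
  static final class OpStat {
    final LongAdder count = new LongAdder();
    final LongAdder totalNanos = new LongAdder();
  }

  private final ConcurrentMap<String, OpStat> stats = new ConcurrentHashMap<>();

  /** Record one completed operation, e.g. record("getFileInfo", elapsed). */
  void record(String op, long elapsedNanos) {
    OpStat s = stats.computeIfAbsent(op, k -> new OpStat());
    s.count.increment();
    s.totalNanos.add(elapsedNanos);
  }

  /** Mean latency in milliseconds for one operation, 0 if never seen. */
  double meanMillis(String op) {
    OpStat s = stats.get(op);
    long n = (s == null) ? 0 : s.count.sum();
    return (n == 0) ? 0.0 : s.totalNanos.sum() / 1e6 / n;
  }
}
{code}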



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-858) Start a Standalone Ratis Server on OM

2018-11-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704184#comment-16704184
 ] 

Hadoop QA commented on HDDS-858:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
 0s{color} | {color:green} root: The patch generated 0 new + 0 unchanged - 1 
fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m  8s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 46s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| 

[jira] [Commented] (HDFS-14114) RBF:MIN_ACTIVE_RATIO should be configurable

2018-11-29 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704177#comment-16704177
 ] 

Fei Hui commented on HDFS-14114:


[~elgoiri] Thanks for your comments.
Uploaded the v003 patch: deleted the new lines and added a unit test.

> RBF:MIN_ACTIVE_RATIO should be configurable
> ---
>
> Key: HDFS-14114
> URL: https://issues.apache.org/jira/browse/HDFS-14114
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14114.001.patch, HDFS-14114.002.patch, 
> HDFS-14114.003.patch
>
>
> The following code hardcodes MIN_ACTIVE_RATIO:
> {code:java}
>   if (timeSinceLastActive > connectionCleanupPeriodMs ||
>   active < MIN_ACTIVE_RATIO * total) {
> // Remove and close 1 connection
> List<ConnectionContext> conns = pool.removeConnections(1);
> for (ConnectionContext conn : conns) {
>   conn.close();
> }
> LOG.debug("Removed connection {} used {} seconds ago. " +
> "Pool has {}/{} connections", pool.getConnectionPoolId(),
> TimeUnit.MILLISECONDS.toSeconds(timeSinceLastActive),
> pool.getNumConnections(), pool.getMaxSize());
>   }
> ...
> if (pool.getNumConnections() < pool.getMaxSize() &&
> active >= MIN_ACTIVE_RATIO * total) {
>   ConnectionContext conn = pool.newConnection();
>   pool.addConnection(conn);
> } else {
>   LOG.debug("Cannot add more than {} connections to {}",
>   pool.getMaxSize(), pool);
> }
> {code}
> It affects both connection cleanup and connection creation. It should be 
> configurable so that we can tune it to improve performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14114) RBF:MIN_ACTIVE_RATIO should be configurable

2018-11-29 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-14114:
---
Attachment: HDFS-14114.003.patch

> RBF:MIN_ACTIVE_RATIO should be configurable
> ---
>
> Key: HDFS-14114
> URL: https://issues.apache.org/jira/browse/HDFS-14114
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14114.001.patch, HDFS-14114.002.patch, 
> HDFS-14114.003.patch
>
>
> The following code hardcodes MIN_ACTIVE_RATIO:
> {code:java}
>   if (timeSinceLastActive > connectionCleanupPeriodMs ||
>   active < MIN_ACTIVE_RATIO * total) {
> // Remove and close 1 connection
> List<ConnectionContext> conns = pool.removeConnections(1);
> for (ConnectionContext conn : conns) {
>   conn.close();
> }
> LOG.debug("Removed connection {} used {} seconds ago. " +
> "Pool has {}/{} connections", pool.getConnectionPoolId(),
> TimeUnit.MILLISECONDS.toSeconds(timeSinceLastActive),
> pool.getNumConnections(), pool.getMaxSize());
>   }
> ...
> if (pool.getNumConnections() < pool.getMaxSize() &&
> active >= MIN_ACTIVE_RATIO * total) {
>   ConnectionContext conn = pool.newConnection();
>   pool.addConnection(conn);
> } else {
>   LOG.debug("Cannot add more than {} connections to {}",
>   pool.getMaxSize(), pool);
> }
> {code}
> It affects both connection cleanup and connection creation. It should be 
> configurable so that we can tune it to improve performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14114) RBF:MIN_ACTIVE_RATIO should be configurable

2018-11-29 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-14114:
---
Attachment: (was: HDFS-14114.003.patch)

> RBF:MIN_ACTIVE_RATIO should be configurable
> ---
>
> Key: HDFS-14114
> URL: https://issues.apache.org/jira/browse/HDFS-14114
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14114.001.patch, HDFS-14114.002.patch, 
> HDFS-14114.003.patch
>
>
> The following code hardcodes MIN_ACTIVE_RATIO:
> {code:java}
>   if (timeSinceLastActive > connectionCleanupPeriodMs ||
>   active < MIN_ACTIVE_RATIO * total) {
> // Remove and close 1 connection
> List<ConnectionContext> conns = pool.removeConnections(1);
> for (ConnectionContext conn : conns) {
>   conn.close();
> }
> LOG.debug("Removed connection {} used {} seconds ago. " +
> "Pool has {}/{} connections", pool.getConnectionPoolId(),
> TimeUnit.MILLISECONDS.toSeconds(timeSinceLastActive),
> pool.getNumConnections(), pool.getMaxSize());
>   }
> ...
> if (pool.getNumConnections() < pool.getMaxSize() &&
> active >= MIN_ACTIVE_RATIO * total) {
>   ConnectionContext conn = pool.newConnection();
>   pool.addConnection(conn);
> } else {
>   LOG.debug("Cannot add more than {} connections to {}",
>   pool.getMaxSize(), pool);
> }
> {code}
> It affects both connection cleanup and connection creation. It should be 
> configurable so that we can tune it to improve performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14114) RBF:MIN_ACTIVE_RATIO should be configurable

2018-11-29 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-14114:
---
Attachment: HDFS-14114.003.patch

> RBF:MIN_ACTIVE_RATIO should be configurable
> ---
>
> Key: HDFS-14114
> URL: https://issues.apache.org/jira/browse/HDFS-14114
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14114.001.patch, HDFS-14114.002.patch, 
> HDFS-14114.003.patch
>
>
> The following code hardcodes MIN_ACTIVE_RATIO:
> {code:java}
>   if (timeSinceLastActive > connectionCleanupPeriodMs ||
>   active < MIN_ACTIVE_RATIO * total) {
> // Remove and close 1 connection
> List<ConnectionContext> conns = pool.removeConnections(1);
> for (ConnectionContext conn : conns) {
>   conn.close();
> }
> LOG.debug("Removed connection {} used {} seconds ago. " +
> "Pool has {}/{} connections", pool.getConnectionPoolId(),
> TimeUnit.MILLISECONDS.toSeconds(timeSinceLastActive),
> pool.getNumConnections(), pool.getMaxSize());
>   }
> ...
> if (pool.getNumConnections() < pool.getMaxSize() &&
> active >= MIN_ACTIVE_RATIO * total) {
>   ConnectionContext conn = pool.newConnection();
>   pool.addConnection(conn);
> } else {
>   LOG.debug("Cannot add more than {} connections to {}",
>   pool.getMaxSize(), pool);
> }
> {code}
> It affects both connection cleanup and connection creation. It should be 
> configurable so that we can tune it to improve performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14114) RBF:MIN_ACTIVE_RATIO should be configurable

2018-11-29 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-14114:
---
Attachment: (was: HDFS-14114.003.patch)

> RBF:MIN_ACTIVE_RATIO should be configurable
> ---
>
> Key: HDFS-14114
> URL: https://issues.apache.org/jira/browse/HDFS-14114
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14114.001.patch, HDFS-14114.002.patch
>
>
> The following code hardcodes MIN_ACTIVE_RATIO:
> {code:java}
>   if (timeSinceLastActive > connectionCleanupPeriodMs ||
>   active < MIN_ACTIVE_RATIO * total) {
> // Remove and close 1 connection
> List<ConnectionContext> conns = pool.removeConnections(1);
> for (ConnectionContext conn : conns) {
>   conn.close();
> }
> LOG.debug("Removed connection {} used {} seconds ago. " +
> "Pool has {}/{} connections", pool.getConnectionPoolId(),
> TimeUnit.MILLISECONDS.toSeconds(timeSinceLastActive),
> pool.getNumConnections(), pool.getMaxSize());
>   }
> ...
> if (pool.getNumConnections() < pool.getMaxSize() &&
> active >= MIN_ACTIVE_RATIO * total) {
>   ConnectionContext conn = pool.newConnection();
>   pool.addConnection(conn);
> } else {
>   LOG.debug("Cannot add more than {} connections to {}",
>   pool.getMaxSize(), pool);
> }
> {code}
> It affects both connection cleanup and connection creation. It should be 
> configurable so that we can tune it to improve performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-805) Block token: Client api changes for block token

2018-11-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704166#comment-16704166
 ] 

Hadoop QA commented on HDDS-805:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
27s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
25s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
17s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
27s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
30s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
41s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m  
4s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 13s{color} | {color:orange} root: The patch generated 7 new + 1 unchanged - 
2 fixed = 8 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
3s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 35s{color} 
| {color:red} ozone-manager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}116m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.TestChunkStreams |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce 

[jira] [Updated] (HDFS-14114) RBF:MIN_ACTIVE_RATIO should be configurable

2018-11-29 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-14114:
---
Attachment: HDFS-14114.003.patch

> RBF:MIN_ACTIVE_RATIO should be configurable
> ---
>
> Key: HDFS-14114
> URL: https://issues.apache.org/jira/browse/HDFS-14114
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14114.001.patch, HDFS-14114.002.patch, 
> HDFS-14114.003.patch
>
>
> The following code hardcodes MIN_ACTIVE_RATIO:
> {code:java}
>   if (timeSinceLastActive > connectionCleanupPeriodMs ||
>   active < MIN_ACTIVE_RATIO * total) {
> // Remove and close 1 connection
> List<ConnectionContext> conns = pool.removeConnections(1);
> for (ConnectionContext conn : conns) {
>   conn.close();
> }
> LOG.debug("Removed connection {} used {} seconds ago. " +
> "Pool has {}/{} connections", pool.getConnectionPoolId(),
> TimeUnit.MILLISECONDS.toSeconds(timeSinceLastActive),
> pool.getNumConnections(), pool.getMaxSize());
>   }
> ...
> if (pool.getNumConnections() < pool.getMaxSize() &&
> active >= MIN_ACTIVE_RATIO * total) {
>   ConnectionContext conn = pool.newConnection();
>   pool.addConnection(conn);
> } else {
>   LOG.debug("Cannot add more than {} connections to {}",
>   pool.getMaxSize(), pool);
> }
> {code}
> It affects both connection cleanup and connection creation. It should be 
> configurable so that we can tune it to improve performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14084) Need for more stats in DFSClient

2018-11-29 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704154#comment-16704154
 ] 

Íñigo Goiri edited comment on HDFS-14084 at 11/30/18 2:22 AM:
--

{code}
Íñigo Goiri the issue is that we don't have enough metrics on DFSClient side, 
so it is hard to troubleshoot issues at the client side.
{code}
I got that; I'm just pointing out that we should use some of the code 
abstraction already existing in Hadoop like having proper classes and using 
metric counters. Right now, this is implemented raw while there are helper 
tools already available.

To be more specific, check the implementations of RPCServer.
They don't need to wrap every single function they want to monitor.
There is a framework that captures every call and tracks it.
This should use a similar approach where all calls are monitored without 
having to wrap them one by one.


was (Author: elgoiri):
{code}
Íñigo Goiri the issue is that we don't have enough metrics on DFSClient side, 
so it is hard to troubleshoot issues at the client side.
{code}
I got that; I'm just pointing out that we should use some of the code 
abstraction already existing in Hadoop like having proper classes and using 
metric counters. Right now, this is implemented raw while there are helper 
tools already available.
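
A minimal sketch of that style, using plain JDK dynamic proxies for 
illustration (the real RPC server machinery differs): one interceptor times 
every call on an interface, so no method needs to be wrapped by hand:
{code:java}
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Proxy;
import java.util.function.BiConsumer;

final class TimingProxy {
  /** Wraps any interface so each call reports (methodName, elapsedNanos). */
  @SuppressWarnings("unchecked")
  static <T> T wrap(Class<T> iface, T delegate,
      BiConsumer<String, Long> recorder) {
    return (T) Proxy.newProxyInstance(
        iface.getClassLoader(), new Class<?>[] {iface},
        (proxy, method, args) -> {
          long start = System.nanoTime();
          try {
            return method.invoke(delegate, args);
          } catch (InvocationTargetException e) {
            throw e.getCause();  // surface the delegate's real exception
          } finally {
            recorder.accept(method.getName(), System.nanoTime() - start);
          }
        });
  }
}
{code}
Wrapping a client interface once, e.g. {{TimingProxy.wrap(SomeClientProtocol.class, 
impl, (op, nanos) -> ...)}}, then monitors every call without per-method 
boilerplate.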

> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch
>
>
> The usage of HDFS has changed: from being a MapReduce filesystem, it is 
> becoming more like a general-purpose filesystem. In most cases the issues 
> are with the Namenode, so we have metrics to know the workload or stress on 
> the Namenode.
> However, there is a need to collect more statistics for the different 
> operations/RPCs in DFSClient, to know which RPC operations take longer or 
> how frequently each operation is called. These statistics can be exposed to 
> the users of DFSClient, who can periodically log them or apply some sort of 
> flow control if responses are slow. This will also help to isolate HDFS 
> issues in a mixed environment where, say, a node runs Spark, HBase and 
> Impala together. We can check the throughput of different operations across 
> clients and isolate problems caused by a noisy neighbor, network congestion, 
> or a shared JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence as to what caused the problem. If we had metrics or stats 
> in DFSClient we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538 HADOOP-15530 ( client side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14084) Need for more stats in DFSClient

2018-11-29 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704154#comment-16704154
 ] 

Íñigo Goiri commented on HDFS-14084:


{code}
Íñigo Goiri the issue is that we don't have enough metrics on DFSClient side, 
so it is hard to troubleshoot issues at the client side.
{code}
I got that; I'm just pointing out that we should use some of the code 
abstraction already existing in Hadoop like having proper classes and using 
metric counters. Right now, this is implemented raw while there are helper 
tools already available.

> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch
>
>
> The usage of HDFS has changed: from being a map-reduce filesystem, it is 
> becoming more of a general-purpose filesystem. In most cases the issues are 
> with the Namenode, so we have metrics to track the workload and stress on 
> the Namenode.
> However, we need more statistics collected for the different operations/RPCs 
> in DFSClient, to know which RPC operations are taking a long time and how 
> frequent each operation is. These statistics can be exposed to the users of 
> DFSClient, who can periodically log them or do some sort of flow control if 
> the response is slow. This will also help to isolate HDFS issues in a mixed 
> environment where, say, a node runs Spark, HBase and Impala together. We can 
> check the throughput of different operations across clients and isolate 
> problems caused by a noisy neighbor, network congestion or a shared JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence as to what caused the problem. If we had metrics or stats 
> in DFSClient we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538, HADOOP-15530 (client-side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9572) Prevent DataNode log spam if a client connects on the data transfer port but sends no data.

2018-11-29 Thread Tong Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-9572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704145#comment-16704145
 ] 

Tong Wu commented on HDFS-9572:
---

[~cnauroth] Is it possible that {{EOFException}} or ClosedChannelException is 
thrown because of a real problem, such as a connection being disconnected? 
Thank you in advance.

> Prevent DataNode log spam if a client connects on the data transfer port but 
> sends no data.
> ---
>
> Key: HDFS-9572
> URL: https://issues.apache.org/jira/browse/HDFS-9572
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Major
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-9572.001.patch
>
>
> Monitoring tools may choose to check liveness of the DataNode's data transfer 
> port by connecting to it.  The monitoring tool will close the connection 
> immediately after establishment without sending any data.  When this happens, 
> the DataNode encounters an unexpected EOF and logs a full stack trace.  This 
> creates unneeded noise in the logs.
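
For illustration (not from the patch), such a probe is just a connect followed 
by an immediate close; the host, port and timeout below are placeholders:
{code}
import java.net.InetSocketAddress;
import java.net.Socket;

public class DataXferProbe {
  public static void main(String[] args) throws Exception {
    // Connect to the data transfer port and close without writing a byte;
    // the DataNode sees an immediate EOF on the new connection.
    try (Socket probe = new Socket()) {
      probe.connect(new InetSocketAddress("datanode.example.com", 9866), 2000);
    }
  }
}
{code}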



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14084) Need for more stats in DFSClient

2018-11-29 Thread Pranay Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704140#comment-16704140
 ] 

Pranay Singh commented on HDFS-14084:
-

[~elgoiri] the issue is that we don't have enough metrics on DFSClient side, so 
it is hard to troubleshoot issues at the client side.
 

> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch
>
>
> The usage of HDFS has changed: from being a map-reduce filesystem, it is 
> becoming more of a general-purpose filesystem. In most cases the issues are 
> with the Namenode, so we have metrics to track the workload and stress on 
> the Namenode.
> However, we need more statistics collected for the different operations/RPCs 
> in DFSClient, to know which RPC operations are taking a long time and how 
> frequent each operation is. These statistics can be exposed to the users of 
> DFSClient, who can periodically log them or do some sort of flow control if 
> the response is slow. This will also help to isolate HDFS issues in a mixed 
> environment where, say, a node runs Spark, HBase and Impala together. We can 
> check the throughput of different operations across clients and isolate 
> problems caused by a noisy neighbor, network congestion or a shared JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence as to what caused the problem. If we had metrics or stats 
> in DFSClient we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538, HADOOP-15530 (client-side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14084) Need for more stats in DFSClient

2018-11-29 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704126#comment-16704126
 ] 

Íñigo Goiri commented on HDFS-14084:


I just realized that  [^HDFS-14084.001.patch] was attached.
I think this should use the metrics infrastructure already available in Hadoop.
The RPC server, for example, collects most of these parameters in a generic 
way and provides histograms, etc.
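As a sketch of reusing that infrastructure (the metric names here are made 
up), metrics2's {{MutableQuantiles}} gives the same style of histograms the 
RPC server publishes:
{code}
import org.apache.hadoop.metrics2.lib.MetricsRegistry;
import org.apache.hadoop.metrics2.lib.MutableQuantiles;

// Sketch: a 60-second rolling latency histogram (p50/p75/p90/p95/p99),
// the same mechanism the RPC server uses for its per-call quantiles.
MetricsRegistry registry = new MetricsRegistry("DfsClientOps");
MutableQuantiles readLatency = registry.newQuantiles(
    "readOpLatency", "Latency of read operations", "ops", "latencyMicros", 60);
long elapsedMicros = 1234; // example sample
readLatency.add(elapsedMicros);
{code}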

> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch
>
>
> The usage of HDFS has changed: from being a map-reduce filesystem, it is 
> becoming more of a general-purpose filesystem. In most cases the issues are 
> with the Namenode, so we have metrics to track the workload and stress on 
> the Namenode.
> However, we need more statistics collected for the different operations/RPCs 
> in DFSClient, to know which RPC operations are taking a long time and how 
> frequent each operation is. These statistics can be exposed to the users of 
> DFSClient, who can periodically log them or do some sort of flow control if 
> the response is slow. This will also help to isolate HDFS issues in a mixed 
> environment where, say, a node runs Spark, HBase and Impala together. We can 
> check the throughput of different operations across clients and isolate 
> problems caused by a noisy neighbor, network congestion or a shared JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence as to what caused the problem. If we had metrics or stats 
> in DFSClient we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538, HADOOP-15530 (client-side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-805) Block token: Client api changes for block token

2018-11-29 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-805:

Status: Patch Available  (was: Open)

> Block token: Client api changes for block token
> ---
>
> Key: HDDS-805
> URL: https://issues.apache.org/jira/browse/HDDS-805
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-805-HDDS-4.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-805) Block token: Client api changes for block token

2018-11-29 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-805:

Attachment: HDDS-805-HDDS-4.00.patch

> Block token: Client api changes for block token
> ---
>
> Key: HDDS-805
> URL: https://issues.apache.org/jira/browse/HDDS-805
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-805-HDDS-4.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-858) Start a Standalone Ratis Server on OM

2018-11-29 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704062#comment-16704062
 ] 

Hanisha Koneru commented on HDDS-858:
-

Thank you [~anu] for the review.
Fixed a test failure in TestOzoneManagerRatisServer. Other test failures are 
not related to this patch.

> Start a Standalone Ratis Server on OM
> -
>
> Key: HDDS-858
> URL: https://issues.apache.org/jira/browse/HDDS-858
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-858.002.patch, HDDS-858.003.patch, 
> HDDS_858.001.patch
>
>
> We propose implementing a standalone Ratis server on OM as a start. Once the 
> Ratis server and state machine are integrated into OM, the replicated Ratis 
> state machine can be implemented for OM.
> This Jira aims only to start a Ratis server when the OM starts. The client-OM 
> communication and OM state are not changed by this Jira.
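
For orientation only, a bare-bones standalone Ratis server could look roughly 
like this (assuming the Ratis builder API; the peer id, address and no-op 
state machine are illustrative, not the OM patch):
{code}
import org.apache.ratis.conf.RaftProperties;
import org.apache.ratis.protocol.RaftGroup;
import org.apache.ratis.protocol.RaftGroupId;
import org.apache.ratis.protocol.RaftPeer;
import org.apache.ratis.protocol.RaftPeerId;
import org.apache.ratis.server.RaftServer;
import org.apache.ratis.statemachine.impl.BaseStateMachine;

// Sketch: a single-peer (standalone) Ratis server with a no-op state machine.
RaftPeer peer = new RaftPeer(RaftPeerId.valueOf("om1"), "127.0.0.1:9872");
RaftGroup group = RaftGroup.valueOf(RaftGroupId.randomId(), peer);
RaftServer server = RaftServer.newBuilder()
    .setServerId(peer.getId())
    .setGroup(group)
    .setStateMachine(new BaseStateMachine())
    .setProperties(new RaftProperties())
    .build();
server.start();
{code}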



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-858) Start a Standalone Ratis Server on OM

2018-11-29 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-858:

Attachment: HDDS-858.003.patch

> Start a Standalone Ratis Server on OM
> -
>
> Key: HDDS-858
> URL: https://issues.apache.org/jira/browse/HDDS-858
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-858.002.patch, HDDS-858.003.patch, 
> HDDS_858.001.patch
>
>
> We propose implementing a standalone Ratis server on OM as a start. Once the 
> Ratis server and state machine are integrated into OM, the replicated Ratis 
> state machine can be implemented for OM.
> This Jira aims only to start a Ratis server when the OM starts. The client-OM 
> communication and OM state are not changed by this Jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-885) Fix test failures due to ChecksumData

2018-11-29 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-885:

Attachment: HDDS-885.001.patch

> Fix test failures due to ChecksumData
> -
>
> Key: HDDS-885
> URL: https://issues.apache.org/jira/browse/HDDS-885
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-885.001.patch
>
>
> HDDS-284 introduced test failures in the following:
>  # TestHddsDispatcher
>  # TestOzoneConfigurationFields
>  # TestBlockDeletingService
>  # TestBlockData
>  # TestContainerSmallFile
> This Jira aims to fix these.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-885) Fix test failures due to ChecksumData

2018-11-29 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-885:

Description: 
HDDS-284 introduced test failures in the following:
 # TestHddsDispatcher
 # TestOzoneConfigurationFields
 # TestBlockDeletingService
 # TestBlockData
 # TestContainerSmallFile

This Jira aims to fix these.

  was:
HDDS-284 introduced test failures in the following:

# TestHddsDispatcher
# TestHddsDispatcher
# TestOzoneConfigurationFields
# TestBlockDeletingService
# TestBlockData
# TestContainerSmallFile

This Jira aims to fix these.


> Fix test failures due to ChecksumData
> -
>
> Key: HDDS-885
> URL: https://issues.apache.org/jira/browse/HDDS-885
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>
> HDDS-284 introduced test failures in the following:
>  # TestHddsDispatcher
>  # TestOzoneConfigurationFields
>  # TestBlockDeletingService
>  # TestBlockData
>  # TestContainerSmallFile
> This Jira aims to fix these.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14084) Need for more stats in DFSClient

2018-11-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704045#comment-16704045
 ] 

Hadoop QA commented on HDFS-14084:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The 
patch generated 21 new + 45 unchanged - 0 fixed = 66 total (was 45) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
44s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 2 new 
+ 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
40s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Should org.apache.hadoop.hdfs.DFSClient$NamenodeRpcStat be a _static_ 
inner class?  At DFSClient.java:[lines 210-287] |
|  |  Unread field; should the field be static?  At DFSClient.java:[line 214] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14084 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950088/HDFS-14084.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d2177145276d 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0081b02 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | 

[jira] [Created] (HDDS-885) Fix test failures due to ChecksumData

2018-11-29 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-885:
---

 Summary: Fix test failures due to ChecksumData
 Key: HDDS-885
 URL: https://issues.apache.org/jira/browse/HDDS-885
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


HDDS-284 introduced test failures in the following:

# TestHddsDispatcher
# TestHddsDispatcher
# TestOzoneConfigurationFields
# TestBlockDeletingService
# TestBlockData
# TestContainerSmallFile

This Jira aims to fix these.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-880) Create api for ACL handling in Ozone

2018-11-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704005#comment-16704005
 ] 

Hadoop QA commented on HDDS-880:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
3s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-880 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950066/HDDS-880.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 00e1416d3b89 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ae5fbdd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 

[jira] [Commented] (HDFS-14084) Need for more stats in DFSClient

2018-11-29 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704002#comment-16704002
 ] 

Íñigo Goiri commented on HDFS-14084:


Internally, we have what we described in HDFS-12861.
This is a fair starting point for investigating what's going on.
We also extended HTrace to collect resource utilization, but that goes in a 
different direction.
In addition to throughput (as shown in HDFS-12861), what other metrics are you 
interested in?
DFSClient has plenty of metrics, but they are not readily available; the 
easiest way to get them is to dump them to a log, which is not the most 
convenient.
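For instance, a sketch of dumping what already exists through the per-FS 
{{StorageStatistics}} API (assuming that API is wired up for the client in 
question):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.StorageStatistics;

// Sketch: periodically log the client-side counters that already exist.
FileSystem fs = FileSystem.get(new Configuration());
StorageStatistics stats = fs.getStorageStatistics();
stats.getLongStatistics().forEachRemaining(
    s -> System.out.println(s.getName() + " = " + s.getValue()));
{code}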

> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch
>
>
> The usage of HDFS has changed: from being a map-reduce filesystem, it is 
> becoming more of a general-purpose filesystem. In most cases the issues are 
> with the Namenode, so we have metrics to track the workload and stress on 
> the Namenode.
> However, we need more statistics collected for the different operations/RPCs 
> in DFSClient, to know which RPC operations are taking a long time and how 
> frequent each operation is. These statistics can be exposed to the users of 
> DFSClient, who can periodically log them or do some sort of flow control if 
> the response is slow. This will also help to isolate HDFS issues in a mixed 
> environment where, say, a node runs Spark, HBase and Impala together. We can 
> check the throughput of different operations across clients and isolate 
> problems caused by a noisy neighbor, network congestion or a shared JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence as to what caused the problem. If we had metrics or stats 
> in DFSClient we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538, HADOOP-15530 (client-side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-14084) Need for more stats in DFSClient

2018-11-29 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh resolved HDFS-14084.
-
Resolution: Fixed

> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch
>
>
> The usage of HDFS has changed: from being a map-reduce filesystem, it is 
> becoming more of a general-purpose filesystem. In most cases the issues are 
> with the Namenode, so we have metrics to track the workload and stress on 
> the Namenode.
> However, we need more statistics collected for the different operations/RPCs 
> in DFSClient, to know which RPC operations are taking a long time and how 
> frequent each operation is. These statistics can be exposed to the users of 
> DFSClient, who can periodically log them or do some sort of flow control if 
> the response is slow. This will also help to isolate HDFS issues in a mixed 
> environment where, say, a node runs Spark, HBase and Impala together. We can 
> check the throughput of different operations across clients and isolate 
> problems caused by a noisy neighbor, network congestion or a shared JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence as to what caused the problem. If we had metrics or stats 
> in DFSClient we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538, HADOOP-15530 (client-side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-14084) Need for more stats in DFSClient

2018-11-29 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-14084 started by Pranay Singh.
---
> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch
>
>
> The usage of HDFS has changed: from being a map-reduce filesystem, it is 
> becoming more of a general-purpose filesystem. In most cases the issues are 
> with the Namenode, so we have metrics to track the workload and stress on 
> the Namenode.
> However, we need more statistics collected for the different operations/RPCs 
> in DFSClient, to know which RPC operations are taking a long time and how 
> frequent each operation is. These statistics can be exposed to the users of 
> DFSClient, who can periodically log them or do some sort of flow control if 
> the response is slow. This will also help to isolate HDFS issues in a mixed 
> environment where, say, a node runs Spark, HBase and Impala together. We can 
> check the throughput of different operations across clients and isolate 
> problems caused by a noisy neighbor, network congestion or a shared JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence as to what caused the problem. If we had metrics or stats 
> in DFSClient we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538, HADOOP-15530 (client-side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-884) Fix merge issue that causes NPE OzoneManager#httpServer

2018-11-29 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-884:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~anu] for the review. Committed to the feature branch. Glad to see the 
branch green again on Jenkins.

> Fix merge issue that causes NPE OzoneManager#httpServer
> ---
>
> Key: HDDS-884
> URL: https://issues.apache.org/jira/browse/HDDS-884
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.3.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Attachments: HDDS-884-HDDS-4.001.patch
>
>
> One line that instantiates httpServer is missing due to the code movement 
> in HDDS-4.
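
Presumably the lost line had roughly this shape (an assumption for 
illustration, not a quote of the patch):
{code}
// Assumed shape of the missing initialization in OzoneManager:
httpServer = new OzoneManagerHttpServer(configuration, this);
{code}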



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-14084) Need for more stats in DFSClient

2018-11-29 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh reopened HDFS-14084:
-

> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch
>
>
> The usage of HDFS has changed: from being a map-reduce filesystem, it is 
> becoming more of a general-purpose filesystem. In most cases the issues are 
> with the Namenode, so we have metrics to track the workload and stress on 
> the Namenode.
> However, we need more statistics collected for the different operations/RPCs 
> in DFSClient, to know which RPC operations are taking a long time and how 
> frequent each operation is. These statistics can be exposed to the users of 
> DFSClient, who can periodically log them or do some sort of flow control if 
> the response is slow. This will also help to isolate HDFS issues in a mixed 
> environment where, say, a node runs Spark, HBase and Impala together. We can 
> check the throughput of different operations across clients and isolate 
> problems caused by a noisy neighbor, network congestion or a shared JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence as to what caused the problem. If we had metrics or stats 
> in DFSClient we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538, HADOOP-15530 (client-side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14084) Need for more stats in DFSClient

2018-11-29 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-14084:

Status: Patch Available  (was: In Progress)

> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch
>
>
> The usage of HDFS has changed: from being a map-reduce filesystem, it is 
> becoming more of a general-purpose filesystem. In most cases the issues are 
> with the Namenode, so we have metrics to track the workload and stress on 
> the Namenode.
> However, we need more statistics collected for the different operations/RPCs 
> in DFSClient, to know which RPC operations are taking a long time and how 
> frequent each operation is. These statistics can be exposed to the users of 
> DFSClient, who can periodically log them or do some sort of flow control if 
> the response is slow. This will also help to isolate HDFS issues in a mixed 
> environment where, say, a node runs Spark, HBase and Impala together. We can 
> check the throughput of different operations across clients and isolate 
> problems caused by a noisy neighbor, network congestion or a shared JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence as to what caused the problem. If we had metrics or stats 
> in DFSClient we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538, HADOOP-15530 (client-side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14084) Need for more stats in DFSClient

2018-11-29 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-14084:

Attachment: HDFS-14084.001.patch

> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch
>
>
> The usage of HDFS has changed: from being a map-reduce filesystem, it is 
> becoming more of a general-purpose filesystem. In most cases the issues are 
> with the Namenode, so we have metrics to track the workload and stress on 
> the Namenode.
> However, we need more statistics collected for the different operations/RPCs 
> in DFSClient, to know which RPC operations are taking a long time and how 
> frequent each operation is. These statistics can be exposed to the users of 
> DFSClient, who can periodically log them or do some sort of flow control if 
> the response is slow. This will also help to isolate HDFS issues in a mixed 
> environment where, say, a node runs Spark, HBase and Impala together. We can 
> check the throughput of different operations across clients and isolate 
> problems caused by a noisy neighbor, network congestion or a shared JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence as to what caused the problem. If we had metrics or stats 
> in DFSClient we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538, HADOOP-15530 (client-side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-884) Fix merge issue that causes NPE OzoneManager#httpServer

2018-11-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703931#comment-16703931
 ] 

Hadoop QA commented on HDDS-884:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
57s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-884 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950078/HDDS-884-HDDS-4.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1fe1661c913f 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-4 / 187bbbe |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1842/testReport/ |
| Max. process+thread count | 307 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1842/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix merge issue that causes NPE 

[jira] [Commented] (HDFS-14116) Fix a potential class cast error in ObserverReadProxyProvider

2018-11-29 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703930#comment-16703930
 ] 

Konstantin Shvachko commented on HDFS-14116:


Looks like {{NNThroughputBenchmark}} calls the wrong variant of {{createProxy()}}.
{{DFSTestUtil.getRefreshUserMappingsProtocolProxy()}} is only used in 
{{NNThroughputBenchmark}} (introduced by HDFS-7847).
We should be able to replace it with a more generic variant that is smart 
about proxy factories, like {{createNonHAProxy()}}?
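
If that pans out, the call could look something like this (a sketch; 
{{config}} and {{nnAddr}} are placeholders for the benchmark's existing 
variables):
{code}
// Sketch only: use the generic non-HA proxy path instead of the
// test-only DFSTestUtil helper.
RefreshUserMappingsProtocol refreshProto =
    NameNodeProxies.createNonHAProxy(config, nnAddr,
        RefreshUserMappingsProtocol.class,
        UserGroupInformation.getCurrentUser(), false).getProxy();
{code}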

> Fix a potential class cast error in ObserverReadProxyProvider
> -
>
> Key: HDFS-14116
> URL: https://issues.apache.org/jira/browse/HDFS-14116
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Chen Liang
>Priority: Major
>
> Currently in the {{ObserverReadProxyProvider}} constructor there is this line:
> {code}
> ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
> {code}
> This could potentially cause a failure, because the factory cannot always be 
> cast here. Specifically, the constructor is called from 
> {{NameNodeProxiesClient.createFailoverProxyProvider}}, and there are two 
> paths that can reach it:
> (1) {{NameNodeProxies.createProxy}}
> (2) {{NameNodeProxiesClient.createFailoverProxyProvider}}
> (2) works fine because it always uses {{ClientHAProxyFactory}}, but (1) uses 
> {{NameNodeHAProxyFactory}}, which cannot be cast to 
> {{ClientHAProxyFactory}}; this happens when, for example, running 
> NNThroughputBenchmark. To fix this we can at least:
> 1. introduce setAlignmentContext in HAProxyFactory, which is the parent of 
> both ClientHAProxyFactory and NameNodeHAProxyFactory, OR
> 2. only call setAlignmentContext when the factory is a ClientHAProxyFactory 
> by, say, having an if check with reflection.
> Which option makes sense depends on whether an alignment context is needed 
> for the code paths in case (1).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-858) Start a Standalone Ratis Server on OM

2018-11-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703923#comment-16703923
 ] 

Hadoop QA commented on HDDS-858:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
43s{color} | {color:green} root: The patch generated 0 new + 0 unchanged - 1 
fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
9s{color} | {color:green} common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m  2s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 40s{color} 
| {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 37s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| 

[jira] [Commented] (HDFS-14112) Avoid recursive call to external authorizer for getContentSummary.

2018-11-29 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703889#comment-16703889
 ] 

Hudson commented on HDFS-14112:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15532 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15532/])
HDFS-14112. Avoid recursive call to external authorizer for (szetszwo: rev 
0081b02e35306cb757c63d0f11a536941d73a139)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java


> Avoid recursive call to external authorizer for getContentSummary.
> --
>
> Key: HDFS-14112
> URL: https://issues.apache.org/jira/browse/HDFS-14112
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Jitendra Nath Pandey
>Assignee: Tsz Wo Nicholas Sze
>Priority: Critical
> Fix For: 3.2.1
>
> Attachments: h14112_20181128.patch, h14112_20181129.patch
>
>
> HDFS-12130 optimizes the permission check and invokes the permission checker 
> recursively for each component of the tree, which works well for the 
> FSPermission checker.
> But for certain external authorizers it may be more efficient to make one 
> call with {{subaccess}}, because they often don't have to evaluate each 
> and every component of the path.
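
Schematically (an illustrative-only interface, not the real 
{{INodeAttributeProvider.AccessControlEnforcer}} API), the difference is one 
external call versus one per path component:
{code}
import org.apache.hadoop.fs.permission.FsAction;

// Illustrative-only authorizer interface.
interface Authorizer {
  void check(String path, FsAction access);           // one call per component
  void checkSubtree(String path, FsAction subAccess); // one call, whole subtree
}

class SummaryCheckSketch {
  // Per-component walk: one external call for every directory visited.
  static void checkRecursive(String path, java.util.List<String> subDirs,
      Authorizer auth) {
    auth.check(path, FsAction.READ_EXECUTE);
    for (String dir : subDirs) {
      auth.check(path + "/" + dir, FsAction.READ_EXECUTE);
    }
  }

  // What this change allows external authorizers to do instead.
  static void checkOnce(String path, Authorizer auth) {
    auth.checkSubtree(path, FsAction.READ_EXECUTE);
  }
}
{code}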



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14112) Avoid recursive call to external authorizer for getContentSummary.

2018-11-29 Thread Tsz Wo Nicholas Sze (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-14112:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.1
   Status: Resolved  (was: Patch Available)

Thanks [~jnp] for reviewing the patch.

I have committed this.

> Avoid recursive call to external authorizer for getContentSummary.
> --
>
> Key: HDFS-14112
> URL: https://issues.apache.org/jira/browse/HDFS-14112
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Jitendra Nath Pandey
>Assignee: Tsz Wo Nicholas Sze
>Priority: Critical
> Fix For: 3.2.1
>
> Attachments: h14112_20181128.patch, h14112_20181129.patch
>
>
> HDFS-12130 optimizes the permission check and invokes the permission checker 
> recursively for each component of the tree, which works well for the 
> FSPermission checker.
> But for certain external authorizers it may be more efficient to make one 
> call with {{subaccess}}, because they often don't have to evaluate each 
> and every component of the path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-13547) Add ingress port based sasl resolver

2018-11-29 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang resolved HDFS-13547.
---
   Resolution: Fixed
Fix Version/s: 3.1.1

> Add ingress port based sasl resolver
> 
>
> Key: HDFS-13547
> URL: https://issues.apache.org/jira/browse/HDFS-13547
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HDFS-13547.001.patch, HDFS-13547.002.patch, 
> HDFS-13547.003.patch, HDFS-13547.004.patch
>
>
> This Jira extends the SASL properties resolver interface to take an ingress 
> port parameter, and also adds an implementation based on this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14116) Fix a potential class cast error in ObserverReadProxyProvider

2018-11-29 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703877#comment-16703877
 ] 

Chen Liang commented on HDFS-14116:
---

Thanks [~csun]! Yeah, I can take another look later. Posting the error stack 
trace from running NNThroughputBenchmark here for the record:
{code}
Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hdfs.server.namenode.ha.NameNodeHAProxyFactory cannot be cast 
to org.apache.hadoop.hdfs.server.namenode.ha.ClientHAProxyFactory
at 
org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.(ObserverReadProxyProvider.java:118)
at 
org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProviderWithIPFailover.(ObserverReadProxyProviderWithIPFailover.java:99)
at 
org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProviderWithIPFailover.(ObserverReadProxyProviderWithIPFailover.java:86)
... 12 more
{code}

{code}
Exception in thread "main" java.io.IOException: Couldn't create proxy provider 
class 
org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProviderWithIPFailover
at 
org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:261)
at 
org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:115)
at 
org.apache.hadoop.hdfs.DFSTestUtil.getRefreshUserMappingsProtocolProxy(DFSTestUtil.java:2022)
at 
org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.run(NNThroughputBenchmark.java:1524)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at 
org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.runBenchmark(NNThroughputBenchmark.java:1432)
at 
org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.main(NNThroughputBenchmark.java:1552)
{code}

> Fix a potential class cast error in ObserverReadProxyProvider
> -
>
> Key: HDFS-14116
> URL: https://issues.apache.org/jira/browse/HDFS-14116
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Chen Liang
>Priority: Major
>
> Currently in the {{ObserverReadProxyProvider}} constructor there is this line:
> {code}
> ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
> {code}
> This could potentially cause a failure, because the factory cannot always be 
> cast here. Specifically, the constructor is called from 
> {{NameNodeProxiesClient.createFailoverProxyProvider}}, and two paths can call 
> into it:
> (1) {{NameNodeProxies.createProxy}}
> (2) {{NameNodeProxiesClient.createFailoverProxyProvider}}
> (2) works fine because it always uses {{ClientHAProxyFactory}}, but (1) uses 
> {{NameNodeHAProxyFactory}}, which cannot be cast to {{ClientHAProxyFactory}}; 
> this happens when, for example, running NNThroughputBenchmark. To fix this we 
> can at least:
> 1. introduce setAlignmentContext in HAProxyFactory, the parent of both 
> ClientHAProxyFactory and NameNodeHAProxyFactory, OR
> 2. only call setAlignmentContext when the factory is a ClientHAProxyFactory, 
> by, say, adding an instanceof check before the cast.
> The choice depends on whether it makes sense to have an alignment context for 
> the code paths in case (1).
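
A minimal sketch of fix option 2 (illustrative, not the committed change):

{code:java}
// Guard the cast: only wire in the alignment context when the factory
// actually supports it; NameNodeHAProxyFactory (used by, e.g.,
// NNThroughputBenchmark) is simply skipped.
if (factory instanceof ClientHAProxyFactory) {
  ((ClientHAProxyFactory<T>) factory).setAlignmentContext(alignmentContext);
}
{code}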



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14112) Avoid recursive call to external authorizer for getContentSummary.

2018-11-29 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703871#comment-16703871
 ] 

Jitendra Nath Pandey commented on HDFS-14112:
-

+1 for the latest patch.

> Avoid recursive call to external authorizer for getContentSummary.
> --
>
> Key: HDFS-14112
> URL: https://issues.apache.org/jira/browse/HDFS-14112
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Jitendra Nath Pandey
>Assignee: Tsz Wo Nicholas Sze
>Priority: Critical
> Attachments: h14112_20181128.patch, h14112_20181129.patch
>
>
> HDFS-12130 optimizes the permission check and invokes the permission checker 
> recursively for each component of the tree, which works well for the 
> FSPermissionChecker.
> But for certain external authorizers it may be more efficient to make one 
> call with {{subaccess}}, because they often don't have to evaluate each 
> and every component of the path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-884) Fix merge issue that causes NPE OzoneManager#httpServer

2018-11-29 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703866#comment-16703866
 ] 

Anu Engineer commented on HDDS-884:
---

+1, pending Jenkins. Thanks for catching and fixing this.

> Fix merge issue that causes NPE OzoneManager#httpServer
> ---
>
> Key: HDDS-884
> URL: https://issues.apache.org/jira/browse/HDDS-884
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.3.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Attachments: HDDS-884-HDDS-4.001.patch
>
>
> The line that instantiates httpServer was dropped during the code movement 
> on HDDS-4.
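
Schematically, the missing one-liner is of this form (class and field names are assumptions, not taken from the patch):

{code:java}
// Without this assignment httpServer remains null, and any later call such
// as httpServer.start() fails with the NPE this issue describes.
httpServer = new OzoneManagerHttpServer(configuration, this);
{code}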



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-884) Fix merge issue that causes NPE OzoneManager#httpServer

2018-11-29 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-884:
--
Affects Version/s: 0.3.0
 Target Version/s: 0.4.0

> Fix merge issue that causes NPE OzoneManager#httpServer
> ---
>
> Key: HDDS-884
> URL: https://issues.apache.org/jira/browse/HDDS-884
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.3.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Attachments: HDDS-884-HDDS-4.001.patch
>
>
> The line that instantiates httpServer was dropped during the code movement 
> on HDDS-4.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-880) Create api for ACL handling in Ozone

2018-11-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703863#comment-16703863
 ] 

Hadoop QA commented on HDDS-880:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-880 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950066/HDDS-880.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux dd23b9904330 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ae5fbdd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 

[jira] [Updated] (HDDS-884) Fix merge issue that causes NPE OzoneManager#httpServer

2018-11-29 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-884:

Status: Patch Available  (was: Open)

> Fix merge issue that causes NPE OzoneManager#httpServer
> ---
>
> Key: HDDS-884
> URL: https://issues.apache.org/jira/browse/HDDS-884
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Attachments: HDDS-884-HDDS-4.001.patch
>
>
> The line that instantiates httpServer was dropped during the code movement 
> on HDDS-4.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-884) Fix merge issue that causes NPE OzoneManager#httpServer

2018-11-29 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-884:

Attachment: HDDS-884-HDDS-4.001.patch

> Fix merge issue that causes NPE OzoneManager#httpServer
> ---
>
> Key: HDDS-884
> URL: https://issues.apache.org/jira/browse/HDDS-884
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Attachments: HDDS-884-HDDS-4.001.patch
>
>
> The line that instantiates httpServer was dropped during the code movement 
> on HDDS-4.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-884) Fix merge issue that causes NPE OzoneManager#httpServer

2018-11-29 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-884:
---

 Summary: Fix merge issue that causes NPE OzoneManager#httpServer
 Key: HDDS-884
 URL: https://issues.apache.org/jira/browse/HDDS-884
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


The line that instantiates httpServer was dropped during the code movement on 
HDDS-4.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-880) Create api for ACL handling in Ozone

2018-11-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703849#comment-16703849
 ] 

Hadoop QA commented on HDDS-880:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 45m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 43s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 41s{color} 
| {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}136m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-880 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950061/HDDS-880.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 39fe930dd44b 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f534736 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Commented] (HDFS-13547) Add ingress port based sasl resolver

2018-11-29 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703845#comment-16703845
 ] 

Chen Liang commented on HDFS-13547:
---

Committed v004 patch to branch-3 and branch-3.1.1.

> Add ingress port based sasl resolver
> 
>
> Key: HDFS-13547
> URL: https://issues.apache.org/jira/browse/HDFS-13547
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13547.001.patch, HDFS-13547.002.patch, 
> HDFS-13547.003.patch, HDFS-13547.004.patch
>
>
> This Jira extends the SASL properties resolver interface to take an ingress 
> port parameter, and also adds an implementation based on this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14116) Fix a potential class cast error in ObserverReadProxyProvider

2018-11-29 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703828#comment-16703828
 ] 

Chao Sun commented on HDFS-14116:
-

Interesting. Thanks for reporting this [~vagarychen]. I cannot think of a use 
case for alignment context on NameNode protocols. Perhaps we can just go with 
option 2) and add a simple check before setting the alignment context?

> Fix a potential class cast error in ObserverReadProxyProvider
> -
>
> Key: HDFS-14116
> URL: https://issues.apache.org/jira/browse/HDFS-14116
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Chen Liang
>Priority: Major
>
> Currently in the {{ObserverReadProxyProvider}} constructor there is this line:
> {code}
> ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
> {code}
> This could potentially cause a failure, because the factory cannot always be 
> cast here. Specifically, the constructor is called from 
> {{NameNodeProxiesClient.createFailoverProxyProvider}}, and two paths can call 
> into it:
> (1) {{NameNodeProxies.createProxy}}
> (2) {{NameNodeProxiesClient.createFailoverProxyProvider}}
> (2) works fine because it always uses {{ClientHAProxyFactory}}, but (1) uses 
> {{NameNodeHAProxyFactory}}, which cannot be cast to {{ClientHAProxyFactory}}; 
> this happens when, for example, running NNThroughputBenchmark. To fix this we 
> can at least:
> 1. introduce setAlignmentContext in HAProxyFactory, the parent of both 
> ClientHAProxyFactory and NameNodeHAProxyFactory, OR
> 2. only call setAlignmentContext when the factory is a ClientHAProxyFactory, 
> by, say, adding an instanceof check before the cast.
> The choice depends on whether it makes sense to have an alignment context for 
> the code paths in case (1).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14112) Avoid recursive call to external authorizer for getContentSummary.

2018-11-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703842#comment-16703842
 ] 

Hadoop QA commented on HDFS-14112:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 522 unchanged - 0 fixed = 523 total (was 522) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m  2s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14112 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950058/h14112_20181129.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 955a71ae7d18 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f534736 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Commented] (HDFS-14081) hdfs dfsadmin -metasave metasave_test results NPE

2018-11-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703823#comment-16703823
 ] 

Hadoop QA commented on HDFS-14081:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 22s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}133m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14081 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950056/HDFS-14081.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 69c82b7ab4ce 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f534736 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25673/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25673/testReport/ |
| Max. process+thread count | 5048 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25673/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was 

[jira] [Commented] (HDFS-13547) Add ingress port based sasl resolver

2018-11-29 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703810#comment-16703810
 ] 

Chen Liang commented on HDFS-13547:
---

Thanks for checking [~vinodkv]! Will commit to branch-3.1.1 and branch-3 soon.

> Add ingress port based sasl resolver
> 
>
> Key: HDFS-13547
> URL: https://issues.apache.org/jira/browse/HDFS-13547
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13547.001.patch, HDFS-13547.002.patch, 
> HDFS-13547.003.patch, HDFS-13547.004.patch
>
>
> This Jira extends the SASL properties resolver interface to take an ingress 
> port parameter, and also adds an implementation based on this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-13547) Add ingress port based sasl resolver

2018-11-29 Thread Vinod Kumar Vavilapalli (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli reopened HDFS-13547:


Oh, and this never made it to branch-3 either. [~vagarychen], I am reopening 
this, please put this in branch-3 too.

> Add ingress port based sasl resolver
> 
>
> Key: HDFS-13547
> URL: https://issues.apache.org/jira/browse/HDFS-13547
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13547.001.patch, HDFS-13547.002.patch, 
> HDFS-13547.003.patch, HDFS-13547.004.patch
>
>
> This Jira extends the SASL properties resolver interface to take an ingress 
> port parameter, and also adds an implementation based on this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13547) Add ingress port based sasl resolver

2018-11-29 Thread Vinod Kumar Vavilapalli (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-13547:
---
Fix Version/s: (was: 3.1.1)
   3.2.0

I just checked the branches; this never made it to 3.1.1 even though the 
fix-version said so.

It's only in branch-3.2, branch-3.2.0 and trunk.

Release-notes for 3.1.1 (which is already released) are broken, but it is what 
it is.

Editing the fix-version.

> Add ingress port based sasl resolver
> 
>
> Key: HDFS-13547
> URL: https://issues.apache.org/jira/browse/HDFS-13547
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13547.001.patch, HDFS-13547.002.patch, 
> HDFS-13547.003.patch, HDFS-13547.004.patch
>
>
> This Jira extends the SASL properties resolver interface to take an ingress 
> port parameter, and also adds an implementation based on this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14116) Fix a potential class cast error in ObserverReadProxyProvider

2018-11-29 Thread Chen Liang (JIRA)
Chen Liang created HDFS-14116:
-

 Summary: Fix a potential class cast error in 
ObserverReadProxyProvider
 Key: HDFS-14116
 URL: https://issues.apache.org/jira/browse/HDFS-14116
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Chen Liang


Currently in the {{ObserverReadProxyProvider}} constructor there is this line:
{code}
((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
{code}
This could potentially cause a failure, because the factory cannot always be 
cast here. Specifically, the constructor is called from 
{{NameNodeProxiesClient.createFailoverProxyProvider}}, and two paths can call 
into it:
(1) {{NameNodeProxies.createProxy}}
(2) {{NameNodeProxiesClient.createFailoverProxyProvider}}

(2) works fine because it always uses {{ClientHAProxyFactory}}, but (1) uses 
{{NameNodeHAProxyFactory}}, which cannot be cast to {{ClientHAProxyFactory}}; 
this happens when, for example, running NNThroughputBenchmark. To fix this we 
can at least:
1. introduce setAlignmentContext in HAProxyFactory, the parent of both 
ClientHAProxyFactory and NameNodeHAProxyFactory, OR
2. only call setAlignmentContext when the factory is a ClientHAProxyFactory, 
by, say, adding an instanceof check before the cast.
The choice depends on whether it makes sense to have an alignment context for 
the code paths in case (1).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-880) Create api for ACL handling in Ozone

2018-11-29 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-880:

Attachment: HDDS-880.03.patch

> Create api for ACL handling in Ozone
> 
>
> Key: HDDS-880
> URL: https://issues.apache.org/jira/browse/HDDS-880
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
> Attachments: HDDS-880.00.patch, HDDS-880.01.patch, HDDS-880.02.patch, 
> HDDS-880.03.patch
>
>
> Create an API for ACL handling in Ozone.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-877) Ensure correct surefire version for Ozone test

2018-11-29 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703724#comment-16703724
 ] 

Hudson commented on HDDS-877:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15531 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15531/])
HDDS-877. Ensure correct surefire version for Ozone test. Contributed by (xyao: 
rev ae5fbdd9ed6ef09b588637f2eadd7a04e8382289)
* (edit) hadoop-ozone/pom.xml
* (edit) hadoop-hdds/pom.xml


> Ensure correct surefire version for Ozone test
> --
>
> Key: HDDS-877
> URL: https://issues.apache.org/jira/browse/HDDS-877
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-877-HDDS-4.001.patch, HDDS-877-HDDS-4.002.patch
>
>
> Currently all Ozone tests are failing because a buggy version of surefire is 
> being used even after HADOOP-15916. This ticket was opened to fix this in 
> HDDS-4 or trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-877) Ensure correct surefire version for Ozone test

2018-11-29 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-877:

   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

Thanks all for the reviews. I've committed the fix to trunk.

> Ensure correct surefire version for Ozone test
> --
>
> Key: HDDS-877
> URL: https://issues.apache.org/jira/browse/HDDS-877
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-877-HDDS-4.001.patch, HDDS-877-HDDS-4.002.patch
>
>
> Currently all Ozone tests are failing because a buggy version of surefire is 
> being used even after HADOOP-15916. This ticket was opened to fix this in 
> HDDS-4 or trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-877) Ensure correct surefire version for Ozone test

2018-11-29 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-877:

Description: Currently all Ozone tests are failing because a buggy version of 
surefire is being used even after HADOOP-15916. This ticket was opened to fix 
this in HDDS-4 or trunk.  (was: Currently all tests are failing because a buggy 
version of surefire is being used even after HADOOP-15916. This ticket was 
opened to fix this in HDDS-4 or trunk.)

> Ensure correct surefire version for Ozone test
> --
>
> Key: HDDS-877
> URL: https://issues.apache.org/jira/browse/HDDS-877
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-877-HDDS-4.001.patch, HDDS-877-HDDS-4.002.patch
>
>
> Currently all Ozone tests are failing because a buggy version of surefire is 
> being used even after HADOOP-15916. This ticket was opened to fix this in 
> HDDS-4 or trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-877) Ensure correct surefire version for Ozone test

2018-11-29 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-877:

Issue Type: Test  (was: Sub-task)
Parent: (was: HDDS-4)

> Ensure correct surefire version for Ozone test
> --
>
> Key: HDDS-877
> URL: https://issues.apache.org/jira/browse/HDDS-877
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-877-HDDS-4.001.patch, HDDS-877-HDDS-4.002.patch
>
>
> Currently all tests are failing because a buggy version of surefire is being 
> used even after HADOOP-15916. This ticket was opened to fix this in HDDS-4 or 
> trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-877) Ensure correct surefire version for Ozone test

2018-11-29 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-877:

Summary: Ensure correct surefire version for Ozone test  (was: Ensure 
correct surefire version is being used for Ozone test)

> Ensure correct surefire version for Ozone test
> --
>
> Key: HDDS-877
> URL: https://issues.apache.org/jira/browse/HDDS-877
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-877-HDDS-4.001.patch, HDDS-877-HDDS-4.002.patch
>
>
> Currently all tests are failing because a buggy version of surefire is being 
> used even after HADOOP-15916. This ticket was opened to fix this in HDDS-4 or 
> trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-880) Create api for ACL handling in Ozone

2018-11-29 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-880:

Attachment: HDDS-880.02.patch

> Create api for ACL handling in Ozone
> 
>
> Key: HDDS-880
> URL: https://issues.apache.org/jira/browse/HDDS-880
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
> Attachments: HDDS-880.00.patch, HDDS-880.01.patch, HDDS-880.02.patch
>
>
> Create an API for ACL handling in Ozone.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-880) Create api for ACL handling in Ozone

2018-11-29 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703671#comment-16703671
 ] 

Jitendra Nath Pandey commented on HDDS-880:
---

Let's add default implementations for setAcl and removeAcl in the interface 
itself. For now, we can throw an unsupported-operation exception.
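
A minimal sketch of that suggestion (interface and parameter names are illustrative):

{code:java}
// Default implementations live in the ACL interface itself, so existing
// implementations keep compiling and unsupported operations fail loudly.
public interface OzoneAclHandler {
  default void setAcl(OzoneObj obj, List<OzoneAcl> acls) {
    throw new UnsupportedOperationException("setAcl is not yet supported");
  }
  default void removeAcl(OzoneObj obj, OzoneAcl acl) {
    throw new UnsupportedOperationException("removeAcl is not yet supported");
  }
}
{code}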

> Create api for ACL handling in Ozone
> 
>
> Key: HDDS-880
> URL: https://issues.apache.org/jira/browse/HDDS-880
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
> Attachments: HDDS-880.00.patch, HDDS-880.01.patch
>
>
> Create an API for ACL handling in Ozone.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-880) Create api for ACL handling in Ozone

2018-11-29 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703679#comment-16703679
 ] 

Ajay Kumar commented on HDDS-880:
-

[~jnp] thanks for the review. Added {{UnsupportedOperationException}} to setAcl 
and removeAcl in patch v3. Also added the license header to {{RequestContext}}.

> Create api for ACL handling in Ozone
> 
>
> Key: HDDS-880
> URL: https://issues.apache.org/jira/browse/HDDS-880
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
> Attachments: HDDS-880.00.patch, HDDS-880.01.patch, HDDS-880.02.patch
>
>
> Create an API for ACL handling in Ozone.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14112) Avoid recursive call to external authorizer for getContentSummary.

2018-11-29 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703670#comment-16703670
 ] 

Tsz Wo Nicholas Sze commented on HDFS-14112:


h14112_20181129.patch: adds the new conf to hdfs-default.xml
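
For context, a minimal sketch of how such a switch would be used (the key name below is hypothetical, not read from the patch):

{code:java}
// Hypothetical key: when enabled, getContentSummary issues one permission
// check with subAccess instead of recursing per path component.
Configuration conf = new Configuration();
conf.setBoolean("dfs.permissions.content-summary.subaccess", true);
{code}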


> Avoid recursive call to external authorizer for getContentSummary.
> --
>
> Key: HDFS-14112
> URL: https://issues.apache.org/jira/browse/HDFS-14112
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Jitendra Nath Pandey
>Assignee: Tsz Wo Nicholas Sze
>Priority: Critical
> Attachments: h14112_20181128.patch, h14112_20181129.patch
>
>
> HDFS-12130 optimizes the permission check and invokes the permission checker 
> recursively for each component of the tree, which works well for the 
> FSPermissionChecker.
> But for certain external authorizers it may be more efficient to make one 
> call with {{subaccess}}, because they often don't have to evaluate each 
> and every component of the path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-870) Avoid creating block sized buffer in ChunkGroupOutputStream

2018-11-29 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703676#comment-16703676
 ] 

Jitendra Nath Pandey commented on HDDS-870:
---

It seems error-prone to create a bufferList in {{ChunkGroupOutputStream}} and 
share it across the various {{ChunkOutputStreams}} within. Could two streams 
start working on the same buffer?

From the patch, it seems it would be easy to move bufferList creation into 
{{ChunkOutputStream}}, so that each gets its own bufferList. Is there a 
downside to this?
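
A rough sketch of the suggested alternative (structure only, not the patch itself):

{code:java}
// Each ChunkOutputStream owns its own buffers, so two streams can never
// write into the same buffer; ChunkGroupOutputStream then stops creating
// and sharing a single bufferList across its entries.
class ChunkOutputStream extends OutputStream {
  // flush-buffer-sized chunks, allocated lazily as data arrives
  private final List<ByteBuffer> bufferList = new ArrayList<>();

  @Override
  public void write(int b) throws IOException {
    // append to the current buffer, flushing full buffers down the pipeline
  }
}
{code}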

> Avoid creating block sized buffer in ChunkGroupOutputStream
> ---
>
> Key: HDDS-870
> URL: https://issues.apache.org/jira/browse/HDDS-870
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-870.000.patch, HDDS-870.001.patch, 
> HDDS-870.002.patch
>
>
> Currently, for a key, we create a block-sized byteBuffer for caching data. 
> This can be replaced with an array of buffers of the configured flush buffer 
> size, which also handles 2-node failures.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14095) EC: Track Erasure Coding commands in DFS statistics

2018-11-29 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703674#comment-16703674
 ] 

Hudson commented on HDFS-14095:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15530 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15530/])
HDFS-14095. EC: Track Erasure Coding commands in DFS statistics. (brahma: rev 
f534736867eed962899615ca1b7eb68bcf591d17)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java


> EC: Track Erasure Coding commands in DFS statistics
> ---
>
> Key: HDFS-14095
> URL: https://issues.apache.org/jira/browse/HDFS-14095
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: HDFS-14095-01.patch, HDFS-14095-02.patch, 
> HDFS-14095-03.patch, HDFS-14095-04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14112) Avoid recursive call to external authorizer for getContentSummary.

2018-11-29 Thread Tsz Wo Nicholas Sze (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-14112:
---
Attachment: h14112_20181129.patch

> Avoid recursive call to external authorizer for getContentSummary.
> --
>
> Key: HDFS-14112
> URL: https://issues.apache.org/jira/browse/HDFS-14112
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Jitendra Nath Pandey
>Assignee: Tsz Wo Nicholas Sze
>Priority: Critical
> Attachments: h14112_20181128.patch, h14112_20181129.patch
>
>
> HDFS-12130 optimizes the permission check and invokes the permission checker 
> recursively for each component of the tree, which works well for the 
> FSPermissionChecker.
> But for certain external authorizers it may be more efficient to make one 
> call with {{subaccess}}, because they often don't have to evaluate each 
> and every component of the path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13358) RBF: Support for Delegation Token (RPC)

2018-11-29 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703669#comment-16703669
 ] 

Brahma Reddy Battula commented on HDFS-13358:
-

[~crh] Thanks for working on this.

At first glance, don't we need to override the following properties, since KMS 
or any other process could use the same ones?
{code:java}
private static final String ZK_DTSM_NAMESPACE = "ZKDTSMRoot";
private static final String ZK_DTSM_SEQNUM_ROOT = "/ZKDTSMSeqNumRoot";
private static final String ZK_DTSM_KEYID_ROOT = "/ZKDTSMKeyIdRoot";
private static final String ZK_DTSM_TOKENS_ROOT = "/ZKDTSMTokensRoot";
private static final String ZK_DTSM_MASTER_KEY_ROOT = "/ZKDTSMMasterKeyRoot";
{code}

Apart from the point above, the approach LGTM; I will dig some more here.

[~daryn], if you get a chance, can you help review?
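
For illustration, isolating the Router under its own znode root would look roughly like this (the key name is from {{ZKDelegationTokenSecretManager}} as I recall it; treat it as an assumption and verify against your version):

{code:java}
// Give the Router its own ZK working path so its seq-num, key-id and token
// znodes don't collide with KMS or any other ZKDTSM user on the ensemble.
Configuration conf = new Configuration();
conf.set("zk-dt-secret-manager.znodeWorkingPath", "router/ZKDTSMRoot");
{code}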

> RBF: Support for Delegation Token (RPC)
> ---
>
> Key: HDFS-13358
> URL: https://issues.apache.org/jira/browse/HDFS-13358
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Sherwood Zheng
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13358-HDFS-13891.001.patch, 
> HDFS-13358-HDFS-13891.002.patch, HDFS-13358-HDFS-13891.003.patch, RBF_ 
> Delegation token design.pdf
>
>
> HDFS Router should support issuing / managing HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14006) Refactor name node to allow different token verification implementations

2018-11-29 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703656#comment-16703656
 ] 

Brahma Reddy Battula commented on HDFS-14006:
-

[~crh] Can you address [~elgoiri]'s comments?

> Refactor name node to allow different token verification implementations
> 
>
> Key: HDFS-14006
> URL: https://issues.apache.org/jira/browse/HDFS-14006
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14006.001.patch, HDFS-14006.002.patch
>
>
> The Router currently uses Namenode web resources to read and verify delegation 
> tokens. This model doesn't work when the router is deployed in secure mode. 
> This change will introduce the router's own UserProvider resource and 
> dependencies.
> In the current deployment one can see this exception:
> {"RemoteException":{"exception":"ClassCastException","javaClassName":"java.lang.ClassCastException","message":"org.apache.hadoop.hdfs.server.federation.router.Router
>  cannot be cast to org.apache.hadoop.hdfs.server.namenode.NameNode"}}
> In the proposed change, the router will maintain its own web resource, similar 
> to the current namenode's but modified to return a router instance instead of 
> a namenode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14095) EC: Track Erasure Coding commands in DFS statistics

2018-11-29 Thread Brahma Reddy Battula (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14095:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.1
   3.3.0
   3.1.2
   3.0.4
   Status: Resolved  (was: Patch Available)

Committed to trunk through branch-3.0. [~ayushtkn], thanks for the contribution.

> EC: Track Erasure Coding commands in DFS statistics
> ---
>
> Key: HDFS-14095
> URL: https://issues.apache.org/jira/browse/HDFS-14095
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: HDFS-14095-01.patch, HDFS-14095-02.patch, 
> HDFS-14095-03.patch, HDFS-14095-04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14081) hdfs dfsadmin -metasave metasave_test results NPE

2018-11-29 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-14081:
--
Attachment: HDFS-14081.003.patch

> hdfs dfsadmin -metasave metasave_test results NPE
> -
>
> Key: HDFS-14081
> URL: https://issues.apache.org/jira/browse/HDFS-14081
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-14081.001.patch, HDFS-14081.002.patch, 
> HDFS-14081.003.patch
>
>
> A race condition is encountered while adding a Block to 
> postponedMisreplicatedBlocks, which in turn tries to retrieve the Block from 
> the BlockManager, where it may not be present. 
> This happens in HA: metasave succeeded on the first NN but failed on the 
> second NN. The stack trace showing the NPE is as follows:
> {code}
> 2018-07-12 21:39:09,783 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 24 on 8020, call Call#1 Retry#0 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.metaSave from 
> 172.26.9.163:602342018-07-12 21:39:09,783 WARN org.apache.hadoop.ipc.Server: 
> IPC Server handler 24 on 8020, call Call#1 Retry#0 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.metaSave from 
> 172.26.9.163:60234java.lang.NullPointerException at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseSourceDatanodes(BlockManager.java:2175)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.dumpBlockMeta(BlockManager.java:830)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.metaSave(BlockManager.java:762)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.metaSave(FSNamesystem.java:1782)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.metaSave(FSNamesystem.java:1766)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.metaSave(NameNodeRpcServer.java:1320)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.metaSave(ClientNamenodeProtocolServerSideTranslatorPB.java:928)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675) {code}
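
From the stack trace alone, the defensive pattern one would expect in 
dumpBlockMeta is a null check on the stored block before calling 
chooseSourceDatanodes. Below is a minimal sketch of that guard, assuming a 
blocksMap lookup like the one BlockManager already performs; it is a fragment 
for illustration, not the contents of the attached patches.

{code}
// Sketch only: guards against the race described above, where a Block queued
// in postponedMisreplicatedBlocks has already been removed from the
// BlockManager by the time metasave dumps it. Names mirror the stack trace;
// the real fix in the patches may differ.
private void dumpBlockMeta(Block block, PrintWriter out) {
  BlockInfo storedBlock = blocksMap.getStoredBlock(block);
  if (storedBlock == null) {
    // Removed concurrently; report it instead of dereferencing null.
    out.println("Block " + block + " is Null");
    return;
  }
  // ... unchanged logic: collect containing nodes and call
  // chooseSourceDatanodes(storedBlock, ...) as before ...
}
{code}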



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14081) hdfs dfsadmin -metasave metasave_test results NPE

2018-11-29 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703649#comment-16703649
 ] 

Shweta edited comment on HDFS-14081 at 11/29/18 6:54 PM:
-

Thanks for the review [~knanasi]. To keep the logging format consistent with 
the way the other statements in the method are written, and to accommodate your 
suggestion, I have added the newline escape character "\n" to the out.print 
statement.

Updated with the above change and uploaded patch 3. Please review.
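
For illustration only, the change being described amounts to something like 
the following (hypothetical metasave-style statement and variable names; the 
actual line is in the attached patch):

{code}
// Embedding "\n" keeps this out.print consistent with the surrounding
// out.print statements in the method, rather than switching to out.println.
out.print("Metasave: Blocks " + pendingDeletionBlocks
    + " waiting deletion from " + numDatanodes + " datanodes.\n");
{code}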


was (Author: shwetayakkali):
Thanks for the review [~knanasi]. To keep the logging format consistent with 
the way other statements are written in the method and to accommodate your 
suggestion I have added the escape next line character "\n" to the out.print 
statement. 

Updated with the above change and uploaded patch 3. Please review.

> hdfs dfsadmin -metasave metasave_test results NPE
> -
>
> Key: HDFS-14081
> URL: https://issues.apache.org/jira/browse/HDFS-14081
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-14081.001.patch, HDFS-14081.002.patch
>
>
> A race condition is encountered while adding a Block to 
> postponedMisreplicatedBlocks, which in turn tries to retrieve the Block from 
> the BlockManager, where it may not be present. 
> This happens in HA: metasave succeeded on the first NN but failed on the 
> second NN. The stack trace showing the NPE is as follows:
> {code}
> 2018-07-12 21:39:09,783 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 24 on 8020, call Call#1 Retry#0 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.metaSave from 
> 172.26.9.163:602342018-07-12 21:39:09,783 WARN org.apache.hadoop.ipc.Server: 
> IPC Server handler 24 on 8020, call Call#1 Retry#0 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.metaSave from 
> 172.26.9.163:60234java.lang.NullPointerException at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseSourceDatanodes(BlockManager.java:2175)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.dumpBlockMeta(BlockManager.java:830)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.metaSave(BlockManager.java:762)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.metaSave(FSNamesystem.java:1782)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.metaSave(FSNamesystem.java:1766)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.metaSave(NameNodeRpcServer.java:1320)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.metaSave(ClientNamenodeProtocolServerSideTranslatorPB.java:928)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675) {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


