[jira] [Updated] (HDDS-1401) Key Read fails with Unable to find the block, after reducing the size of container cache

2019-04-07 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-1401:

Labels: MiniOzoneChaosCluster  (was: )

> Key Read fails with Unable to find the block, after reducing the size of 
> container cache
> 
>
> Key: HDDS-1401
> URL: https://issues.apache.org/jira/browse/HDDS-1401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: MiniOzoneChaosCluster
>
> Key read fails with "Unable to find the block" (NO_SUCH_BLOCK) after reducing
> the value of OZONE_CONTAINER_CACHE_SIZE.
> The read is retried on the other datanodes, but it fails on all 3 datanodes.






[jira] [Updated] (HDDS-1401) Key Read fails with Unable to find the block, after reducing the size of container cache

2019-04-07 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-1401:
---
Priority: Blocker  (was: Major)

> Key Read fails with Unable to find the block, after reducing the size of 
> container cache
> 
>
> Key: HDDS-1401
> URL: https://issues.apache.org/jira/browse/HDDS-1401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Blocker
>
> Key read fails with "Unable to find the block" (NO_SUCH_BLOCK) after reducing
> the value of OZONE_CONTAINER_CACHE_SIZE.
> The read is retried on the other datanodes, but it fails on all 3 datanodes.






[jira] [Assigned] (HDDS-1401) Key Read fails with Unable to find the block, after reducing the size of container cache

2019-04-07 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee reassigned HDDS-1401:
-

Assignee: Shashikant Banerjee

> Key Read fails with Unable to find the block, after reducing the size of 
> container cache
> 
>
> Key: HDDS-1401
> URL: https://issues.apache.org/jira/browse/HDDS-1401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
>
> Key read fails with "Unable to find the block" (NO_SUCH_BLOCK) after reducing
> the value of OZONE_CONTAINER_CACHE_SIZE.
> The read is retried on the other datanodes, but it fails on all 3 datanodes.






[jira] [Commented] (HDDS-1401) Key Read fails with Unable to find the block, after reducing the size of container cache

2019-04-07 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16812099#comment-16812099
 ] 

Mukul Kumar Singh commented on HDDS-1401:
-

After adding some debug tracing, it was found that the following happens on eviction:

{code}
2019-04-07 23:06:55,698 INFO  utils.ContainerCache 
(ContainerCache.java:removeLRU(121)) - evicting db file:12 
db:org.apache.hadoop.utils.RocksDBStore@71a7e0b4
2019-04-07 23:06:55,698 INFO  utils.ContainerCache 
(ContainerCache.java:removeLRU(131)) - file:12 block written is 
101886106270695437
2019-04-07 23:06:55,698 INFO  utils.ContainerCache 
(ContainerCache.java:removeLRU(131)) - file:12 block written is 
101886106270826525
2019-04-07 23:06:55,699 INFO  utils.ContainerCache 
(ContainerCache.java:removeLRU(131)) - file:12 block written is 
101886106270892116
2019-04-07 23:06:55,699 INFO  utils.ContainerCache 
(ContainerCache.java:removeLRU(131)) - file:12 block written is 
101886107014857243
2019-04-07 23:06:55,699 INFO  utils.ContainerCache 
(ContainerCache.java:removeLRU(128)) - file:12 evict written is bcs key
2019-04-07 23:06:55,699 INFO  utils.ContainerCache 
(ContainerCache.java:closeDB(81)) - closing db file:12
2019-04-07 23:06:55,700 INFO  utils.ContainerCache 
(ContainerCache.java:closeDB(85)) - closed db file:12
{code}

When the db for the same container is reloaded shortly afterwards, some of the
blocks written before the eviction (101886106270826525 and 101886107014857243)
no longer show up:

{code}
2019-04-07 23:06:56,530 INFO  utils.ContainerCache 
(ContainerCache.java:getDB(174)) - loading db file:12 
db:org.apache.hadoop.utils.RocksDBStore@18228e23
2019-04-07 23:06:56,530 INFO  utils.ContainerCache 
(ContainerCache.java:getDB(182)) - file:12 block written is:101886106270695437 
2019-04-07 23:06:56,530 INFO  utils.ContainerCache 
(ContainerCache.java:getDB(182)) - file:12 block written is:101886106270892116 
2019-04-07 23:06:56,530 INFO  utils.ContainerCache 
(ContainerCache.java:getDB(179)) - file:12 evict written is bcs key
{code}
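
The pattern in these logs (blocks recorded as written before the eviction, yet
missing once the db is reloaded) suggests the eviction path closes the
container's RocksDB handle while it can still lose in-flight writes. Below is a
minimal sketch of that eviction pattern, assuming a simplified LRU cache;
SimpleContainerCache and DBHandle are illustrative stand-ins, not the actual
Ozone ContainerCache classes.

{code}
// Hedged sketch of the eviction pattern the logs point at; all names here
// are illustrative, not the real Ozone classes.
import java.util.LinkedHashMap;
import java.util.Map;

class DBHandle {
  private final long containerId;
  DBHandle(long containerId) { this.containerId = containerId; }
  void close() { /* closes the underlying RocksDB instance */ }
}

class SimpleContainerCache extends LinkedHashMap<Long, DBHandle> {
  private final int maxSize;

  SimpleContainerCache(int maxSize) {
    super(16, 0.75f, true /* access order => LRU eviction */);
    this.maxSize = maxSize;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<Long, DBHandle> eldest) {
    if (size() > maxSize) {
      // If another thread still holds this handle and is writing through it,
      // closing here can silently drop those writes, which would match the
      // blocks that disappear between eviction and reload in the logs above.
      eldest.getValue().close();
      return true;
    }
    return false;
  }
}
{code}

Reference-counting the handles, so a db is only closed after its last user
releases it, would be one way to rule out this race; shrinking
OZONE_CONTAINER_CACHE_SIZE makes evictions far more frequent, which fits the
repro.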

> Key Read fails with Unable to find the block, after reducing the size of 
> container cache
> 
>
> Key: HDDS-1401
> URL: https://issues.apache.org/jira/browse/HDDS-1401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Priority: Major
>
> Key read fails with "Unable to find the block" (NO_SUCH_BLOCK) after reducing
> the value of OZONE_CONTAINER_CACHE_SIZE.
> The read is retried on the other datanodes, but it fails on all 3 datanodes.






[jira] [Commented] (HDFS-14369) RBF: Fix trailing "/" for webhdfs

2019-04-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16812064#comment-16812064
 ] 

Hadoop QA commented on HDFS-14369:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
30s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 
12s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14369 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12965142/HDFS-14369-HDFS-13891.006.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 09d5ef2ab57a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 007b8ea |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26593/testReport/ |
| Max. process+thread count | 1359 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26593/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Fix trailing "/" for webhdfs
> -
>
> Key: HDFS-14369
> 

[jira] [Commented] (HDFS-14369) RBF: Fix trailing "/" for webhdfs

2019-04-07 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16812017#comment-16812017
 ] 

Akira Ajisaka commented on HDFS-14369:
--

Thanks [~elgoiri]. Updated the patch to reflect the review comment.

> RBF: Fix trailing "/" for webhdfs
> -
>
> Key: HDFS-14369
> URL: https://issues.apache.org/jira/browse/HDFS-14369
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-14369-HDFS-13891-regressiontest-001.patch, 
> HDFS-14369-HDFS-13891.001.patch, HDFS-14369-HDFS-13891.002.patch, 
> HDFS-14369-HDFS-13891.003.patch, HDFS-14369-HDFS-13891.004.patch, 
> HDFS-14369-HDFS-13891.005.patch, HDFS-14369-HDFS-13891.006.patch
>
>
> WebHDFS doesn't trim the trailing slash, causing a discrepancy in operations.
> Example below
> --
> Using the HDFS API, two directories are listed.
> {code}
> $ hdfs dfs -ls hdfs://:/tmp/
> Found 2 items
> drwxrwxrwx   - hdfs supergroup  0 2018-11-09 17:50 
> hdfs://:/tmp/tmp1
> drwxrwxrwx   - hdfs supergroup  0 2018-11-09 17:50 
> hdfs://:/tmp/tmp2
> {code}
> Using the WebHDFS API, only one directory is listed.
> {code}
> $ curl -u : --negotiate -i 
> "http://:50071/webhdfs/v1/tmp/?op=LISTSTATUS"
> (snip)
> {"FileStatuses":{"FileStatus":[
> {"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16387,"group":"supergroup","length":0,"modificationTime":1552016766769,"owner":"hdfs","pathSuffix":"tmp1","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}
> ]}}
> {code}
> The mount table is as follows:
> {code}
> $ hdfs dfsrouteradmin -ls /tmp
> Mount Table Entries:
> SourceDestinations  Owner 
> Group Mode  Quota/Usage  
> /tmp  ns1->/tmp aajisaka  
> users rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> /tmp/tmp1 ns1->/tmp/tmp1aajisaka  
> users rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> /tmp/tmp2 ns2->/tmp/tmp2aajisaka  
> users rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> {code}
> Without the trailing slash, two directories are listed.
> {code}
> $ curl -u : --negotiate -i 
> "http://:50071/webhdfs/v1/tmp?op=LISTSTATUS"
> (snip)
> {"FileStatuses":{"FileStatus":[
> {"accessTime":1541753421917,"blockSize":0,"childrenNum":0,"fileId":0,"group":"supergroup","length":0,"modificationTime":1541753421917,"owner":"hdfs","pathSuffix":"tmp1","permission":"777","replication":0,"storagePolicy":0,"symlink":"","type":"DIRECTORY"},
> {"accessTime":1541753429812,"blockSize":0,"childrenNum":0,"fileId":0,"group":"supergroup","length":0,"modificationTime":1541753429812,"owner":"hdfs","pathSuffix":"tmp2","permission":"777","replication":0,"storagePolicy":0,"symlink":"","type":"DIRECTORY"}
> ]}}
> {code}
> [~ajisakaa] Thanks for reporting this; I borrowed the text from HDFS-13972.
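
Conceptually, the fix amounts to normalizing the path before the router
resolves it against the mount table, so that "/tmp/" and "/tmp" hit the same
entries. A minimal sketch of that normalization, illustrative only and not the
code in the attached patches:

{code}
// Illustrative only -- not the code in the attached patches.
final class WebHdfsPathUtil {
  // Trim a trailing "/" so "/tmp/" and "/tmp" resolve identically;
  // the root path "/" is left untouched.
  static String trimTrailingSlash(String path) {
    if (path.length() > 1 && path.endsWith("/")) {
      return path.substring(0, path.length() - 1);
    }
    return path;
  }
}
{code}

With this, trimTrailingSlash("/tmp/") returns "/tmp", so a LISTSTATUS on either
form consults the same mount table entries (/tmp, /tmp/tmp1, /tmp/tmp2) and
returns both directories.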






[jira] [Updated] (HDFS-14369) RBF: Fix trailing "/" for webhdfs

2019-04-07 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-14369:
-
Attachment: HDFS-14369-HDFS-13891.006.patch

> RBF: Fix trailing "/" for webhdfs
> -
>
> Key: HDFS-14369
> URL: https://issues.apache.org/jira/browse/HDFS-14369
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-14369-HDFS-13891-regressiontest-001.patch, 
> HDFS-14369-HDFS-13891.001.patch, HDFS-14369-HDFS-13891.002.patch, 
> HDFS-14369-HDFS-13891.003.patch, HDFS-14369-HDFS-13891.004.patch, 
> HDFS-14369-HDFS-13891.005.patch, HDFS-14369-HDFS-13891.006.patch
>
>
> WebHDFS doesn't trim the trailing slash, causing a discrepancy in operations.
> Example below
> --
> Using the HDFS API, two directories are listed.
> {code}
> $ hdfs dfs -ls hdfs://:/tmp/
> Found 2 items
> drwxrwxrwx   - hdfs supergroup  0 2018-11-09 17:50 
> hdfs://:/tmp/tmp1
> drwxrwxrwx   - hdfs supergroup  0 2018-11-09 17:50 
> hdfs://:/tmp/tmp2
> {code}
> Using the WebHDFS API, only one directory is listed.
> {code}
> $ curl -u : --negotiate -i 
> "http://:50071/webhdfs/v1/tmp/?op=LISTSTATUS"
> (snip)
> {"FileStatuses":{"FileStatus":[
> {"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16387,"group":"supergroup","length":0,"modificationTime":1552016766769,"owner":"hdfs","pathSuffix":"tmp1","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}
> ]}}
> {code}
> The mount table is as follows:
> {code}
> $ hdfs dfsrouteradmin -ls /tmp
> Mount Table Entries:
> SourceDestinations  Owner 
> Group Mode  Quota/Usage  
> /tmp  ns1->/tmp aajisaka  
> users rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> /tmp/tmp1 ns1->/tmp/tmp1aajisaka  
> users rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> /tmp/tmp2 ns2->/tmp/tmp2aajisaka  
> users rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> {code}
> Without the trailing slash, two directories are listed.
> {code}
> $ curl -u : --negotiate -i 
> "http://:50071/webhdfs/v1/tmp?op=LISTSTATUS"
> (snip)
> {"FileStatuses":{"FileStatus":[
> {"accessTime":1541753421917,"blockSize":0,"childrenNum":0,"fileId":0,"group":"supergroup","length":0,"modificationTime":1541753421917,"owner":"hdfs","pathSuffix":"tmp1","permission":"777","replication":0,"storagePolicy":0,"symlink":"","type":"DIRECTORY"},
> {"accessTime":1541753429812,"blockSize":0,"childrenNum":0,"fileId":0,"group":"supergroup","length":0,"modificationTime":1541753429812,"owner":"hdfs","pathSuffix":"tmp2","permission":"777","replication":0,"storagePolicy":0,"symlink":"","type":"DIRECTORY"}
> ]}}
> {code}
> [~ajisakaa] Thanks for reporting this; I borrowed the text from HDFS-13972.






[jira] [Work logged] (HDDS-1340) Add List Containers API for Recon

2019-04-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1340?focusedWorklogId=224155&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-224155
 ]

ASF GitHub Bot logged work on HDDS-1340:


Author: ASF GitHub Bot
Created on: 07/Apr/19 19:14
Start Date: 07/Apr/19 19:14
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #648: HDDS-1340. Add 
List Containers API for Recon
URL: https://github.com/apache/hadoop/pull/648#issuecomment-480620564
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|--------:|:--------|:--------|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1037 | trunk passed |
   | +1 | compile | 54 | trunk passed |
   | +1 | checkstyle | 17 | trunk passed |
   | +1 | mvnsite | 30 | trunk passed |
   | +1 | shadedclient | 635 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 36 | trunk passed |
   | +1 | javadoc | 19 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 35 | the patch passed |
   | +1 | compile | 22 | the patch passed |
   | +1 | javac | 22 | the patch passed |
   | +1 | checkstyle | 10 | the patch passed |
   | +1 | mvnsite | 25 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 684 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 42 | the patch passed |
   | +1 | javadoc | 16 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 35 | ozone-recon in the patch passed. |
   | +1 | asflicense | 23 | The patch does not generate ASF License warnings. |
   | | | 2818 | |
   
   
   | Subsystem | Report/Notes |
   |------:|:---------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-648/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/648 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
   | uname | Linux acba8d4e702d 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ec143cb |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-648/8/testReport/ |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-recon U: hadoop-ozone/ozone-recon |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-648/8/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 224155)
Time Spent: 4h 20m  (was: 4h 10m)

> Add List Containers API for Recon
> -
>
> Key: HDDS-1340
> URL: https://issues.apache.org/jira/browse/HDDS-1340
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> The Recon server should support a "/containers" API that lists all the containers.
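
A minimal sketch of what such an endpoint could look like, assuming a JAX-RS
REST layer; ContainerMetadataService and ContainerSummary are hypothetical
stand-ins for Recon's container metadata store, not the code in the pull
request:

{code}
import java.util.List;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Hypothetical stand-ins for Recon's container metadata store.
class ContainerSummary {
  public long containerId;
  public long keyCount;
}

interface ContainerMetadataService {
  List<ContainerSummary> getAllContainers();
}

@Path("/containers")
@Produces(MediaType.APPLICATION_JSON)
public class ContainerListEndpoint {
  private final ContainerMetadataService service;

  ContainerListEndpoint(ContainerMetadataService service) {
    this.service = service;
  }

  // GET /containers -> JSON array of every container Recon knows about.
  @GET
  public Response listContainers() {
    return Response.ok(service.getAllContainers()).build();
  }
}
{code}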






[jira] [Commented] (HDFS-13248) RBF: Namenode need to choose block location for the client

2019-04-07 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16811939#comment-16811939
 ] 

Íñigo Goiri commented on HDFS-13248:


Thanks [~hexiaoqiao] for  [^RBF Data Locality Design.pdf].

My main concern with modifying {{ClientProtocol}} is that it requires the 
client itself to change.
The change is backwards compatible, but for it to work you need the client to
be up to date.
From our experience, this is pretty challenging.
WebHDFS is another example; clients would need to pass the new parameter for it 
to work.
In addition, compatibility happens at the expense of duplicating methods for 
just one parameter.

The current approach for locality is to use {{Server#getRemoteAddress()}} for
RPC and {{JspHelper#getRemoteAddr()}} for HTTP (this is mostly the case for
reads with {{getBlockLocations()}}).
For some of them it also combines this with a parameter {{clientName}}.
I think the best approach is to extend the RPC framework and modify the 
Namenode and the Router to leverage this.
Instead of {{hostname}}, I would call it {{proxyHostname}} or 
{{clientHostname}}.
In any case, I'm fine with extending the protocol to add the new field; it
should be fairly easy to cover all the compatibility cases.
I'd like to go deeper on what the security risks are here.

BTW, we could do right away the part where
{{RouterRpcServer#getBlockLocations()}} reorders the destinations.
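
A sketch of that reordering, assuming the router already knows the real
client's address; LocalityExample is illustrative, not the existing Router
code:

{code}
import java.util.Arrays;
import java.util.Comparator;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;

// Illustrative: move replicas co-located with the actual client to the front
// of each block's location list. Not the existing Router implementation.
final class LocalityExample {
  static void reorderForClient(LocatedBlocks blocks, String clientHost) {
    for (LocatedBlock lb : blocks.getLocatedBlocks()) {
      DatanodeInfo[] locs = lb.getLocations();
      // Stable sort: replicas on the client's host move to the front, the
      // rest keep their original relative order.
      Arrays.sort(locs, Comparator.comparingInt(
          (DatanodeInfo dn) -> clientHost.equals(dn.getIpAddr()) ? 0 : 1));
    }
  }
}
{code}

This only helps read locality; writes would still need the real client
information passed through to the NameNode's block placement, which is the
harder part discussed above.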

> RBF: Namenode need to choose block location for the client
> --
>
> Key: HDFS-13248
> URL: https://issues.apache.org/jira/browse/HDFS-13248
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13248.000.patch, HDFS-13248.001.patch, 
> HDFS-13248.002.patch, HDFS-13248.003.patch, HDFS-13248.004.patch, 
> HDFS-13248.005.patch, HDFS-Router-Data-Locality.odt, RBF Data Locality 
> Design.pdf, clientMachine-call-path.jpeg, debug-info-1.jpeg, debug-info-2.jpeg
>
>
> When executing a put operation via the router, the NameNode chooses the block
> location for the router, not for the real client. This affects the file's
> locality.
> I think that on both the NameNode and the Router we should add a new addBlock
> method, or add a parameter to the current addBlock method, to pass the real
> client information.






[jira] [Commented] (HDFS-14384) When lastLocatedBlock token expire, it will take 1~3s second to refetch it.

2019-04-07 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16811923#comment-16811923
 ] 

Surendra Singh Lilhore commented on HDFS-14384:
---

Attached initial patch. Please review.

> When lastLocatedBlock token expire, it will take 1~3s second to refetch it.
> ---
>
> Key: HDFS-14384
> URL: https://issues.apache.org/jira/browse/HDFS-14384
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.2
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-14384.001.patch
>
>
> Scenario:
>  1. Write a file with one block which is in progress.
>  2. Open an input stream and close the output stream.
>  3. Wait for block token expiration and read the data.
>  4. The last block takes 1~3 seconds to read.
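
The 1~3 seconds is consistent with the client treating the expired token like a
generic datanode failure and sleeping before it refetches locations, rather
than refreshing the token immediately. A sketch of the cheap-retry alternative;
BlockFetcher and InvalidTokenException are hypothetical stand-ins, not
DFSInputStream's real API:

{code}
import java.io.IOException;

// Hypothetical client-side block reading path -- not DFSInputStream's API.
interface BlockFetcher {
  byte[] readBlock() throws IOException;  // fails if the block token expired
  void refetchBlockLocations();           // fetch fresh locations and token
}

class InvalidTokenException extends IOException { }

final class TokenRetryExample {
  static byte[] readWithTokenRetry(BlockFetcher fetcher) throws IOException {
    for (int attempt = 0; ; attempt++) {
      try {
        return fetcher.readBlock();
      } catch (InvalidTokenException expired) {
        if (attempt >= 2) {
          throw expired;
        }
        // Refreshing immediately keeps the retry cheap; if this branch
        // instead slept with a backoff (a generic dead-node path), every
        // expired token would cost the observed 1~3 seconds.
        fetcher.refetchBlockLocations();
      }
    }
  }
}
{code}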






[jira] [Updated] (HDFS-14384) When lastLocatedBlock token expire, it will take 1~3s second to refetch it.

2019-04-07 Thread Surendra Singh Lilhore (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-14384:
--
Attachment: HDFS-14384.001.patch

> When lastLocatedBlock token expire, it will take 1~3s second to refetch it.
> ---
>
> Key: HDFS-14384
> URL: https://issues.apache.org/jira/browse/HDFS-14384
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.2
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-14384.001.patch
>
>
> Scenario:
>  1. Write a file with one block which is in progress.
>  2. Open an input stream and close the output stream.
>  3. Wait for block token expiration and read the data.
>  4. The last block takes 1~3 seconds to read.






[jira] [Commented] (HDFS-14403) Cost-Based RPC FairCallQueue

2019-04-07 Thread star (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16811877#comment-16811877
 ] 

star commented on HDFS-14403:
-

[~xkrogen], impressive results.

Could you share more detail on how many listStatus calls were issued on a
directory with one subdirectory versus a directory with 1000 subdirectories in
your benchmark tests? Is it possible that the scheduler with LockCostProvider
simply schedules more low-cost operations, like listStatus on a directory with
one subdirectory, which would result in a lower queue time? Or is that exactly
the intended behavior of LockCostProvider?
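
For context, the core idea in the design doc is to weight each user's recent
calls by a measured cost, for example time spent holding the namesystem lock,
instead of counting every call as 1. A sketch of that idea; the interfaces
below are illustrative, not the exact APIs in the attached patch:

{code}
// Illustrative lock-time cost provider. One listStatus over 1000 children is
// charged far more than one over a single child, so heavy users decay to
// lower-priority queues faster. Names are hypothetical.
interface CostProvider {
  long cost(CallDetails call);
}

class CallDetails {
  long lockHeldNanos;  // measured while the call was processed
}

class LockTimeCostProvider implements CostProvider {
  private static final long FLOOR_NANOS = 1_000;  // cheap calls still count

  @Override
  public long cost(CallDetails call) {
    return Math.max(FLOOR_NANOS, call.lockHeldNanos);
  }
}
{code}

The FairCallQueue's decay scheduler would then accumulate these weighted costs
per user instead of raw call counts when picking a priority level, which is
also why a workload dominated by cheap listStatus calls can legitimately see
lower queue times under a lock-cost provider.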

> Cost-Based RPC FairCallQueue
> 
>
> Key: HDFS-14403
> URL: https://issues.apache.org/jira/browse/HDFS-14403
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ipc, namenode
>Reporter: Erik Krogen
>Assignee: Christopher Gregorian
>Priority: Major
>  Labels: qos, rpc
> Attachments: CostBasedFairCallQueueDesign_v0.pdf, HDFS-14403.001.patch
>
>
> HADOOP-15016 initially described extensions to the Hadoop FairCallQueue
> encompassing both cost-based analysis of incoming RPCs and support for
> reservations of RPC capacity for system/platform users. This JIRA tracks the
> former, as HADOOP-15016 was repurposed to focus more specifically on the
> reservation portion of the work.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1401) Key Read fails with Unable to find the block, after reducing the size of container cache

2019-04-07 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-1401:
---

 Summary: Key Read fails with Unable to find the block, after 
reducing the size of container cache
 Key: HDDS-1401
 URL: https://issues.apache.org/jira/browse/HDDS-1401
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Affects Versions: 0.3.0
Reporter: Mukul Kumar Singh


Key read fails with "Unable to find the block" (NO_SUCH_BLOCK) after reducing
the value of OZONE_CONTAINER_CACHE_SIZE.

The read is retried on the other datanodes, but it fails on all 3 datanodes.


