[jira] [Updated] (HDFS-13854) RBF: The ProcessingAvgTime and ProxyAvgTime should display by JMX with ms unit.

2018-08-22 Thread yanghuafeng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yanghuafeng updated HDFS-13854:
---
Summary: RBF: The ProcessingAvgTime and ProxyAvgTime should display by JMX 
with ms unit.  (was: RBF: The ProcessingAvgTime and ProxyAvgTime should display 
by JMX with ms units.)

> RBF: The ProcessingAvgTime and ProxyAvgTime should display by JMX with ms 
> unit.
> ---
>
> Key: HDFS-13854
> URL: https://issues.apache.org/jira/browse/HDFS-13854
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation, hdfs
>Affects Versions: 2.9.0, 3.0.0, 3.1.0
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
>
> In FederationRPCMetrics, the proxy time and processing time should be exposed 
> to JMX or Ganglia in milliseconds. Although the method toMS() exists, we 
> cannot get the correct proxy time and processing time through JMX and Ganglia.
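
For illustration, here is a minimal sketch of exposing such an average in 
milliseconds through a metrics getter. The class, field and method names are 
assumptions made for this example only, not the actual FederationRPCMetrics 
code or the HDFS-13854 patch.
{code:java}
// Illustrative only: convert a nanosecond-based average to milliseconds
// before it is published to JMX/Ganglia. Names are hypothetical.
public class AvgTimeExample {

  /** Average measured internally in nanoseconds. */
  private double processingAvgNanos = 3_500_000.0;

  /** Convert a nanosecond value to milliseconds. */
  private static double toMs(double nanos) {
    return nanos / 1_000_000.0;
  }

  /** The value a JMX/Ganglia getter should return: milliseconds, not nanos. */
  public double getProcessingAvgTime() {
    return toMs(processingAvgNanos);
  }

  public static void main(String[] args) {
    System.out.println(new AvgTimeExample().getProcessingAvgTime()); // 3.5
  }
}
{code}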






[jira] [Created] (HDFS-13854) RBF: The ProcessingAvgTime and ProxyAvgTime should display by JMX with ms units.

2018-08-22 Thread yanghuafeng (JIRA)
yanghuafeng created HDFS-13854:
--

 Summary: RBF: The ProcessingAvgTime and ProxyAvgTime should 
display by JMX with ms units.
 Key: HDFS-13854
 URL: https://issues.apache.org/jira/browse/HDFS-13854
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: federation, hdfs
Affects Versions: 3.1.0, 3.0.0, 2.9.0
Reporter: yanghuafeng
Assignee: yanghuafeng


In FederationRPCMetrics, the proxy time and processing time should be exposed 
to JMX or Ganglia in milliseconds. Although the method toMS() exists, we 
cannot get the correct proxy time and processing time through JMX and Ganglia.






[jira] [Updated] (HDFS-13810) RBF: UpdateMountTableEntryRequest isn't validating the record.

2018-08-22 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-13810:

Description: 
In RBF, when we try to update an existing mount entry by using the add command, 
it creates a mount entry without performing the validation check on the 
destination path.

Command: ./hdfs dfsrouteradmin -add /aaa ns1 /tmp (record added to the mount 
table)

Now we use the below command on the same mount entry:

Command: hdfs dfsrouteradmin -add /aaa ns1 -order RANDOM (it does not perform 
the validation check this second time).

 

  was:
In RBF when we try to update the existing mount entry by using the add command 
its creating a mount entry by taking -order as target path.

command : ./hdfs dfsrouteradmin -add /aaa ns1 /tmp (record  added to the mount 
table)

Now when we use the below command on the same mount entry. 

Command : hdfs dfsrouteradmin -add /aaa ns1  -order RANDOM  (its not performing 
the validation check for the second time).

 

Summary:  RBF: UpdateMountTableEntryRequest isn't validating the 
record.  (was:  RBF: validation check was not done for adding the multiple 
destination to an existing mount entry.)

>  RBF: UpdateMountTableEntryRequest isn't validating the record.
> ---
>
> Key: HDFS-13810
> URL: https://issues.apache.org/jira/browse/HDFS-13810
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0, 2.9.1
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Minor
> Attachments: HDFS-13810-002.patch, HDFS-13810.patch
>
>
> In RBF, when we try to update an existing mount entry by using the add 
> command, it creates a mount entry without performing the validation check on 
> the destination path.
> Command: ./hdfs dfsrouteradmin -add /aaa ns1 /tmp (record added to the mount 
> table)
> Now we use the below command on the same mount entry:
> Command: hdfs dfsrouteradmin -add /aaa ns1 -order RANDOM (it does not perform 
> the validation check this second time).
>  
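
For illustration, a hedged sketch of the kind of destination-path validation 
the add/update path appears to skip; the class and helper names below are 
hypothetical, not the actual UpdateMountTableEntryRequest or RouterAdmin code.
{code:java}
import java.util.Arrays;
import java.util.List;

// Illustrative only: reject an add/update whose destination is missing or is
// actually another flag (e.g. "-order"). This is a sketch, not the patch.
public class MountEntryValidationSketch {

  static void validate(String src, List<String> destinations) {
    if (src == null || !src.startsWith("/")) {
      throw new IllegalArgumentException("Invalid source path: " + src);
    }
    if (destinations == null || destinations.isEmpty()) {
      throw new IllegalArgumentException("No destination given for " + src);
    }
    for (String dest : destinations) {
      if (!dest.startsWith("/")) {
        throw new IllegalArgumentException("Invalid destination: " + dest);
      }
    }
  }

  public static void main(String[] args) {
    validate("/aaa", Arrays.asList("/tmp"));    // passes
    validate("/aaa", Arrays.asList("-order"));  // throws: not a path
  }
}
{code}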






[jira] [Commented] (HDFS-13848) Refactor NameNode failover proxy providers

2018-08-22 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589712#comment-16589712
 ] 

Konstantin Shvachko commented on HDFS-13848:


Took care of the whitespace and some of the checkstyle issues. Cannot fix all 
checkstyle warnings because of how these classes were originally written.

> Refactor NameNode failover proxy providers
> --
>
> Key: HDFS-13848
> URL: https://issues.apache.org/jira/browse/HDFS-13848
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, hdfs-client
>Affects Versions: 2.7.5
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
> Attachments: HDFS-13848.002.patch, HDFS-13848.patch
>
>
> Looking at NN failover proxy providers in the context of HDFS-13782 I noticed 
> that {{ConfiguredFailoverProxyProvider}} and {{IPFailoverProxyProvider}} have 
> a lot of common logic. We can move this common logic into 
> {{AbstractNNFailoverProxyProvider}}, which simplifies things a lot.
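
As a rough illustration of the refactoring idea only (not the actual patch), 
the shared logic can live in an abstract base class while each concrete 
provider keeps just the part that differs; all names below are assumptions.
{code:java}
// Illustrative sketch: common behaviour moves up, the concrete providers keep
// only their address-selection strategy. Not the HDFS-13848 patch.
abstract class NNProxyProviderSketch {

  /** Shared helper that both concrete providers previously duplicated. */
  protected String describeProxy() {
    return "proxy to " + resolveNextAddress();
  }

  /** Only the address-selection strategy differs between providers. */
  protected abstract String resolveNextAddress();
}

class ConfiguredLikeProvider extends NNProxyProviderSketch {
  @Override
  protected String resolveNextAddress() {
    return "nn1.example.com:8020"; // iterate over the configured NN addresses
  }
}

class IpLikeProvider extends NNProxyProviderSketch {
  @Override
  protected String resolveNextAddress() {
    return "10.0.0.1:8020"; // use the single virtual IP
  }
}
{code}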






[jira] [Updated] (HDFS-13848) Refactor NameNode failover proxy providers

2018-08-22 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-13848:
---
Attachment: HDFS-13848.002.patch

> Refactor NameNode failover proxy providers
> --
>
> Key: HDFS-13848
> URL: https://issues.apache.org/jira/browse/HDFS-13848
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, hdfs-client
>Affects Versions: 2.7.5
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
> Attachments: HDFS-13848.002.patch, HDFS-13848.patch
>
>
> Looking at NN failover proxy providers in the context of HDFS-13782 I noticed 
> that {{ConfiguredFailoverProxyProvider}} and {{IPFailoverProxyProvider}} have 
> a lot of common logic. We can move this common logic into 
> {{AbstractNNFailoverProxyProvider}}, which simplifies things a lot.






[jira] [Created] (HDFS-13853) RouterAdmin update cmd is overwriting the entry not updating the existing

2018-08-22 Thread Dibyendu Karmakar (JIRA)
Dibyendu Karmakar created HDFS-13853:


 Summary: RouterAdmin update cmd is overwriting the entry not 
updating the existing
 Key: HDFS-13853
 URL: https://issues.apache.org/jira/browse/HDFS-13853
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Dibyendu Karmakar
Assignee: Dibyendu Karmakar


{code:java}
// Create a new entry
Map<String, String> destMap = new LinkedHashMap<>();
for (String ns : nss) {
  destMap.put(ns, dest);
}
MountTable newEntry = MountTable.newInstance(mount, destMap);
{code}
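
For contrast, a hedged sketch of update semantics that merge the requested 
namespaces into the destinations of the already-stored entry instead of 
rebuilding it from scratch; the way the existing destinations are obtained is 
left abstract here and is not an actual RouterAdmin API.
{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: merge into the existing destinations rather than
// overwrite them. Not the actual HDFS-13853 patch.
public class UpdateMergeSketch {

  static Map<String, String> merge(Map<String, String> existing,
      String[] nss, String dest) {
    Map<String, String> destMap = new LinkedHashMap<>(existing);
    for (String ns : nss) {
      destMap.put(ns, dest);   // add or replace only the namespaces given
    }
    return destMap;            // other existing destinations are preserved
  }

  public static void main(String[] args) {
    Map<String, String> existing = new LinkedHashMap<>();
    existing.put("ns1", "/tmp");
    // Updating only ns2 keeps the ns1 -> /tmp mapping instead of dropping it.
    System.out.println(merge(existing, new String[] {"ns2"}, "/data"));
  }
}
{code}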






[jira] [Updated] (HDFS-13853) RBF: RouterAdmin update cmd is overwriting the entry not updating the existing

2018-08-22 Thread Dibyendu Karmakar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dibyendu Karmakar updated HDFS-13853:
-
Summary: RBF: RouterAdmin update cmd is overwriting the entry not updating 
the existing  (was: RouterAdmin update cmd is overwriting the entry not 
updating the existing)

> RBF: RouterAdmin update cmd is overwriting the entry not updating the existing
> --
>
> Key: HDFS-13853
> URL: https://issues.apache.org/jira/browse/HDFS-13853
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Major
>
> {code:java}
> // Create a new entry
> Map<String, String> destMap = new LinkedHashMap<>();
> for (String ns : nss) {
>   destMap.put(ns, dest);
> }
> MountTable newEntry = MountTable.newInstance(mount, destMap);
> {code}






[jira] [Commented] (HDFS-13810) RBF: validation check was not done for adding the multiple destination to an existing mount entry.

2018-08-22 Thread venkata ram kumar ch (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589691#comment-16589691
 ] 

venkata ram kumar ch commented on HDFS-13810:
-

Thanks [~elgoiri],

Yes, the description was a little confusing. I updated it with clearer details.

>  RBF: validation check was not done for adding the multiple destination to an 
> existing mount entry.
> ---
>
> Key: HDFS-13810
> URL: https://issues.apache.org/jira/browse/HDFS-13810
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0, 2.9.1
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Minor
> Attachments: HDFS-13810-002.patch, HDFS-13810.patch
>
>
> In RBF when we try to update the existing mount entry by using the add 
> command its creating a mount entry by taking -order as target path.
> command : ./hdfs dfsrouteradmin -add /aaa ns1 /tmp (record  added to the 
> mount table)
> Now when we use the below command on the same mount entry. 
> Command : hdfs dfsrouteradmin -add /aaa ns1  -order RANDOM  (its not 
> performing the validation check for the second time).
>  






[jira] [Updated] (HDFS-13810) RBF: validation check was not done for adding the multiple destination to an existing mount entry.

2018-08-22 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-13810:

Description: 
In RBF when we try to update the existing mount entry by using the add command 
its creating a mount entry by taking -order as target path.

command : ./hdfs dfsrouteradmin -add /aaa ns1 /tmp (record  added to the mount 
table)

Now when we use the below command on the same mount entry. 

Command : hdfs dfsrouteradmin -add /aaa ns1  -order RANDOM  (its not performing 
the validation check for the second time).

 

  was:
In Router based federation when we  try to add the mount entry without having 
the destination path, its getting  added into the mount table by taking the 
other parameters order as destination path.

Command : hdfs dfsrouteradmin -add /aaa ns1  -order RANDOM 

its creating a mount entry by taking -order as target path

Summary:  RBF: validation check was not done for adding the multiple 
destination to an existing mount entry.  (was:  RBF: Adding the mount entry 
without having the destination path, its getting  added into the mount table by 
taking the other parameters order as destination path.)

>  RBF: validation check was not done for adding the multiple destination to an 
> existing mount entry.
> ---
>
> Key: HDFS-13810
> URL: https://issues.apache.org/jira/browse/HDFS-13810
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0, 2.9.1
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Minor
> Attachments: HDFS-13810-002.patch, HDFS-13810.patch
>
>
> In RBF when we try to update the existing mount entry by using the add 
> command its creating a mount entry by taking -order as target path.
> command : ./hdfs dfsrouteradmin -add /aaa ns1 /tmp (record  added to the 
> mount table)
> Now when we use the below command on the same mount entry. 
> Command : hdfs dfsrouteradmin -add /aaa ns1  -order RANDOM  (its not 
> performing the validation check for the second time).
>  






[jira] [Commented] (HDDS-359) RocksDB Profiles support

2018-08-22 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589686#comment-16589686
 ] 

Xiaoyu Yao commented on HDDS-359:
-

[~anu], can you rebase the patch? It does not apply any more. Thanks!

> RocksDB Profiles support
> 
>
> Key: HDDS-359
> URL: https://issues.apache.org/jira/browse/HDDS-359
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-359.001.patch
>
>
> This allows us to tune the OM/SCM DB for different machine configurations.






[jira] [Commented] (HDFS-13836) RBF: To handle the exception when the mounttable znode have null value.

2018-08-22 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589683#comment-16589683
 ] 

genericqa commented on HDFS-13836:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
17s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13836 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936752/HDFS-13836.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d40f4335b63c 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b021249 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24845/testReport/ |
| Max. process+thread count | 1343 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24845/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: To handle the exception when the mounttable znode have null value.
> ---
>
> 

[jira] [Commented] (HDFS-13802) RBF: Remove FSCK from Router Web UI, because fsck is not supported currently

2018-08-22 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589673#comment-16589673
 ] 

Fei Hui commented on HDFS-13802:


[~elgoiri] [~linyiqun] Are you sure we should support FSCK in the Web UI? In a 
cluster with thousands of nodes, fsck will take a long time (an hour or more), 
so users may not be able to wait for the fsck result in the Web UI. Users often 
run fsck in the background.

> RBF: Remove FSCK from Router Web UI, because fsck is not supported currently
> 
>
> Key: HDFS-13802
> URL: https://issues.apache.org/jira/browse/HDFS-13802
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.1, 3.0.3
>Reporter: Fei Hui
>Priority: Major
> Attachments: HDFS-13802.001.patch, HDFS-13802.002.patch
>
>
> When I click FSCK under Utilities on the Router Web UI, I get errors:
> {quote}
> HTTP ERROR 404
> Problem accessing /fsck. Reason:
> NOT_FOUND
> Powered by Jetty://
> {quote}
> I dug into the source code and found that fsck is not supported currently, so 
> I think we should remove FSCK from the Router Web UI.






[jira] [Commented] (HDFS-13695) Move logging to slf4j in HDFS package

2018-08-22 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589660#comment-16589660
 ] 

genericqa commented on HDFS-13695:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 204 new or modified 
test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 52s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 42 new + 428 unchanged 
- 102 fixed = 470 total (was 530) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 43s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 179 new + 6455 unchanged - 69 fixed = 6634 total (was 6524) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 6 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}113m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}166m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13695 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936724/HDFS-13695.v6.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4d439f5243d4 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / af4b705 

[jira] [Updated] (HDDS-227) Use Grpc as the default transport protocol for Standalone pipeline

2018-08-22 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-227:
-
Status: Patch Available  (was: Open)

> Use Grpc as the default transport protocol for Standalone pipeline
> --
>
> Key: HDDS-227
> URL: https://issues.apache.org/jira/browse/HDDS-227
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Mukul Kumar Singh
>Assignee: chencan
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-227.001.patch, HDDS-227.002.patch
>
>
> Using a config, the Standalone pipeline can currently choose between gRPC- 
> and Netty-based transport protocols; this jira proposes to use only gRPC as 
> the transport protocol.






[jira] [Updated] (HDDS-227) Use Grpc as the default transport protocol for Standalone pipeline

2018-08-22 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-227:
-
Status: Open  (was: Patch Available)

> Use Grpc as the default transport protocol for Standalone pipeline
> --
>
> Key: HDDS-227
> URL: https://issues.apache.org/jira/browse/HDDS-227
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Mukul Kumar Singh
>Assignee: chencan
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-227.001.patch, HDDS-227.002.patch
>
>
> Using a config, the Standalone pipeline can currently choose between gRPC- 
> and Netty-based transport protocols; this jira proposes to use only gRPC as 
> the transport protocol.






[jira] [Commented] (HDDS-227) Use Grpc as the default transport protocol for Standalone pipeline

2018-08-22 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589653#comment-16589653
 ] 

genericqa commented on HDDS-227:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 48s{color} | {color:orange} root: The patch generated 9 new + 5 unchanged - 
0 fixed = 14 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 53s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
7s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  5m 46s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}127m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.freon.TestDataValidate |
|   | 

[jira] [Assigned] (HDFS-13768) Adding replicas to volume map makes DataNode start slowly

2018-08-22 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar reassigned HDFS-13768:


Assignee: Ranith Sardar

>  Adding replicas to volume map makes DataNode start slowly 
> ---
>
> Key: HDFS-13768
> URL: https://issues.apache.org/jira/browse/HDFS-13768
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Assignee: Ranith Sardar
>Priority: Major
>
> We find the DN starting very slowly when rolling-upgrading our cluster. When 
> we restart DNs, they start very slowly and do not register with the NN 
> immediately, and this causes a lot of the following errors:
> {noformat}
> DataXceiver error processing WRITE_BLOCK operation  src: /xx.xx.xx.xx:64360 
> dst: /xx.xx.xx.xx:50010
> java.io.IOException: Not ready to serve the block pool, 
> BP-1508644862-xx.xx.xx.xx-1493781183457.
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAndWaitForBP(DataXceiver.java:1290)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAccess(DataXceiver.java:1298)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:630)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Looking into the logic of DN startup, it does the initial block pool 
> operation before the registration. During block pool initialization, we found 
> that adding replicas to the volume map is the most expensive operation. 
> Related log:
> {noformat}
> 2018-07-26 10:46:23,771 INFO [Thread-105] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/1/dfs/dn/current: 242722ms
> 2018-07-26 10:46:26,231 INFO [Thread-109] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/5/dfs/dn/current: 245182ms
> 2018-07-26 10:46:32,146 INFO [Thread-112] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/8/dfs/dn/current: 251097ms
> 2018-07-26 10:47:08,283 INFO [Thread-106] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/2/dfs/dn/current: 287235ms
> {noformat}
> Currently the DN uses an independent thread to scan and add replicas for each 
> volume, but we still need to wait for the slowest thread to finish its work. 
> So the main problem here is how to make these threads run faster.
> The jstack we get when DN blocking in the adding replica:
> {noformat}
> "Thread-113" #419 daemon prio=5 os_prio=0 tid=0x7f40879ff000 nid=0x145da 
> runnable [0x7f4043a38000]
>java.lang.Thread.State: RUNNABLE
>   at java.io.UnixFileSystem.list(Native Method)
>   at java.io.File.list(File.java:1122)
>   at java.io.File.listFiles(File.java:1207)
>   at org.apache.hadoop.fs.FileUtil.listFiles(FileUtil.java:1165)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:445)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.getVolumeMap(BlockPoolSlice.java:342)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:864)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$1.run(FsVolumeList.java:191)
> {noformat}
> One improvement may be to use a ForkJoinPool for this recursive task, rather 
> than doing it synchronously. This would be a great improvement because it can 
> greatly speed up the recovery process.
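
A minimal sketch of the ForkJoinPool idea, assuming the per-volume scan is 
essentially a recursive directory walk; the task granularity and names below 
are assumptions for illustration, not the eventual HDFS patch.
{code:java}
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

// Illustrative only: walk a block-pool directory tree with a ForkJoinPool so
// subdirectories are scanned in parallel instead of by one synchronous
// recursion per volume.
public class ReplicaScanSketch extends RecursiveAction {

  private final File dir;
  private final Map<String, Long> replicaMap;

  ReplicaScanSketch(File dir, Map<String, Long> replicaMap) {
    this.dir = dir;
    this.replicaMap = replicaMap;
  }

  @Override
  protected void compute() {
    File[] children = dir.listFiles();
    if (children == null) {
      return;
    }
    List<ReplicaScanSketch> subTasks = new ArrayList<>();
    for (File child : children) {
      if (child.isDirectory()) {
        ReplicaScanSketch sub = new ReplicaScanSketch(child, replicaMap);
        sub.fork();                 // scan the subdirectory in parallel
        subTasks.add(sub);
      } else {
        replicaMap.put(child.getName(), child.length());
      }
    }
    for (ReplicaScanSketch sub : subTasks) {
      sub.join();                   // wait for the forked subtasks
    }
  }

  public static void main(String[] args) {
    Map<String, Long> map = new ConcurrentHashMap<>();
    new ForkJoinPool().invoke(new ReplicaScanSketch(new File("/tmp"), map));
    System.out.println("Scanned " + map.size() + " files");
  }
}
{code}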






[jira] [Commented] (HDFS-13844) Refactor the fmt_bytes function in the dfs-dust.js.

2018-08-22 Thread yanghuafeng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589641#comment-16589641
 ] 

yanghuafeng commented on HDFS-13844:


Could you review the code? [~elgoiri] 

> Refactor the fmt_bytes function in the dfs-dust.js.
> ---
>
> Key: HDFS-13844
> URL: https://issues.apache.org/jira/browse/HDFS-13844
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, ui
>Affects Versions: 1.2.0, 2.2.0, 2.7.2, 3.0.0, 3.1.0
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Minor
> Attachments: HDFS-13844.001.patch, overflow_undefined_unit.jpg, 
> overflow_unit.jpg, undefined_unit.jpg
>
>
> The NameNode Web UI cannot display the capacity with correct units. I have 
> found that the function fmt_bytes in dfs-dust.js is missing the EB unit, 
> which leads to an undefined unit in the UI.
> And although the ZB unit is very large, we should take unit overflow into 
> consideration. Supposing the last unit is GB, a total capacity of 8 TB should 
> be shown as 8192 GB rather than "8 undefined".
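
The real fmt_bytes lives in dfs-dust.js (JavaScript); as a language-neutral 
illustration of the two points above (complete unit list and clamping at the 
last unit), here is a hedged Java sketch of the same logic.
{code:java}
// Illustrative only: include every unit and stop dividing at the last one,
// so a value beyond the table still gets the largest known unit instead of
// "undefined".
public class FmtBytesSketch {

  private static final String[] UNITS =
      {"B", "KB", "MB", "GB", "TB", "PB", "EB", "ZB"};

  static String fmtBytes(double bytes) {
    int u = 0;
    while (bytes >= 1024 && u < UNITS.length - 1) {  // clamp at the last unit
      bytes /= 1024;
      u++;
    }
    return String.format("%.2f %s", bytes, UNITS[u]);
  }

  public static void main(String[] args) {
    System.out.println(fmtBytes(8L * 1024 * 1024 * 1024 * 1024)); // 8.00 TB
  }
}
{code}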






[jira] [Commented] (HDFS-13836) RBF: To handle the exception when the mounttable znode have null value.

2018-08-22 Thread yanghuafeng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589640#comment-16589640
 ] 

yanghuafeng commented on HDFS-13836:


In my opinion, the above code would be simpler. Making it more readable is also 
acceptable. Please review again, [~elgoiri].

> RBF: To handle the exception when the mounttable znode have null value.
> ---
>
> Key: HDFS-13836
> URL: https://issues.apache.org/jira/browse/HDFS-13836
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation, hdfs
>Affects Versions: 3.1.0
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Fix For: 2.9.0, 3.0.0, 3.1.0, 3.2.0
>
> Attachments: HDFS-13836.001.patch, HDFS-13836.002.patch, 
> HDFS-13836.003.patch
>
>
> While we were adding a mount table entry, the router server was terminated. 
> Some error messages show up in the log, as follows:
>  2018-08-20 14:18:32,404 ERROR 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreZooKeeperImpl:
>  Cannot get data for 0SLASH0testzk: null. 
> The reason is that the router server had created the znode but had not yet 
> set its data before being terminated. The method zkManager.getStringData(path, 
> stat) will throw an NPE if the znode at that path has a null value in 
> StateStoreZooKeeperImpl, which makes adding the same mount table entry again 
> and deleting the existing znode fail.
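
A hedged sketch of the null-handling idea around that read; the reader 
interface below merely stands in for zkManager.getStringData(path, stat) and 
the rest of the names are assumptions, not the HDFS-13836 patch.
{code:java}
// Illustrative only: treat a znode whose data was never set as an empty
// record instead of letting the null propagate into an NPE, so the stale
// znode can still be overwritten or deleted.
public class NullZnodeSketch {

  interface ZNodeReader {
    String getStringData(String path);
  }

  static String readRecord(ZNodeReader zk, String path) {
    String data = zk.getStringData(path);
    if (data == null || data.isEmpty()) {
      // Znode created but never populated (e.g. the Router died in between).
      System.err.println("Cannot get data for " + path + ": empty znode");
      return null;
    }
    return data;
  }

  public static void main(String[] args) {
    // A reader that always returns null simulates the half-created znode.
    System.out.println(readRecord(path -> null, "/mount-table/0SLASH0testzk"));
  }
}
{code}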






[jira] [Updated] (HDFS-13836) RBF: To handle the exception when the mounttable znode have null value.

2018-08-22 Thread yanghuafeng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yanghuafeng updated HDFS-13836:
---
Attachment: HDFS-13836.003.patch

> RBF: To handle the exception when the mounttable znode have null value.
> ---
>
> Key: HDFS-13836
> URL: https://issues.apache.org/jira/browse/HDFS-13836
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation, hdfs
>Affects Versions: 3.1.0
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Fix For: 2.9.0, 3.0.0, 3.1.0, 3.2.0
>
> Attachments: HDFS-13836.001.patch, HDFS-13836.002.patch, 
> HDFS-13836.003.patch
>
>
> While we were adding a mount table entry, the router server was terminated. 
> Some error messages show up in the log, as follows:
>  2018-08-20 14:18:32,404 ERROR 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreZooKeeperImpl:
>  Cannot get data for 0SLASH0testzk: null. 
> The reason is that the router server had created the znode but had not yet 
> set its data before being terminated. The method zkManager.getStringData(path, 
> stat) will throw an NPE if the znode at that path has a null value in 
> StateStoreZooKeeperImpl, which makes adding the same mount table entry again 
> and deleting the existing znode fail.






[jira] [Commented] (HDFS-13848) Refactor NameNode failover proxy providers

2018-08-22 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589635#comment-16589635
 ] 

genericqa commented on HDFS-13848:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  5m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 51s{color} | {color:orange} root: The patch generated 9 new + 7 unchanged - 
13 fixed = 16 total (was 20) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 14s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
41s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.util.TestDiskChecker |
|   | hadoop.util.TestReadWriteDiskValidator |
|   | hadoop.util.TestDiskCheckerWithDiskIo |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13848 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936726/HDFS-13848.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e19e123a0dee 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 

[jira] [Updated] (HDFS-13805) Journal Nodes should allow to format non-empty directories with "-force" option

2018-08-22 Thread Surendra Singh Lilhore (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-13805:
--
Attachment: HDFS-13805.004.patch

> Journal Nodes should allow to format non-empty directories with "-force" 
> option
> ---
>
> Key: HDFS-13805
> URL: https://issues.apache.org/jira/browse/HDFS-13805
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node
>Affects Versions: 3.0.0-alpha4
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-13805.001.patch, HDFS-13805.002.patch, 
> HDFS-13805.003.patch, HDFS-13805.004.patch
>
>
> HDFS-2 completely restricted re-formatting of the JournalNode, but it should 
> be allowed when the *"-force"* option is given. If users feel the force 
> option can accidentally delete data, they can disable it by configuring 
> "*dfs.reformat.disabled*".






[jira] [Created] (HDFS-13852) RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys.

2018-08-22 Thread yanghuafeng (JIRA)
yanghuafeng created HDFS-13852:
--

 Summary: RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE 
should be configured in RBFConfigKeys.
 Key: HDFS-13852
 URL: https://issues.apache.org/jira/browse/HDFS-13852
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: federation, hdfs
Affects Versions: 3.0.1, 2.9.1, 3.1.0
Reporter: yanghuafeng
Assignee: yanghuafeng


In NamenodeBeanMetrics the router invokes 'getDataNodeReport' periodically, and 
we can set dfs.federation.router.dn-report.time-out and 
dfs.federation.router.dn-report.cache-expire to avoid timeouts. But when we 
start the router, FederationMetrics also invokes the method to get node usage. 
If a timeout error happens there, we cannot adjust the timeout parameter. The 
timeout used in FederationMetrics and NamenodeBeanMetrics should be the same.
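
For illustration, a minimal sketch of sharing these settings through a single 
constants class; only the two property names come from the description above, 
while the constant names and default values are assumptions for this sketch.
{code:java}
import java.util.concurrent.TimeUnit;

// Illustrative only: one place defines the DN report timeout / cache-expire
// keys so FederationMetrics and NamenodeBeanMetrics read the same values.
public final class DnReportKeysSketch {

  public static final String DN_REPORT_TIME_OUT_KEY =
      "dfs.federation.router.dn-report.time-out";
  public static final long DN_REPORT_TIME_OUT_MS_DEFAULT =
      TimeUnit.SECONDS.toMillis(1);          // assumed default

  public static final String DN_REPORT_CACHE_EXPIRE_KEY =
      "dfs.federation.router.dn-report.cache-expire";
  public static final long DN_REPORT_CACHE_EXPIRE_MS_DEFAULT =
      TimeUnit.SECONDS.toMillis(10);         // assumed default

  private DnReportKeysSketch() {
  }

  public static void main(String[] args) {
    // Both metrics beans would resolve the timeout from the same key.
    System.out.println(DN_REPORT_TIME_OUT_KEY + " = "
        + DN_REPORT_TIME_OUT_MS_DEFAULT + " ms");
  }
}
{code}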



 






[jira] [Commented] (HDDS-356) Support ColumnFamily based RockDBStore and TableStore

2018-08-22 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589614#comment-16589614
 ] 

Hudson commented on HDDS-356:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14817 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14817/])
HDDS-356. Support ColumnFamily based RockDBStore and TableStore. (aengineer: 
rev b021249ac84abe31c9d30d73ed483bea2acdbaab)
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/TableIterator.java
* (add) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/db/TestRDBTableStore.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/RDBStore.java
* (add) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/db/package-info.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/package-info.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/RDBStoreIterator.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/RDBTable.java
* (add) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/db/TestRDBStore.java
* (add) hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStore.java
* (add) hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/Table.java


> Support ColumnFamily based RockDBStore and TableStore
> -
>
> Key: HDDS-356
> URL: https://issues.apache.org/jira/browse/HDDS-356
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-356.001.patch, HDDS-356.002.patch, 
> HDDS-356.003.patch
>
>
> This is to minimize the performance impacts of the expensive RocksDB table 
> scan problems from background services disabled by HDDS-355.






[jira] [Commented] (HDDS-317) Use new StorageSize API for reading ozone.scm.container.size.gb

2018-08-22 Thread Junjie Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589586#comment-16589586
 ] 

Junjie Chen commented on HDDS-317:
--

Thanks [~xyao] and [~anu], will update in next patch.

> Use new StorageSize API for reading ozone.scm.container.size.gb
> ---
>
> Key: HDDS-317
> URL: https://issues.apache.org/jira/browse/HDDS-317
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Junjie Chen
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-317.2.patch, HDDS-317.patch
>
>
> Container size is configured using the property 
> {{ozone.scm.container.size.gb}}. This can be renamed to 
> {{ozone.scm.container.size}}, and the new StorageSize API can be used to read 
> the value.
> The property is defined in
>  1. ozone-default.xml
>  2. ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_GB
> The default value is defined in
>  1. ozone-default.xml
>  2. {{ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_DEFAULT}}
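
As an illustration of the "size with a unit suffix" idea behind the rename 
(and explicitly not the actual org.apache.hadoop.conf.StorageSize API), here 
is a hedged sketch of parsing a value such as "5GB" into bytes.
{code:java}
// Illustrative only: turn "5GB" / "512 MB" into a byte count, the idea behind
// moving from ozone.scm.container.size.gb (a bare number of GB) to
// ozone.scm.container.size (a size with an explicit unit).
public class StorageSizeSketch {

  static long toBytes(String value) {
    String v = value.trim().toUpperCase();
    String[][] units = {{"TB", "1099511627776"}, {"GB", "1073741824"},
        {"MB", "1048576"}, {"KB", "1024"}, {"B", "1"}};
    for (String[] unit : units) {
      if (v.endsWith(unit[0])) {
        double num = Double.parseDouble(
            v.substring(0, v.length() - unit[0].length()).trim());
        return (long) (num * Long.parseLong(unit[1]));
      }
    }
    throw new IllegalArgumentException("No storage unit in: " + value);
  }

  public static void main(String[] args) {
    System.out.println(toBytes("5GB"));    // 5368709120
    System.out.println(toBytes("512 MB")); // 536870912
  }
}
{code}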






[jira] [Updated] (HDFS-13851) Remove AlignmentContext from AbstractNNFailoverProxyProvider

2018-08-22 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-13851:
---
Status: Patch Available  (was: Open)

Here is a simple patch.

> Remove AlignmentContext from AbstractNNFailoverProxyProvider
> 
>
> Key: HDFS-13851
> URL: https://issues.apache.org/jira/browse/HDFS-13851
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
> Attachments: HDFS-13851-HDFS-12943.001.patch
>
>
> {{AlignmentContext}} is now a part of {{ObserverReadProxyProvider}}, we can 
> remove it from the base class.






[jira] [Commented] (HDDS-317) Use new StorageSize API for reading ozone.scm.container.size.gb

2018-08-22 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589584#comment-16589584
 ] 

Anu Engineer commented on HDDS-317:
---

Plus, there are some minor checkstyle warnings.

 

./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/StorageSize.java:98:
 /**: First sentence should end with a period. [JavadocStyle] 
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/StorageSize.java:108:
 /**: First sentence should end with a period. [JavadocStyle] 
./hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerData.java:219:
 
StorageSize.getStorageSizeInGB(ScmConfigKeys.OZONE_SCM_CONTAINER_SIZE_DEFAULT));:
 Line is longer than 80 characters (found 88). [LineLength] 
./hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java:34:import
 org.apache.hadoop.ozone.OzoneConsts;:8: Unused import - 
org.apache.hadoop.ozone.OzoneConsts. [UnusedImports] 
./hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java:39:import
 org.apache.hadoop.ozone.OzoneConsts;:8: Unused import - 
org.apache.hadoop.ozone.OzoneConsts. [UnusedImports] 
./hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/PipelineSelector.java:42:import
 org.apache.hadoop.ozone.OzoneConsts;:8: Unused import - 
org.apache.hadoop.ozone.OzoneConsts. [UnusedImports] 
./hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/closer/TestContainerCloser.java:75:
 size = 
StorageSize.getStorageSizeInByte(configuration.get(OZONE_SCM_CONTAINER_SIZE,: 
Line is longer than 80 characters (found 87). [LineLength]

> Use new StorageSize API for reading ozone.scm.container.size.gb
> ---
>
> Key: HDDS-317
> URL: https://issues.apache.org/jira/browse/HDDS-317
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Junjie Chen
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-317.2.patch, HDDS-317.patch
>
>
> Container size is configured using the property 
> {{ozone.scm.container.size.gb}}. This can be renamed to 
> {{ozone.scm.container.size}}, and the new StorageSize API can be used to read 
> the value.
> The property is defined in
>  1. ozone-default.xml
>  2. ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_GB
> The default value is defined in
>  1. ozone-default.xml
>  2. {{ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_DEFAULT}}






[jira] [Updated] (HDFS-13851) Remove AlignmentContext from AbstractNNFailoverProxyProvider

2018-08-22 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-13851:
---
Attachment: HDFS-13851-HDFS-12943.001.patch

> Remove AlignmentContext from AbstractNNFailoverProxyProvider
> 
>
> Key: HDFS-13851
> URL: https://issues.apache.org/jira/browse/HDFS-13851
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
> Attachments: HDFS-13851-HDFS-12943.001.patch
>
>
> {{AlignmentContext}} is now a part of {{ObserverReadProxyProvider}}, so we can 
> remove it from the base class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-356) Support ColumnFamily based RockDBStore and TableStore

2018-08-22 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-356:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~xyao], [~nandakumar131], [~jnp] Thanks for the reviews. I have committed this 
patch to trunk.

> Support ColumnFamily based RockDBStore and TableStore
> -
>
> Key: HDDS-356
> URL: https://issues.apache.org/jira/browse/HDDS-356
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-356.001.patch, HDDS-356.002.patch, 
> HDDS-356.003.patch
>
>
> This is to minimize the performance impacts of the expensive RocksDB table 
> scan problems from background services disabled by HDDS-355.
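For readers who haven't used RocksDB column families before, the minimal sketch below (plain RocksDB Java API, not the {{RockDBStore}}/{{TableStore}} code from this patch; the path and family name are made up) shows one physical DB holding several logical tables, so a scan can be limited to a single family:

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.rocksdb.ColumnFamilyDescriptor;
import org.rocksdb.ColumnFamilyHandle;
import org.rocksdb.DBOptions;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class ColumnFamilyExample {
  public static void main(String[] args) throws RocksDBException {
    RocksDB.loadLibrary();
    List<ColumnFamilyDescriptor> descriptors = Arrays.asList(
        new ColumnFamilyDescriptor(RocksDB.DEFAULT_COLUMN_FAMILY),
        new ColumnFamilyDescriptor("containers".getBytes(StandardCharsets.UTF_8)));
    List<ColumnFamilyHandle> handles = new ArrayList<>();
    try (DBOptions options = new DBOptions()
             .setCreateIfMissing(true)
             .setCreateMissingColumnFamilies(true);
         RocksDB db = RocksDB.open(options, "/tmp/cf-demo", descriptors, handles)) {
      // Reads, writes and iterators are scoped to one column family, so a
      // background scan of one logical table does not touch the others.
      db.put(handles.get(1),
          "container-1".getBytes(StandardCharsets.UTF_8),
          "metadata".getBytes(StandardCharsets.UTF_8));
      // Column family handles should be closed before the DB itself.
      for (ColumnFamilyHandle handle : handles) {
        handle.close();
      }
    }
  }
}
{code}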



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-368) all tests in TestOzoneRestClient failed due to "Unparseable date"

2018-08-22 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-368:
--
Description: 
OS: Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-116-generic x86_64)

java version: 1.8.0_111

mvn: Apache Maven 3.3.9

Default locale: zh_CN, platform encoding: UTF-8

Test command: mvn test -Dtest=TestOzoneRestClient -Phdds

 
All the tests in TestOzoneRestClient failed on my local machine with exceptions 
like the ones below. Does that mean anybody with a runtime environment like mine 
can't run the Ozone REST tests now?
{noformat}
[ERROR] 
testCreateBucket(org.apache.hadoop.ozone.client.rest.TestOzoneRestClient) Time 
elapsed: 0.01 s <<< ERROR!
java.io.IOException: org.apache.hadoop.ozone.client.rest.OzoneException: 
Unparseable date: "m, 28 1970 19:23:50 GMT"
 at 
org.apache.hadoop.ozone.client.rest.RestClient.executeHttpRequest(RestClient.java:853)
 at 
org.apache.hadoop.ozone.client.rest.RestClient.createVolume(RestClient.java:252)
 at 
org.apache.hadoop.ozone.client.rest.RestClient.createVolume(RestClient.java:210)
 at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
 at com.sun.proxy.$Proxy73.createVolume(Unknown Source)
 at org.apache.hadoop.ozone.client.ObjectStore.createVolume(ObjectStore.java:66)
 at 
org.apache.hadoop.ozone.client.rest.TestOzoneRestClient.testCreateBucket(TestOzoneRestClient.java:174)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
Caused by: org.apache.hadoop.ozone.client.rest.OzoneException: Unparseable 
date: "m, 28 1970 19:23:50 GMT"
at sun.reflect.GeneratedConstructorAccessor27.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call(AnnotatedConstructor.java:119)
at 
com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createUsingDefault(StdValueInstantiator.java:270)
at 
com.fasterxml.jackson.databind.deser.std.ThrowableDeserializer.deserializeFromObject(ThrowableDeserializer.java:149)
at 
com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:159)
at 
com.fasterxml.jackson.databind.ObjectReader._bindAndClose(ObjectReader.java:1611)
at 
com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:1219)
at 
org.apache.hadoop.ozone.client.rest.OzoneException.parse(OzoneException.java:265)
... 39 more
{noformat}
or like:
{noformat}
[ERROR] Failures:
[ERROR]   TestOzoneRestClient.testDeleteKey
Expected: exception with message a string containing "Lookup key failed, error"
 but: message was "Unexpected end-of-input within/between Object entries
 at [Source: (String)"{
  "owner" : {
"name" : "hadoop"
  },
  "quota" : {
"unit" : "TB",
"size" : 1048576
  },
  "volumeName" : "f93ed82d-dff6-4b75-a1c5-6a0fef5aa6dd",
  "createdOn" : "���, 06 ��� +50611 08:28:21 GMT",
  "createdBy" "; line: 11, column: 251]"
Stacktrace was: com.fasterxml.jackson.core.io.JsonEOFException: Unexpected 
end-of-input within/between Object entries
 at [Source: (String)"{
  "owner" : {
"name" : "hadoop"
  },
  "quota" : {
"unit" : "TB",
"size" : 1048576
  },
  "volumeName" : "f93ed82d-dff6-4b75-a1c5-6a0fef5aa6dd",
  "createdOn" : "���, 06 ��� +50611 08:28:21 GMT",
  "createdBy" "; line: 11, column: 251]
at 
com.fasterxml.jackson.core.base.ParserMinimalBase._reportInvalidEOF(ParserMinimalBase.java:588)
at 
com.fasterxml.jackson.core.json.ReaderBasedJsonParser._skipColon2(ReaderBasedJsonParser.java:2214)
at 
com.fasterxml.jackson.core.json.ReaderBasedJsonParser._skipColon(ReaderBasedJsonParser.java:2129)
at 
com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextFieldName(ReaderBasedJsonParser.java:910)
at 
com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:295)
at 
com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151)
at 
com.fasterxml.jackson.databind.ObjectReader._bindAndClose(ObjectReader.java:1611)
at 
com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:1219)
 

[jira] [Commented] (HDDS-356) Support ColumnFamily based RockDBStore and TableStore

2018-08-22 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589564#comment-16589564
 ] 

genericqa commented on HDDS-356:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 27m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
50s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-356 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936719/HDDS-356.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 64b7fdff5b40 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / af4b705 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDDS-Build/814/artifact/out/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/814/testReport/ |
| Max. process+thread count | 395 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common U: hadoop-hdds/common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/814/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Support ColumnFamily based RockDBStore 

[jira] [Updated] (HDDS-368) all tests in TestOzoneRestClient failed due to "Unparseable date"

2018-08-22 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-368:
--
Description: 
OS: Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-116-generic x86_64)

java version: 1.8.0_111

mvn: Apache Maven 3.3.9

Default locale: zh_CN, platform encoding: UTF-8

Test command: mvn test -Dtest=TestOzoneRestClient -Phdds

 
All the tests in TestOzoneRestClient failed on my local machine with exceptions 
like:
{noformat}
[ERROR] 
testCreateBucket(org.apache.hadoop.ozone.client.rest.TestOzoneRestClient) Time 
elapsed: 0.01 s <<< ERROR!
java.io.IOException: org.apache.hadoop.ozone.client.rest.OzoneException: 
Unparseable date: "m, 28 1970 19:23:50 GMT"
 at 
org.apache.hadoop.ozone.client.rest.RestClient.executeHttpRequest(RestClient.java:853)
 at 
org.apache.hadoop.ozone.client.rest.RestClient.createVolume(RestClient.java:252)
 at 
org.apache.hadoop.ozone.client.rest.RestClient.createVolume(RestClient.java:210)
 at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
 at com.sun.proxy.$Proxy73.createVolume(Unknown Source)
 at org.apache.hadoop.ozone.client.ObjectStore.createVolume(ObjectStore.java:66)
 at 
org.apache.hadoop.ozone.client.rest.TestOzoneRestClient.testCreateBucket(TestOzoneRestClient.java:174)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
Caused by: org.apache.hadoop.ozone.client.rest.OzoneException: Unparseable 
date: "m, 28 1970 19:23:50 GMT"
at sun.reflect.GeneratedConstructorAccessor27.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call(AnnotatedConstructor.java:119)
at 
com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createUsingDefault(StdValueInstantiator.java:270)
at 
com.fasterxml.jackson.databind.deser.std.ThrowableDeserializer.deserializeFromObject(ThrowableDeserializer.java:149)
at 
com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:159)
at 
com.fasterxml.jackson.databind.ObjectReader._bindAndClose(ObjectReader.java:1611)
at 
com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:1219)
at 
org.apache.hadoop.ozone.client.rest.OzoneException.parse(OzoneException.java:265)
... 39 more
{noformat}
or like:
{noformat}
[ERROR] Failures:
[ERROR]   TestOzoneRestClient.testDeleteKey
Expected: exception with message a string containing "Lookup key failed, error"
 but: message was "Unexpected end-of-input within/between Object entries
 at [Source: (String)"{
  "owner" : {
"name" : "hadoop"
  },
  "quota" : {
"unit" : "TB",
"size" : 1048576
  },
  "volumeName" : "f93ed82d-dff6-4b75-a1c5-6a0fef5aa6dd",
  "createdOn" : "���, 06 ��� +50611 08:28:21 GMT",
  "createdBy" "; line: 11, column: 251]"
Stacktrace was: com.fasterxml.jackson.core.io.JsonEOFException: Unexpected 
end-of-input within/between Object entries
 at [Source: (String)"{
  "owner" : {
"name" : "hadoop"
  },
  "quota" : {
"unit" : "TB",
"size" : 1048576
  },
  "volumeName" : "f93ed82d-dff6-4b75-a1c5-6a0fef5aa6dd",
  "createdOn" : "���, 06 ��� +50611 08:28:21 GMT",
  "createdBy" "; line: 11, column: 251]
at 
com.fasterxml.jackson.core.base.ParserMinimalBase._reportInvalidEOF(ParserMinimalBase.java:588)
at 
com.fasterxml.jackson.core.json.ReaderBasedJsonParser._skipColon2(ReaderBasedJsonParser.java:2214)
at 
com.fasterxml.jackson.core.json.ReaderBasedJsonParser._skipColon(ReaderBasedJsonParser.java:2129)
at 
com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextFieldName(ReaderBasedJsonParser.java:910)
at 
com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:295)
at 
com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151)
at 
com.fasterxml.jackson.databind.ObjectReader._bindAndClose(ObjectReader.java:1611)
at 
com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:1219)
at 
org.apache.hadoop.ozone.client.rest.response.VolumeInfo.parse(VolumeInfo.java:178)
at 

[jira] [Commented] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2018-08-22 Thread Rajesh Chandramohan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589557#comment-16589557
 ] 

Rajesh Chandramohan commented on HDFS-13596:


While upgrading from Hadoop 2.7.3 to Hadoop 3.1.0:

The JournalNodes are all running Hadoop 3.x.

During the rolling upgrade, one of the NNs loaded the fsimage and edit logs, but 
then it got stuck on block reports without any errors. All DataNodes are still on 
Hadoop 2.7, and block reporting doesn't progress. Has anybody else faced this?

++
{code:java}
The reported blocks 0 needs additional 204812  blocks to reach the threshold 
1. of total blocks 204812. The number of live datanodes 11 has reached the 
minimum number 0. {code}
++

 

DN Logs 
{code:java}
2018-08-22 16:11:44,748 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
DatanodeCommand action : DNA_REGISTER from -nn.node.com./X.X:8030 with standby 
state
2018-08-22 16:11:44,759 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Reported NameNode version '3.1.0' does not match DataNode version '2.7.1-Prod' 
but is within acceptable limits. Note: This is normal during a rolling upgrade.
2018-08-22 16:11:44,759 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Block pool BP-1018191021-10.115.22.28-1436474857708 (Datanode Uuid 
3057a76f-b274-492c-a774-df767a260f09) service to nn.node.com/XX.XX:8030 
beginning handshake with NN
2018-08-22 16:11:44,788 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Block pool Block pool BP-1018191021-10.115.22.28-1436474857708 (Datanode Uuid 
3057a76f-b274-492c-a774-df767a260f09) service to nn.node.com/XX.XX:8030 
successfully registered with NN{code}

> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Zsolt Venczel
>Priority: Blocker
>
> After rollingUpgrade NN from 2.x and 3.x, if the NN is restarted, it fails 
> while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at 

[jira] [Created] (HDFS-13851) Remove AlignmentContext from AbstractNNFailoverProxyProvider

2018-08-22 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-13851:
--

 Summary: Remove AlignmentContext from 
AbstractNNFailoverProxyProvider
 Key: HDFS-13851
 URL: https://issues.apache.org/jira/browse/HDFS-13851
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-12943
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko


{{AlignmentContext}} is now a part of {{ObserverReadProxyProvider}}, so we can 
remove it from the base class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-227) Use Grpc as the default transport protocol for Standalone pipeline

2018-08-22 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-227:
-
Status: Open  (was: Patch Available)

> Use Grpc as the default transport protocol for Standalone pipeline
> --
>
> Key: HDDS-227
> URL: https://issues.apache.org/jira/browse/HDDS-227
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Mukul Kumar Singh
>Assignee: chencan
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-227.001.patch, HDDS-227.002.patch
>
>
> Using a config, the Standalone pipeline can currently choose between a Grpc- and 
> a Netty-based transport protocol; this jira proposes to use only grpc as the 
> transport protocol.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-227) Use Grpc as the default transport protocol for Standalone pipeline

2018-08-22 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-227:
-
Status: Patch Available  (was: Open)

> Use Grpc as the default transport protocol for Standalone pipeline
> --
>
> Key: HDDS-227
> URL: https://issues.apache.org/jira/browse/HDDS-227
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Mukul Kumar Singh
>Assignee: chencan
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-227.001.patch, HDDS-227.002.patch
>
>
> Using a config, the Standalone pipeline can currently choose between a Grpc- and 
> a Netty-based transport protocol; this jira proposes to use only grpc as the 
> transport protocol.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-227) Use Grpc as the default transport protocol for Standalone pipeline

2018-08-22 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-227:
-
Attachment: HDDS-227.002.patch

> Use Grpc as the default transport protocol for Standalone pipeline
> --
>
> Key: HDDS-227
> URL: https://issues.apache.org/jira/browse/HDDS-227
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Mukul Kumar Singh
>Assignee: chencan
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-227.001.patch, HDDS-227.002.patch
>
>
> Using a config, the Standalone pipeline can currently choose between a Grpc- and 
> a Netty-based transport protocol; this jira proposes to use only grpc as the 
> transport protocol.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-368) all tests in TestOzoneRestClient failed due to "Unparseable date"

2018-08-22 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-368:
--
Affects Version/s: 0.2.1

> all tests in TestOzoneRestClient failed due to "Unparseable date"
> -
>
> Key: HDDS-368
> URL: https://issues.apache.org/jira/browse/HDDS-368
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.2.1
>Reporter: LiXin Ge
>Priority: Major
>
> OS: Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-116-generic x86_64)
> java version: 1.8.0_111
> mvn: Apache Maven 3.3.9
> Default locale: zh_CN, platform encoding: UTF-8
> Test command: mvn test -Dtest=TestOzoneRestClient -Phdds
>  
> All the tests in TestOzoneRestClient failed on my local machine with 
> exceptions like:
> {noformat}
> [ERROR] 
> testCreateBucket(org.apache.hadoop.ozone.client.rest.TestOzoneRestClient) 
> Time elapsed: 0.01 s <<< ERROR!
> java.io.IOException: org.apache.hadoop.ozone.client.rest.OzoneException: 
> Unparseable date: "m, 28 1970 19:23:50 GMT"
>  at 
> org.apache.hadoop.ozone.client.rest.RestClient.executeHttpRequest(RestClient.java:853)
>  at 
> org.apache.hadoop.ozone.client.rest.RestClient.createVolume(RestClient.java:252)
>  at 
> org.apache.hadoop.ozone.client.rest.RestClient.createVolume(RestClient.java:210)
>  at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
>  at com.sun.proxy.$Proxy73.createVolume(Unknown Source)
>  at 
> org.apache.hadoop.ozone.client.ObjectStore.createVolume(ObjectStore.java:66)
>  at 
> org.apache.hadoop.ozone.client.rest.TestOzoneRestClient.testCreateBucket(TestOzoneRestClient.java:174)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> Caused by: org.apache.hadoop.ozone.client.rest.OzoneException: Unparseable 
> date: "m, 28 1970 19:23:50 GMT"
> at sun.reflect.GeneratedConstructorAccessor27.newInstance(Unknown 
> Source)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call(AnnotatedConstructor.java:119)
> at 
> com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createUsingDefault(StdValueInstantiator.java:270)
> at 
> com.fasterxml.jackson.databind.deser.std.ThrowableDeserializer.deserializeFromObject(ThrowableDeserializer.java:149)
> at 
> com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:159)
> at 
> com.fasterxml.jackson.databind.ObjectReader._bindAndClose(ObjectReader.java:1611)
> at 
> com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:1219)
> at 
> org.apache.hadoop.ozone.client.rest.OzoneException.parse(OzoneException.java:265)
> ... 39 more
> {noformat}
> I have tried to change the {{Locale}} of {{SimpleDateFormat}} to 
> {{Locale.CHINESE}} or {{Locale.SIMPLIFIED_CHINESE}}, but it didn't work.
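One guess at the root cause, given the mangled day/month names above: a date formatter built on the platform default locale (zh_CN here) emits non-English names that the RFC 1123 parser on the other side cannot handle. Pinning the locale on both sides is the usual fix; a minimal, locale-independent sketch (illustrative only, not the Ozone code) follows:

{code:java}
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class HttpDateExample {
  // RFC 1123 dates use English names regardless of the JVM default locale.
  private static SimpleDateFormat rfc1123Format() {
    SimpleDateFormat format =
        new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss zzz", Locale.US);
    format.setTimeZone(TimeZone.getTimeZone("GMT"));
    return format;
  }

  public static void main(String[] args) throws ParseException {
    String header = rfc1123Format().format(new Date());
    System.out.println(header);                      // e.g. Wed, 22 Aug 2018 16:11:44 GMT
    Date roundTrip = rfc1123Format().parse(header);  // parses under any default locale
    System.out.println(roundTrip.getTime());
  }
}
{code}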



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13849) Migrate logging to slf4j in hadoop-hdfs-httpfs, hadoop-hdfs-nfs, hadoop-hdfs-rbf, hadoop-hdfs-native-client

2018-08-22 Thread Ian Pickering (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589529#comment-16589529
 ] 

Ian Pickering commented on HDFS-13849:
--

cleanupWithLogger is the version of the API that takes an SLF4J logger instead 
of a commons-logging logger.
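For context, a minimal sketch of what a migrated call site looks like (illustrative only, not taken from the patch):

{code:java}
import java.io.Closeable;
import org.apache.hadoop.io.IOUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CleanupExample {
  // SLF4J logger, as used after the migration.
  private static final Logger LOG = LoggerFactory.getLogger(CleanupExample.class);

  static void closeQuietly(Closeable... resources) {
    // The old IOUtils.cleanup overload takes a commons-logging Log;
    // cleanupWithLogger accepts an org.slf4j.Logger instead.
    IOUtils.cleanupWithLogger(LOG, resources);
  }
}
{code}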

> Migrate logging to slf4j in hadoop-hdfs-httpfs, hadoop-hdfs-nfs, 
> hadoop-hdfs-rbf, hadoop-hdfs-native-client
> ---
>
> Key: HDFS-13849
> URL: https://issues.apache.org/jira/browse/HDFS-13849
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ian Pickering
>Assignee: Ian Pickering
>Priority: Minor
> Attachments: HDFS-13849.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-356) Support ColumnFamily based RockDBStore and TableStore

2018-08-22 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589522#comment-16589522
 ] 

Xiaoyu Yao commented on HDDS-356:
-

Thanks [~anu] for the update. +1 for v3 patch pending Jenkins.

> Support ColumnFamily based RockDBStore and TableStore
> -
>
> Key: HDDS-356
> URL: https://issues.apache.org/jira/browse/HDDS-356
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-356.001.patch, HDDS-356.002.patch, 
> HDDS-356.003.patch
>
>
> This is to minimize the performance impacts of the expensive RocksDB table 
> scan problems from background services disabled by HDDS-355.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13848) Refactor NameNode failover proxy providers

2018-08-22 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-13848:
---
Status: Patch Available  (was: Open)

The patch:
* unifies and moves common logic from {{Configured-}} and 
{{IP-FailoverProxyProviders}} into {{AbstractNNFailoverProxyProvider}}
* removes the inner class {{AddressRpcProxyPair}}, which was specific to CFPP, 
and replaces it with {{ProxyInfo}}
* adds an address field to {{ProxyInfo}} (a simplified sketch follows below)
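
A simplified sketch of that direction (names and fields here are assumptions, not the actual patch):

{code:java}
import java.net.InetSocketAddress;

// Simplified stand-in for a shared proxy-tracking type: once ProxyInfo also
// carries the NameNode address, both ConfiguredFailoverProxyProvider and
// IPFailoverProxyProvider can manage their proxies through the same class.
class SketchProxyInfo<T> {
  T proxy;                         // the RPC proxy, created lazily
  final String proxyInfo;          // human-readable description for logs
  final InetSocketAddress address; // NameNode address this proxy points at

  SketchProxyInfo(InetSocketAddress address) {
    this.address = address;
    this.proxyInfo = address.toString();
  }
}
{code}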

> Refactor NameNode failover proxy providers
> --
>
> Key: HDFS-13848
> URL: https://issues.apache.org/jira/browse/HDFS-13848
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, hdfs-client
>Affects Versions: 2.7.5
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
> Attachments: HDFS-13848.patch
>
>
> Looking at NN failover proxy providers in the context of HDFS-13782 I noticed 
> that {{ConfiguredFailoverProxyProvider}} and {{IPFailoverProxyProvider}} have 
> a lot of common logic. We can move this common logic into 
> {{AbstractNNFailoverProxyProvider}}, which simplifies things a lot.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13849) Migrate logging to slf4j in hadoop-hdfs-httpfs, hadoop-hdfs-nfs, hadoop-hdfs-rbf, hadoop-hdfs-native-client

2018-08-22 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589517#comment-16589517
 ] 

Giovanni Matteo Fumarola commented on HDFS-13849:
-

Thanks [~iapicker].
Why did you replace {{cleanup}} with {{cleanupWithLogger}}?

> Migrate logging to slf4j in hadoop-hdfs-httpfs, hadoop-hdfs-nfs, 
> hadoop-hdfs-rbf, hadoop-hdfs-native-client
> ---
>
> Key: HDFS-13849
> URL: https://issues.apache.org/jira/browse/HDFS-13849
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ian Pickering
>Assignee: Ian Pickering
>Priority: Minor
> Attachments: HDFS-13849.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13848) Refactor NameNode failover proxy providers

2018-08-22 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-13848:
---
Attachment: HDFS-13848.patch

> Refactor NameNode failover proxy providers
> --
>
> Key: HDFS-13848
> URL: https://issues.apache.org/jira/browse/HDFS-13848
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, hdfs-client
>Affects Versions: 2.7.5
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
> Attachments: HDFS-13848.patch
>
>
> Looking at NN failover proxy providers in the context of HDFS-13782 I noticed 
> that {{ConfiguredFailoverProxyProvider}} and {{IPFailoverProxyProvider}} have 
> a lot of common logic. We can move this common logic into 
> {{AbstractNNFailoverProxyProvider}}, which simplifies things a lot.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13850) Migrate logging to slf4j in hadoop-hdfs-client

2018-08-22 Thread Ian Pickering (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Pickering updated HDFS-13850:
-
Attachment: HDFS-13850.v1.patch

> Migrate logging to slf4j in hadoop-hdfs-client
> --
>
> Key: HDFS-13850
> URL: https://issues.apache.org/jira/browse/HDFS-13850
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ian Pickering
>Assignee: Ian Pickering
>Priority: Minor
> Attachments: HDFS-13850.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13695) Move logging to slf4j in HDFS package

2018-08-22 Thread Ian Pickering (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Pickering updated HDFS-13695:
-
Attachment: HDFS-13695.v6.patch

> Move logging to slf4j in HDFS package
> -
>
> Key: HDFS-13695
> URL: https://issues.apache.org/jira/browse/HDFS-13695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Ian Pickering
>Priority: Major
> Attachments: HDFS-13695.v1.patch, HDFS-13695.v2.patch, 
> HDFS-13695.v3.patch, HDFS-13695.v4.patch, HDFS-13695.v5.patch, 
> HDFS-13695.v6.patch
>
>
> Move logging to slf4j in HDFS package



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13850) Migrate logging to slf4j in hadoop-hdfs-client

2018-08-22 Thread Ian Pickering (JIRA)
Ian Pickering created HDFS-13850:


 Summary: Migrate logging to slf4j in hadoop-hdfs-client
 Key: HDFS-13850
 URL: https://issues.apache.org/jira/browse/HDFS-13850
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ian Pickering
Assignee: Ian Pickering






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13849) Migrate logging to slf4j in hadoop-hdfs-httpfs, hadoop-hdfs-nfs, hadoop-hdfs-rbf, hadoop-hdfs-native-client

2018-08-22 Thread Ian Pickering (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Pickering updated HDFS-13849:
-
Attachment: HDFS-13849.v1.patch

> Migrate logging to slf4j in hadoop-hdfs-httpfs, hadoop-hdfs-nfs, 
> hadoop-hdfs-rbf, hadoop-hdfs-native-client
> ---
>
> Key: HDFS-13849
> URL: https://issues.apache.org/jira/browse/HDFS-13849
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ian Pickering
>Assignee: Ian Pickering
>Priority: Minor
> Attachments: HDFS-13849.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13849) Migrate logging to slf4j in hadoop-hdfs-httpfs, hadoop-hdfs-nfs, hadoop-hdfs-rbf, hadoop-hdfs-native-client

2018-08-22 Thread Ian Pickering (JIRA)
Ian Pickering created HDFS-13849:


 Summary: Migrate logging to slf4j in hadoop-hdfs-httpfs, 
hadoop-hdfs-nfs, hadoop-hdfs-rbf, hadoop-hdfs-native-client
 Key: HDFS-13849
 URL: https://issues.apache.org/jira/browse/HDFS-13849
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ian Pickering
Assignee: Ian Pickering






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13848) Refactor NameNode failover proxy providers

2018-08-22 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-13848:
--

 Summary: Refactor NameNode failover proxy providers
 Key: HDFS-13848
 URL: https://issues.apache.org/jira/browse/HDFS-13848
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, hdfs-client
Affects Versions: 2.7.5
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko


Looking at NN failover proxy providers in the context of HDFS-13782 I noticed 
that {{ConfiguredFailoverProxyProvider}} and {{IPFailoverProxyProvider}} have a 
lot of common logic. We can move this common logic into 
{{AbstractNNFailoverProxyProvider}}, which simplifies things a lot.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13634) RBF: Configurable value in xml for async connection request queue size.

2018-08-22 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589499#comment-16589499
 ] 

Íñigo Goiri commented on HDFS-13634:


+1 on  [^HDFS-13634.3.patch].

> RBF: Configurable value in xml for async connection request queue size.
> ---
>
> Key: HDFS-13634
> URL: https://issues.apache.org/jira/browse/HDFS-13634
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13634.0.patch, HDFS-13634.1.patch, 
> HDFS-13634.2.patch, HDFS-13634.3.patch
>
>
> The constant below in ConnectionManager.java should be configurable via 
> hdfs-site.xml. This is a very critical parameter for routers; admins would like 
> to change it without doing a new build.
> {code:java}
>   /** Number of parallel new connections to create. */
>   protected static final int MAX_NEW_CONNECTIONS = 100;
> {code}
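As an illustration of what making this configurable means (the key name and constant names below are hypothetical; the real ones are defined by the patch):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ConnectionQueueConfigExample {
  // Hypothetical key and default, for illustration only.
  static final String ROUTER_MAX_NEW_CONNECTIONS_KEY =
      "dfs.federation.router.connection.creation.parallelism";
  static final int ROUTER_MAX_NEW_CONNECTIONS_DEFAULT = 100;

  public static void main(String[] args) {
    // Admins can now override the value in hdfs-site.xml instead of rebuilding.
    Configuration conf = new Configuration();
    int maxNewConnections = conf.getInt(ROUTER_MAX_NEW_CONNECTIONS_KEY,
        ROUTER_MAX_NEW_CONNECTIONS_DEFAULT);
    System.out.println("Parallel new connections: " + maxNewConnections);
  }
}
{code}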



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command

2018-08-22 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589486#comment-16589486
 ] 

genericqa commented on HDDS-325:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m  
3s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 45s{color} | {color:orange} root: The patch generated 4 new + 22 unchanged - 
2 fixed = 26 total (was 24) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
47s{color} | {color:green} framework in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
3s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
35s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m 55s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
53s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}129m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 

[jira] [Commented] (HDDS-356) Support ColumnFamily based RockDBStore and TableStore

2018-08-22 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589485#comment-16589485
 ] 

Anu Engineer commented on HDDS-356:
---

[~xyao] Thanks for the comments. Patch v3 addresses all the comments. Thanks 
for catching the statically loaded issue.

 

> Support ColumnFamily based RockDBStore and TableStore
> -
>
> Key: HDDS-356
> URL: https://issues.apache.org/jira/browse/HDDS-356
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-356.001.patch, HDDS-356.002.patch, 
> HDDS-356.003.patch
>
>
> This is to minimize the performance impacts of the expensive RocksDB table 
> scan problems from background services disabled by HDDS-355.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-356) Support ColumnFamily based RockDBStore and TableStore

2018-08-22 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-356:
--
Attachment: HDDS-356.003.patch

> Support ColumnFamily based RockDBStore and TableStore
> -
>
> Key: HDDS-356
> URL: https://issues.apache.org/jira/browse/HDDS-356
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-356.001.patch, HDDS-356.002.patch, 
> HDDS-356.003.patch
>
>
> This is to minimize the performance impacts of the expensive RocksDB table 
> scan problems from background services disabled by HDDS-355.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13634) RBF: Configurable value in xml for async connection request queue size.

2018-08-22 Thread Ekanth Sethuramalingam (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589448#comment-16589448
 ] 

Ekanth Sethuramalingam commented on HDFS-13634:
---

The new patch [^HDFS-13634.3.patch] looks good to me. +1.

> RBF: Configurable value in xml for async connection request queue size.
> ---
>
> Key: HDFS-13634
> URL: https://issues.apache.org/jira/browse/HDFS-13634
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13634.0.patch, HDFS-13634.1.patch, 
> HDFS-13634.2.patch, HDFS-13634.3.patch
>
>
> The constant below in ConnectionManager.java should be configurable via 
> hdfs-site.xml. This is a very critical parameter for routers; admins would like 
> to change it without doing a new build.
> {code:java}
>   /** Number of parallel new connections to create. */
>   protected static final int MAX_NEW_CONNECTIONS = 100;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13634) RBF: Configurable value in xml for async connection request queue size.

2018-08-22 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589442#comment-16589442
 ] 

genericqa commented on HDFS-13634:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
20s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13634 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936701/HDFS-13634.3.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 86170f79ad44 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / af4b705 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24839/testReport/ |
| Max. process+thread count | 1025 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24839/console 

[jira] [Commented] (HDFS-11520) libhdfs++: Support cancellation of individual RPC calls in C++ API

2018-08-22 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589431#comment-16589431
 ] 

genericqa commented on HDFS-11520:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
36m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  5m 48s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed CTEST tests | test_test_libhdfs_threaded_hdfs_static |
|   | test_libhdfs_threaded_hdfspp_test_shim_static |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-11520 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936697/HDFS-11520.007.patch |
| Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 927fe62647a6 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4c25f37 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| CTEST | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24840/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24840/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24840/testReport/ |
| Max. process+thread count | 344 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24840/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Support cancellation of individual RPC calls in C++ API
> --
>
> Key: HDFS-11520
> URL: https://issues.apache.org/jira/browse/HDFS-11520
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>   

[jira] [Commented] (HDDS-342) Add example byteman script to print out hadoop rpc traffic

2018-08-22 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589409#comment-16589409
 ] 

Hudson commented on HDDS-342:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14816 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14816/])
HDDS-342. Add example byteman script to print out hadoop rpc traffic. 
(aengineer: rev af4b705b5f73b177be24292d8dda3a150aa12596)
* (add) dev-support/byteman/hadooprpc.btm
* (edit) hadoop-dist/src/main/compose/ozone/docker-config
* (add) dev-support/byteman/README.md


> Add example byteman script to print out hadoop rpc traffic
> --
>
> Key: HDDS-342
> URL: https://issues.apache.org/jira/browse/HDDS-342
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-342.001.patch, HDDS-342.002.patch, byteman.png, 
> byteman2.png
>
>
> HADOOP-15656 adds byteman support to the hadoop-runner base image. Byteman is 
> a simple tool for defining Java instrumentation. For example, it is very easy to 
> print out the incoming and outgoing Hadoop RPC messages or fsimage edits.
> In this patch I add one more line to the standard docker-compose cluster to 
> demonstrate this capability (printing out RPC calls). By default it is turned off.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-356) Support ColumnFamily based RockDBStore and TableStore

2018-08-22 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589397#comment-16589397
 ] 

Xiaoyu Yao commented on HDDS-356:
-

Thanks [~anu] for the update. The patch v2 looks excellent to me. Just a few more 
comments:

 

*RDBStore.java*

Line 68: NIT: this is not needed, as RocksDB statically loads the native 
library when the class is loaded.

 

Line 107: NIT: we can use toIOException() here.

 

Line 142: NIT: the local variable handles can be folded into the for loop.

 

 

*RDBTable.java*

Line 138: this iterate() method seems very similar to the standard 
Iterator#forEachRemaining.

Can we instead override forEachRemaining in RDBStoreIterator? A rough sketch of 
that follows below.
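For illustration, a minimal sketch of what such an override could look like. This is a sketch only; it assumes RDBStoreIterator iterates over raw key/value byte[] pairs, which may not match the element type used in the actual patch:

{code:java}
import java.util.Iterator;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch only: the class shape and element type are assumptions.
abstract class RDBStoreIteratorSketch
    implements Iterator<Map.Entry<byte[], byte[]>> {

  @Override
  public void forEachRemaining(
      Consumer<? super Map.Entry<byte[], byte[]>> action) {
    // Equivalent to the proposed iterate() helper, expressed through the
    // standard Iterator contract.
    while (hasNext()) {
      action.accept(next());
    }
  }
}
{code}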

 

> Support ColumnFamily based RockDBStore and TableStore
> -
>
> Key: HDDS-356
> URL: https://issues.apache.org/jira/browse/HDDS-356
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-356.001.patch, HDDS-356.002.patch
>
>
> This is to minimize the performance impact of the expensive RocksDB table 
> scans from the background services disabled by HDDS-355.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-342) Add example byteman script to print out hadoop rpc traffic

2018-08-22 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-342:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~elek] Thank you for the contribution. I have committed this to the trunk.

> Add example byteman script to print out hadoop rpc traffic
> --
>
> Key: HDDS-342
> URL: https://issues.apache.org/jira/browse/HDDS-342
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-342.001.patch, HDDS-342.002.patch, byteman.png, 
> byteman2.png
>
>
> HADOOP-15656 adds byteman support to the hadoop-runner base image. Byteman is 
> a simple tool for defining Java instrumentation. For example, it is very easy to 
> print out the incoming and outgoing Hadoop RPC messages or fsimage edits.
> In this patch I add one more line to the standard docker-compose cluster to 
> demonstrate this capability (printing out RPC calls). By default it is turned off.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-227) Use Grpc as the default transport protocol for Standalone pipeline

2018-08-22 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-227:
--
Fix Version/s: 0.2.1

> Use Grpc as the default transport protocol for Standalone pipeline
> --
>
> Key: HDDS-227
> URL: https://issues.apache.org/jira/browse/HDDS-227
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Mukul Kumar Singh
>Assignee: chencan
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-227.001.patch
>
>
> Using a config, the Standalone pipeline can currently choose between gRPC- and 
> Netty-based transport protocols; this jira proposes to use only gRPC as the 
> transport protocol.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13634) RBF: Configurable value in xml for async connection request queue size.

2018-08-22 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589372#comment-16589372
 ] 

Íñigo Goiri commented on HDFS-13634:


A few options for the 80-character limit error:
# Change the name of the constant.
# Do a static import.
# Ignore the checkstyle warning.

Option 3 is my least favorite, but I can live with an extra checkstyle warning.



> RBF: Configurable value in xml for async connection request queue size.
> ---
>
> Key: HDFS-13634
> URL: https://issues.apache.org/jira/browse/HDFS-13634
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13634.0.patch, HDFS-13634.1.patch, 
> HDFS-13634.2.patch, HDFS-13634.3.patch
>
>
> The constant below in ConnectionManager.java should be configurable via hdfs-site.xml. 
> This is a very critical parameter for routers; admins would like to change it 
> without doing a new build.
> {code:java}
>   /** Number of parallel new connections to create. */
>   protected static final int MAX_NEW_CONNECTIONS = 100;
> {code}
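For illustration, a minimal sketch of how such a value could be read from configuration instead of being hard-coded. The configuration key, default, and class below are hypothetical and are not taken from the attached patches:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch only: the key name and wrapper class are illustrative.
class ConnectionQueueSizeSketch {
  static final String ASYNC_QUEUE_SIZE_KEY =
      "dfs.federation.router.connection.creation.queue-size";
  static final int ASYNC_QUEUE_SIZE_DEFAULT = 100;

  private final int maxNewConnections;

  ConnectionQueueSizeSketch(Configuration conf) {
    // Read the queue size from hdfs-site.xml instead of a hard-coded constant.
    this.maxNewConnections = conf.getInt(ASYNC_QUEUE_SIZE_KEY,
        ASYNC_QUEUE_SIZE_DEFAULT);
  }

  int getMaxNewConnections() {
    return maxNewConnections;
  }
}
{code}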



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13634) RBF: Configurable value in xml for async connection request queue size.

2018-08-22 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-13634:
---
Attachment: HDFS-13634.3.patch

> RBF: Configurable value in xml for async connection request queue size.
> ---
>
> Key: HDFS-13634
> URL: https://issues.apache.org/jira/browse/HDFS-13634
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13634.0.patch, HDFS-13634.1.patch, 
> HDFS-13634.2.patch, HDFS-13634.3.patch
>
>
> The constant below in ConnectionManager.java should be configurable via hdfs-site.xml. 
> This is a very critical parameter for routers; admins would like to change it 
> without doing a new build.
> {code:java}
>   /** Number of parallel new connections to create. */
>   protected static final int MAX_NEW_CONNECTIONS = 100;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11520) libhdfs++: Support cancellation of individual RPC calls in C++ API

2018-08-22 Thread Anatoli Shein (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-11520:
-
Attachment: HDFS-11520.007.patch

> libhdfs++: Support cancellation of individual RPC calls in C++ API
> --
>
> Key: HDFS-11520
> URL: https://issues.apache.org/jira/browse/HDFS-11520
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
>Priority: Major
> Attachments: HDFS-11520.002.patch, HDFS-11520.003.patch, 
> HDFS-11520.004.patch, HDFS-11520.005.patch, HDFS-11520.007.patch, 
> HDFS-11520.HDFS-8707.000.patch, HDFS-11520.trunk.001.patch
>
>
> RPC calls done by FileSystem methods like Mkdirs, GetFileInfo etc should be 
> individually cancelable without impacting other pending RPC calls.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11520) libhdfs++: Support cancellation of individual RPC calls in C++ API

2018-08-22 Thread Anatoli Shein (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-11520:
-
Attachment: (was: HDFS-11520.006.patch)

> libhdfs++: Support cancellation of individual RPC calls in C++ API
> --
>
> Key: HDFS-11520
> URL: https://issues.apache.org/jira/browse/HDFS-11520
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
>Priority: Major
> Attachments: HDFS-11520.002.patch, HDFS-11520.003.patch, 
> HDFS-11520.004.patch, HDFS-11520.005.patch, HDFS-11520.HDFS-8707.000.patch, 
> HDFS-11520.trunk.001.patch
>
>
> RPC calls done by FileSystem methods like Mkdirs, GetFileInfo etc should be 
> individually cancelable without impacting other pending RPC calls.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11520) libhdfs++: Support cancellation of individual RPC calls in C++ API

2018-08-22 Thread Anatoli Shein (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589297#comment-16589297
 ] 

Anatoli Shein commented on HDFS-11520:
--

Thanks for the review [~James C],

In the new (checkpoint) patch I fixed the outdated/confusing comments, added 
another cancellation point (during OnSendCompleted), and added support for 
attaching multiple requests to the same cancel handle (for recursive operations). 
I am still working on adding more tests and on supporting cancellation during 
recursion/retry/failover. 

> libhdfs++: Support cancellation of individual RPC calls in C++ API
> --
>
> Key: HDFS-11520
> URL: https://issues.apache.org/jira/browse/HDFS-11520
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
>Priority: Major
> Attachments: HDFS-11520.002.patch, HDFS-11520.003.patch, 
> HDFS-11520.004.patch, HDFS-11520.005.patch, HDFS-11520.006.patch, 
> HDFS-11520.HDFS-8707.000.patch, HDFS-11520.trunk.001.patch
>
>
> RPC calls done by FileSystem methods like Mkdirs, GetFileInfo etc should be 
> individually cancelable without impacting other pending RPC calls.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11520) libhdfs++: Support cancellation of individual RPC calls in C++ API

2018-08-22 Thread Anatoli Shein (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-11520:
-
Attachment: HDFS-11520.006.patch

> libhdfs++: Support cancellation of individual RPC calls in C++ API
> --
>
> Key: HDFS-11520
> URL: https://issues.apache.org/jira/browse/HDFS-11520
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
>Priority: Major
> Attachments: HDFS-11520.002.patch, HDFS-11520.003.patch, 
> HDFS-11520.004.patch, HDFS-11520.005.patch, HDFS-11520.006.patch, 
> HDFS-11520.HDFS-8707.000.patch, HDFS-11520.trunk.001.patch
>
>
> RPC calls done by FileSystem methods like Mkdirs, GetFileInfo etc should be 
> individually cancelable without impacting other pending RPC calls.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-317) Use new StorageSize API for reading ozone.scm.container.size.gb

2018-08-22 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589222#comment-16589222
 ] 

Xiaoyu Yao edited comment on HDDS-317 at 8/22/18 6:26 PM:
--

Thanks [~junjie] for working on this. The patch v2 looks good to me. I just 
have few minor comments:

 

*Ozone-default.xml*

Line 618: some of the description below needs to be updated. 

E.g. remove "The value is specified in GB."

 

*StorageSize.java*

Line 98-117: can we use the existing conf.getStorageSize without introducing 
StorageSize._getStorageSizeInGB_  and StorageSize._getStorageSizeInByte? There 
is a overload method allowing specify target unit when reading the 
configuration. TestConfiguration#testStorageUnit has a bunch of examples that 
may simplify this patch._ 

 

For example in KeyValueHandler.java

Line 152:
{code:java}
maxContainerSizeGB = StorageSize.getStorageSizeInGB(

    config.get(ScmConfigKeys.OZONE_SCM_CONTAINER_SIZE,

    ScmConfigKeys.OZONE_SCM_CONTAINER_SIZE_DEFAULT));

{code}
 

can be replaced with
{code:java}
 

Import static* org.apache.hadoop.conf.StorageUnit._GB_;

 

maxContainerSizeGB = config.getStorageSize(

    ScmConfigKeys.OZONE_SCM_CONTAINER_SIZE_,

    ScmConfigKeys.OZONE_SCM_CONTAINER_SIZE_DEFAULT, GB));

 

{code}


was (Author: xyao):
Thanks [~junjie] for working on this. The patch v2 looks good to me. I just 
have few minor comments:

 

*Ozone-default.xml*

Line 618: some of the description below needs to be updated. 

E.g. remove "The value is specified in GB."

 

*StorageSize.java*

Line 98-117: can we use the existing conf.getStorageSize without introducing 
StorageSize._getStorageSizeInGB_  and StorageSize._getStorageSizeInByte? There 
is a overload method allowing specify target unit when reading the 
configuration. TestConfiguration#testStorageUnit has a bunch of examples that 
may simplify this patch._ 

 

For example in KeyValueHandler.java

Line 152:

{code}

*maxContainerSizeGB* = StorageSize._getStorageSizeInGB_(

    config.get(ScmConfigKeys.*_OZONE_SCM_CONTAINER_SIZE_*,

    ScmConfigKeys.*_OZONE_SCM_CONTAINER_SIZE_DEFAULT_*));

{code}

 

can be replaced with

{code}

 

*Import static* org.apache.hadoop.conf.StorageUnit.*_GB_*;

 

*maxContainerSizeGB* = config._getStorageSize_(

    ScmConfigKeys.*_OZONE_SCM_CONTAINER_SIZE_*,

    ScmConfigKeys.*_OZONE_SCM_CONTAINER_SIZE_DEFAULT, GB_*));

 

{code}

> Use new StorageSize API for reading ozone.scm.container.size.gb
> ---
>
> Key: HDDS-317
> URL: https://issues.apache.org/jira/browse/HDDS-317
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Junjie Chen
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-317.2.patch, HDDS-317.patch
>
>
> Container size is configured using the property {{ozone.scm.container.size.gb}}. 
> This can be renamed to {{ozone.scm.container.size}}, and the new StorageSize 
> API can be used to read the value.
> The property is defined in
>  1. ozone-default.xml
>  2. ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_GB
> The default value is defined in
>  1. ozone-default.xml
>  2. {{ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_DEFAULT}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-317) Use new StorageSize API for reading ozone.scm.container.size.gb

2018-08-22 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589222#comment-16589222
 ] 

Xiaoyu Yao edited comment on HDDS-317 at 8/22/18 6:26 PM:
--

Thanks [~junjie] for working on this. The patch v2 looks good to me. I just 
have a few minor comments:

 

*Ozone-default.xml*

Line 618: some of the description below needs to be updated. 

E.g. remove "The value is specified in GB."

 

*StorageSize.java*

Line 98-117: can we use the existing conf.getStorageSize without introducing 
StorageSize.getStorageSizeInGB and StorageSize.getStorageSizeInByte? There 
is an overloaded method that allows specifying the target unit when reading the 
configuration. TestConfiguration#testStorageUnit has a bunch of examples that 
may simplify this patch. 

 

For example in KeyValueHandler.java

Line 152:
{code:java}
maxContainerSizeGB = StorageSize.getStorageSizeInGB(
    config.get(ScmConfigKeys.OZONE_SCM_CONTAINER_SIZE,
    ScmConfigKeys.OZONE_SCM_CONTAINER_SIZE_DEFAULT));
{code}
 

can be replaced with
{code:java}
import static org.apache.hadoop.conf.StorageUnit.GB;

maxContainerSizeGB = config.getStorageSize(
    ScmConfigKeys.OZONE_SCM_CONTAINER_SIZE,
    ScmConfigKeys.OZONE_SCM_CONTAINER_SIZE_DEFAULT, GB);
{code}


was (Author: xyao):
Thanks [~junjie] for working on this. The patch v2 looks good to me. I just 
have few minor comments:

 

*Ozone-default.xml*

Line 618: some of the description below needs to be updated. 

E.g. remove "The value is specified in GB."

 

*StorageSize.java*

Line 98-117: can we use the existing conf.getStorageSize without introducing 
StorageSize._getStorageSizeInGB_  and StorageSize._getStorageSizeInByte? There 
is a overload method allowing specify target unit when reading the 
configuration. TestConfiguration#testStorageUnit has a bunch of examples that 
may simplify this patch._ 

 

For example in KeyValueHandler.java

Line 152:
{code:java}
maxContainerSizeGB = StorageSize.getStorageSizeInGB(

    config.get(ScmConfigKeys.OZONE_SCM_CONTAINER_SIZE,

    ScmConfigKeys.OZONE_SCM_CONTAINER_SIZE_DEFAULT));

{code}
 

can be replaced with
{code:java}
 

Import static* org.apache.hadoop.conf.StorageUnit._GB_;

 

maxContainerSizeGB = config.getStorageSize(

    ScmConfigKeys.OZONE_SCM_CONTAINER_SIZE_,

    ScmConfigKeys.OZONE_SCM_CONTAINER_SIZE_DEFAULT, GB));

 

{code}

> Use new StorageSize API for reading ozone.scm.container.size.gb
> ---
>
> Key: HDDS-317
> URL: https://issues.apache.org/jira/browse/HDDS-317
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Junjie Chen
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-317.2.patch, HDDS-317.patch
>
>
> Container size is configured using the property {{ozone.scm.container.size.gb}}. 
> This can be renamed to {{ozone.scm.container.size}}, and the new StorageSize 
> API can be used to read the value.
> The property is defined in
>  1. ozone-default.xml
>  2. ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_GB
> The default value is defined in
>  1. ozone-default.xml
>  2. {{ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_DEFAULT}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-317) Use new StorageSize API for reading ozone.scm.container.size.gb

2018-08-22 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589222#comment-16589222
 ] 

Xiaoyu Yao commented on HDDS-317:
-

Thanks [~junjie] for working on this. The patch v2 looks good to me. I just 
have few minor comments:

 

*Ozone-default.xml*

Line 618: some of the description below needs to be updated. 

E.g. remove "The value is specified in GB."

 

*StorageSize.java*

Line 98-117: can we use the existing conf.getStorageSize without introducing 
StorageSize._getStorageSizeInGB_  and StorageSize._getStorageSizeInByte? There 
is a overload method allowing specify target unit when reading the 
configuration. TestConfiguration#testStorageUnit has a bunch of examples that 
may simplify this patch._ 

 

For example in KeyValueHandler.java

Line 152:

{code}

*maxContainerSizeGB* = StorageSize._getStorageSizeInGB_(

    config.get(ScmConfigKeys.*_OZONE_SCM_CONTAINER_SIZE_*,

    ScmConfigKeys.*_OZONE_SCM_CONTAINER_SIZE_DEFAULT_*));

{code}

 

can be replaced with

{code}

 

*Import static* org.apache.hadoop.conf.StorageUnit.*_GB_*;

 

*maxContainerSizeGB* = config._getStorageSize_(

    ScmConfigKeys.*_OZONE_SCM_CONTAINER_SIZE_*,

    ScmConfigKeys.*_OZONE_SCM_CONTAINER_SIZE_DEFAULT, GB_*));

 

{code}

> Use new StorageSize API for reading ozone.scm.container.size.gb
> ---
>
> Key: HDDS-317
> URL: https://issues.apache.org/jira/browse/HDDS-317
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Junjie Chen
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-317.2.patch, HDDS-317.patch
>
>
> Container size is configured using the property {{ozone.scm.container.size.gb}}. 
> This can be renamed to {{ozone.scm.container.size}}, and the new StorageSize 
> API can be used to read the value.
> The property is defined in
>  1. ozone-default.xml
>  2. ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_GB
> The default value is defined in
>  1. ozone-default.xml
>  2. {{ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_DEFAULT}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-370) Add and implement following functions in SCMClientProtocolServer

2018-08-22 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-370:

Description: 
Add and implement following functions in SCMClientProtocolServer
# isScmInChillMode
# forceScmEnterChillMode
# forceScmExitChillMode

  was:Modify functions impacted by SCM chill mode in 
StorageContainerLocationProtocol.


> Add and implement following functions in SCMClientProtocolServer
> 
>
> Key: HDDS-370
> URL: https://issues.apache.org/jira/browse/HDDS-370
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Priority: Major
>
> Add and implement following functions in SCMClientProtocolServer
> # isScmInChillMode
> # forceScmEnterChillMode
> # forceScmExitChillMode
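For illustration only, one possible shape for these operations; the interface name, return types, and exceptions below are assumptions and are not taken from any attached patch:

{code:java}
import java.io.IOException;

// Hypothetical sketch only: the method names follow the issue description,
// while the signatures and the interface itself are assumptions.
public interface ScmChillModeOperations {
  boolean isScmInChillMode() throws IOException;
  boolean forceScmEnterChillMode() throws IOException;
  boolean forceScmExitChillMode() throws IOException;
}
{code}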



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-370) Add and implement following functions in SCMClientProtocolServer

2018-08-22 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reassigned HDDS-370:
---

Assignee: Ajay Kumar

> Add and implement following functions in SCMClientProtocolServer
> 
>
> Key: HDDS-370
> URL: https://issues.apache.org/jira/browse/HDDS-370
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
> Add and implement following functions in SCMClientProtocolServer
> # isScmInChillMode
> # forceScmEnterChillMode
> # forceScmExitChillMode



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-350) ContainerMapping#flushContainerInfo doesn't set containerId

2018-08-22 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589221#comment-16589221
 ] 

Hudson commented on HDDS-350:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14815 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14815/])
HDDS-350. ContainerMapping#flushContainerInfo doesn't set containerId. (xyao: 
rev 4c25f37c6cc4e22a006cd095d6143b549bf4a0a8)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ContainerInfo.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerMapping.java


> ContainerMapping#flushContainerInfo doesn't set containerId
> ---
>
> Key: HDDS-350
> URL: https://issues.apache.org/jira/browse/HDDS-350
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-350.00.patch
>
>
> ContainerMapping#flushContainerInfo doesn't set containerId which results in 
> containerId being null in flushed containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-370) Add and implement following functions in SCMClientProtocolServer

2018-08-22 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-370:
---

 Summary: Add and implement following functions in 
SCMClientProtocolServer
 Key: HDDS-370
 URL: https://issues.apache.org/jira/browse/HDDS-370
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Ajay Kumar


Modify functions impacted by SCM chill mode in StorageContainerLocationProtocol.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13634) RBF: Configurable value in xml for async connection request queue size.

2018-08-22 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589218#comment-16589218
 ] 

genericqa commented on HDFS-13634:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
14s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13634 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936680/HDFS-13634.2.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 47564a84b0db 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5aa15cf |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24838/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24838/testReport/ |

[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command

2018-08-22 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589207#comment-16589207
 ] 

Lokesh Jain commented on HDDS-325:
--

[~elek] I have uploaded the rebased v5 patch.

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch
>
>
> This Jira aims to add a watcher for the deleteBlocks command. It removes the 
> current RPC call required for the datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-325) Add event watcher for delete blocks command

2018-08-22 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-325:
-
Attachment: HDDS-325.005.patch

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch
>
>
> This Jira aims to add a watcher for the deleteBlocks command. It removes the 
> current RPC call required for the datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-369) Remove the containers of a dead node from the container state map

2018-08-22 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589206#comment-16589206
 ] 

genericqa commented on HDDS-369:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 41m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-hdds/server-scm: The patch generated 6 
new + 0 unchanged - 0 fixed = 6 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 27s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdds.scm.node.TestDeadNodeHandler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-369 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936675/HDDS-369.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ad7372f71117 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8184739 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/811/artifact/out/diff-checkstyle-hadoop-hdds_server-scm.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/811/artifact/out/patch-unit-hadoop-hdds_server-scm.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/811/testReport/ |
| Max. process+thread count | 324 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
| 

[jira] [Commented] (HDDS-363) Faster datanode registration during the first startup

2018-08-22 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589196#comment-16589196
 ] 

genericqa commented on HDDS-363:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
22s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-363 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936676/HDDS-363.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6462af164138 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8184739 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/812/testReport/ |
| Max. process+thread count | 336 (vs. ulimit of 1) |
| modules | C: 

[jira] [Updated] (HDDS-350) ContainerMapping#flushContainerInfo doesn't set containerId

2018-08-22 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-350:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~ajayydv] for the contribution. I've committed the patch to trunk. 

> ContainerMapping#flushContainerInfo doesn't set containerId
> ---
>
> Key: HDDS-350
> URL: https://issues.apache.org/jira/browse/HDDS-350
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-350.00.patch
>
>
> ContainerMapping#flushContainerInfo doesn't set containerId which results in 
> containerId being null in flushed containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-328) Support export and import of the KeyValueContainer

2018-08-22 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589170#comment-16589170
 ] 

Xiaoyu Yao commented on HDDS-328:
-

[~elek], thanks for the update and investigation.

The findbugsExclude change can be avoided with the following changes:

{code}

Path parent = Preconditions.checkNotNull(path.getParent(),
    "Path element should have a parent directory");
Files.createDirectories(parent);

{code}

> Support export and import of the KeyValueContainer
> --
>
> Key: HDDS-328
> URL: https://issues.apache.org/jira/browse/HDDS-328
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Critical
> Fix For: 0.2.1
>
> Attachments: HDDS-328-HDFS-10285.006.patch, HDDS-328.002.patch, 
> HDDS-328.003.patch, HDDS-328.004.patch, HDDS-328.005.patch, 
> HDDS-328.007.patch, HDDS-328.008.patch, HDDS-328.009.patch
>
>
> In HDDS-75 we pack the container data into an archive file, copy it to other 
> datanodes, and create the container from the archive.
> As I wrote in a comment on HDDS-75, I propose to split the patch to make 
> it easier to review.
> In this patch we need to extend the existing Container interface by adding 
> export/import methods that save the container data to a single binary 
> input/output stream. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-265) Move numPendingDeletionBlocks and deleteTransactionId from ContainerData to KeyValueContainerData

2018-08-22 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589153#comment-16589153
 ] 

Hudson commented on HDDS-265:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14814 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14814/])
HDDS-265. Move numPendingDeletionBlocks and deleteTransactionId from 
(hanishakoneru: rev 5aa15cfaffbf294b5025989c20d905b01da52c2b)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Container.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerSet.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerData.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/ContainerDeletionChoosingPolicy.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/TestBlockDeletingService.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestBlockDeletion.java
* (add) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/KeyValueContainerReport.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/TestKeyValueContainerData.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ContainerReport.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerData.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/RandomContainerDeletionChoosingPolicy.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/TopNOrderedContainerDeletionChoosingPolicy.java


> Move numPendingDeletionBlocks and deleteTransactionId from ContainerData to 
> KeyValueContainerData
> -
>
> Key: HDDS-265
> URL: https://issues.apache.org/jira/browse/HDDS-265
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.2.1
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-265.000.patch, HDDS-265.001.patch, 
> HDDS-265.002.patch, HDDS-265.003.patch, HDDS-265.004.patch, HDDS-265.005.patch
>
>
> "numPendingDeletionBlocks" and "deleteTransactionId" fields are specific to 
> KeyValueContainers. As such they should be moved to KeyValueContainerData 
> from ContainerData.
> ContainerReport should also be refactored to take in this change. 
> Please refer to [~ljain]'s comment in HDDS-250.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13634) RBF: Configurable value in xml for async connection request queue size.

2018-08-22 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-13634:
---
Attachment: HDFS-13634.2.patch

> RBF: Configurable value in xml for async connection request queue size.
> ---
>
> Key: HDFS-13634
> URL: https://issues.apache.org/jira/browse/HDFS-13634
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13634.0.patch, HDFS-13634.1.patch, 
> HDFS-13634.2.patch
>
>
> The constant below in ConnectionManager.java should be configurable via hdfs-site.xml. 
> This is a very critical parameter for routers; admins would like to change it 
> without doing a new build.
> {code:java}
>   /** Number of parallel new connections to create. */
>   protected static final int MAX_NEW_CONNECTIONS = 100;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-265) Move numPendingDeletionBlocks and deleteTransactionId from ContainerData to KeyValueContainerData

2018-08-22 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-265:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Move numPendingDeletionBlocks and deleteTransactionId from ContainerData to 
> KeyValueContainerData
> -
>
> Key: HDDS-265
> URL: https://issues.apache.org/jira/browse/HDDS-265
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.2.1
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-265.000.patch, HDDS-265.001.patch, 
> HDDS-265.002.patch, HDDS-265.003.patch, HDDS-265.004.patch, HDDS-265.005.patch
>
>
> "numPendingDeletionBlocks" and "deleteTransactionId" fields are specific to 
> KeyValueContainers. As such they should be moved to KeyValueContainerData 
> from ContainerData.
> ContainerReport should also be refactored to take in this change. 
> Please refer to [~ljain]'s comment in HDDS-250.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-265) Move numPendingDeletionBlocks and deleteTransactionId from ContainerData to KeyValueContainerData

2018-08-22 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589121#comment-16589121
 ] 

Hanisha Koneru commented on HDDS-265:
-

Thank you [~GeLiXin] for working on this and [~ljain] for the reviews.

I have committed this to trunk.

> Move numPendingDeletionBlocks and deleteTransactionId from ContainerData to 
> KeyValueContainerData
> -
>
> Key: HDDS-265
> URL: https://issues.apache.org/jira/browse/HDDS-265
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.2.1
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-265.000.patch, HDDS-265.001.patch, 
> HDDS-265.002.patch, HDDS-265.003.patch, HDDS-265.004.patch, HDDS-265.005.patch
>
>
> "numPendingDeletionBlocks" and "deleteTransactionId" fields are specific to 
> KeyValueContainers. As such they should be moved to KeyValueContainerData 
> from ContainerData.
> ContainerReport should also be refactored to take this change into account. 
> Please refer to [~ljain]'s comment in HDDS-250.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-364) Update open container replica information in SCM during DN register

2018-08-22 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589114#comment-16589114
 ] 

Ajay Kumar edited comment on HDDS-364 at 8/22/18 5:03 PM:
--

[~elek] thanks for review. Please see my response inline:

{quote}What do you think about Node2ContainerMap? Do we need to update it as 
well?{quote}
I think we can get rid of processReport by sending missing and new containers 
(deltas) directly from the DN. This will reduce the SCM-DN communication and 
will make the DN responsible for sending only changed state. But this can be 
done in a separate jira. 
{quote}2. As I see all the containers will be persisted twice. (First, they 
will be imported, after that they will be reconciled.). Don't think it's a big 
problem. IMHO later we need to cleanup all the processing path anyway.
One option may be just saving all the initial data to the state map without 
processing the reports (checking the required closing state, etc.). The 
downside here is some action would be delayed until the first real container 
report.{quote}
Not sure which codepath you are referring to here. This patch adds containers 
reported in the register call to replicaMap, which is in memory. Everything 
else remains the same. 

{quote} The import part (in case of isRegisterCall=true) is the first part of 
processContainerReport method. I think it would be very easy to move to a 
separated method and call it independently from 
SCMDatanodeProtocolServer.register method. Could be more simple, and maybe it 
could be easier to test. Currently (as I understood) there is no specific test 
to test the isRegisterCall=true path. But this is not a blocking problem. 
Depends from your consideration{quote}
The approach you are suggesting is close to the first attached patch. I had a 
discussion regarding this with [~xyao]. Moving it into processContainerReport 
is a small optimization to avoid a separate iteration over the whole list. For 
a DN with 24 disks of 12 TB each we can have roughly 57,600 containers of 5 GB; 
iterating through that list and adding the entries to replicaMap should be 
quick, but a large cluster with a large number of DNs may overwhelm the SCM 
during the initial registration process. Moving this inside 
processContainerReport results in only one iteration of that list. At some 
point we can refactor this along with the logic in ContainerCommandHandler and 
Node2ContainerMap#processReport.
The updated test in TestContainerMapping checks the call to 
processContainerReport with isRegisterCall=true.


was (Author: ajayydv):
[~elek] thanks for review. Please see my response inline:

{quote}What do you think about Node2ContainerMap? Do we need to update it as 
well?{quote}
I think we can get rid of processReport by sending missing and new containers 
(deltas) directly from the DN. This will reduce the SCM-DN communication and 
will make the DN responsible for sending only changed state. But this can be 
done in a separate jira. 
{quote}2. As I see all the containers will be persisted twice. (First, they 
will be imported, after that they will be reconciled.). Don't think it's a big 
problem. IMHO later we need to cleanup all the processing path anyway.
One option may be just saving all the initial data to the state map without 
processing the reports (checking the required closing state, etc.). The 
downside here is some action would be delayed until the first real container 
report.{quote}
Not sure which codepath you are referring to here. This patch adds containers 
reported in the register call to replicaMap, which is in memory. Everything 
else remains the same. 

{quote} The import part (in case of isRegisterCall=true) is the first part of 
processContainerReport method. I think it would be very easy to move to a 
separated method and call it independently from 
SCMDatanodeProtocolServer.register method. Could be more simple, and maybe it 
could be easier to test. Currently (as I understood) there is no specific test 
to test the isRegisterCall=true path. But this is not a blocking problem. 
Depends from your consideration{quote}
The approach you are suggesting is close to the first attached patch. I had a 
discussion regarding this with [~xyao]. Moving it into processContainerReport 
is a small optimization to avoid a separate iteration over the whole list. For 
a DN with 24 disks of 12 TB each we can have roughly 57,600 containers of 5 GB; 
iterating through that list and adding the entries to replicaMap should be 
quick, but a large cluster with a large number of DNs may overwhelm the SCM 
during the initial registration process. Moving this inside 
processContainerReport results in only one iteration of that list. At some 
point we can refactor this along with the logic in ContainerCommandHandler and 
Node2ContainerMap#processReport.

> Update open container replica information in SCM during DN register
> ---
>
> Key: HDDS-364
> URL: 

[jira] [Comment Edited] (HDDS-364) Update open container replica information in SCM during DN register

2018-08-22 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589114#comment-16589114
 ] 

Ajay Kumar edited comment on HDDS-364 at 8/22/18 4:56 PM:
--

[~elek] thanks for review. Please see my response inline:

{quote}What do you think about Node2ContainerMap? Do we need to update it as 
well?{quote}
I think we can get rid of processReport by sending missing and new containers 
(deltas) directly from the DN. This will reduce the SCM-DN communication and 
will make the DN responsible for sending only changed state. But this can be 
done in a separate jira. 
{quote}2. As I see all the containers will be persisted twice. (First, they 
will be imported, after that they will be reconciled.). Don't think it's a big 
problem. IMHO later we need to cleanup all the processing path anyway.
One option may be just saving all the initial data to the state map without 
processing the reports (checking the required closing state, etc.). The 
downside here is some action would be delayed until the first real container 
report.{quote}
Not sure which codepath you are referring to here. This patch adds containers 
reported in the register call to replicaMap, which is in memory. Everything 
else remains the same. 

{quote} The import part (in case of isRegisterCall=true) is the first part of 
processContainerReport method. I think it would be very easy to move to a 
separated method and call it independently from 
SCMDatanodeProtocolServer.register method. Could be more simple, and maybe it 
could be easier to test. Currently (as I understood) there is no specific test 
to test the isRegisterCall=true path. But this is not a blocking problem. 
Depends from your consideration{quote}
The approach you are suggesting is close to the first attached patch. I had a 
discussion regarding this with [~xyao]. Moving it into processContainerReport 
is a small optimization to avoid a separate iteration over the whole list. For 
a DN with 24 disks of 12 TB each we can have roughly 57,600 containers of 5 GB; 
iterating through that list and adding the entries to replicaMap should be 
quick, but a large cluster with a large number of DNs may overwhelm the SCM 
during the initial registration process. Moving this inside 
processContainerReport results in only one iteration of that list. At some 
point we can refactor this along with the logic in ContainerCommandHandler and 
Node2ContainerMap#processReport.


was (Author: ajayydv):
[~elek] thanks for review. Please see my response inline:

{quote}What do you think about Node2ContainerMap? Do we need to update it as 
well?{quote}
I think we can get rid of processReport by sending missing and new containers 
(deltas) directly from the DN. This will reduce the SCM-DN communication and 
will make the DN responsible for sending only changed state. But this can be 
done in a separate jira.
{quote}2. As I see all the containers will be persisted twice. (First, they 
will be imported, after that they will be reconciled.). Don't think it's a big 
problem. IMHO later we need to cleanup all the processing path anyway.
One option may be just saving all the initial data to the state map without 
processing the reports (checking the required closing state, etc.). The 
downside here is some action would be delayed until the first real container 
report.{quote}
Not sure which codepath you are referring to here. This patch adds containers 
reported in the register call to replicaMap, which is in memory. Everything 
else remains the same. 

{quote} The import part (in case of isRegisterCall=true) is the first part of 
processContainerReport method. I think it would be very easy to move to a 
separated method and call it independently from 
SCMDatanodeProtocolServer.register method. Could be more simple, and maybe it 
could be easier to test. Currently (as I understood) there is no specific test 
to test the isRegisterCall=true path. But this is not a blocking problem. 
Depends from your consideration{quote}
The approach you are suggesting is close to the first attached patch. I had a 
discussion regarding this with [~xyao]. Moving it into processContainerReport 
is a small optimization to avoid a separate iteration over the whole list. For 
a DN with 24 disks of 12 TB each we can have roughly 57,600 containers of 5 GB; 
iterating through that list and adding the entries to replicaMap should be 
quick, but a large cluster with a large number of DNs may overwhelm the SCM 
during the initial registration process. Moving this inside 
processContainerReport results in only one iteration of that list. At some 
point we can refactor this along with the logic in ContainerCommandHandler and 
Node2ContainerMap#processReport.

> Update open container replica information in SCM during DN register
> ---
>
> Key: HDDS-364
> URL: https://issues.apache.org/jira/browse/HDDS-364
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>

[jira] [Commented] (HDDS-364) Update open container replica information in SCM during DN register

2018-08-22 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589114#comment-16589114
 ] 

Ajay Kumar commented on HDDS-364:
-

[~elek] thanks for review. Please see my response inline:

{quote}What do you think about Node2ContainerMap? Do we need to update it as 
well?{quote}
I think we can get rid of processReport by sending missing and new containers 
(deltas) directly from the DN. This will reduce the SCM-DN communication and 
will make the DN responsible for sending only changed state. But this can be 
done in a separate jira. 
{quote}2. As I see all the containers will be persisted twice. (First, they 
will be imported, after that they will be reconciled.). Don't think it's a big 
problem. IMHO later we need to cleanup all the processing path anyway.
One option may be just saving all the initial data to the state map without 
processing the reports (checking the required closing state, etc.). The 
downside here is some action would be delayed until the first real container 
report.{quote}
Not sure which codepath you are referring to here. This patch adds containers 
reported in the register call to replicaMap, which is in memory. Everything 
else remains the same. 

{quote} The import part (in case of isRegisterCall=true) is the first part of 
processContainerReport method. I think it would be very easy to move to a 
separated method and call it independently from 
SCMDatanodeProtocolServer.register method. Could be more simple, and maybe it 
could be easier to test. Currently (as I understood) there is no specific test 
to test the isRegisterCall=true path. But this is not a blocking problem. 
Depends from your consideration{quote}
The approach you are suggesting is close to the first attached patch. I had a 
discussion regarding this with [~xyao]. Moving it into processContainerReport 
is a small optimization to avoid a separate iteration over the whole list. For 
a DN with 24 disks of 12 TB each we can have roughly 57,600 containers of 5 GB; 
iterating through that list and adding the entries to replicaMap should be 
quick, but a large cluster with a large number of DNs may overwhelm the SCM 
during the initial registration process. Moving this inside 
processContainerReport results in only one iteration of that list. At some 
point we can refactor this along with the logic in ContainerCommandHandler and 
Node2ContainerMap#processReport.
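
A minimal sketch of the single-pass idea described above, using simplified, 
hypothetical types in place of the real SCM report protos and container state 
map.

{code:java}
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.UUID;

public class RegisterReportSketch {
  // Hypothetical in-memory replica map: container id -> datanodes with a replica.
  private final Map<Long, Set<UUID>> replicaMap;

  public RegisterReportSketch(Map<Long, Set<UUID>> replicaMap) {
    this.replicaMap = replicaMap;
  }

  /**
   * Single pass over the reported containers: when the report comes from a
   * register call, the replica location is recorded while the normal report
   * handling continues, so the (possibly very long) list is iterated only once.
   */
  public void processContainerReport(UUID datanode, List<Long> reportedContainers,
      boolean isRegisterCall) {
    for (Long containerId : reportedContainers) {
      if (isRegisterCall) {
        replicaMap.computeIfAbsent(containerId, id -> new HashSet<>()).add(datanode);
      }
      // ... existing per-container report processing would continue here.
    }
  }
}
{code}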

> Update open container replica information in SCM during DN register
> ---
>
> Key: HDDS-364
> URL: https://issues.apache.org/jira/browse/HDDS-364
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-364.00.patch, HDDS-364.01.patch
>
>
> Update open container replica information in SCM during DN register.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-363) Faster datanode registration during the first startup

2018-08-22 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-363:
--
Attachment: HDDS-363.002.patch

> Faster datanode registration during the first startup
> -
>
> Key: HDDS-363
> URL: https://issues.apache.org/jira/browse/HDDS-363
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-363.001.patch, HDDS-363.002.patch
>
>
> During the first startup we usually need to wait about 30 s before the scm 
> becomes usable. The datanode registration is a multiple-step process 
> (request/response + request/response) and we need to wait for the next HB to 
> finish the registration.
> I propose to use a higher HB frequency at startup (let's say 2 seconds) and 
> set the configured HB only at the end of the registration.
> It also helps first-time users as it is less confusing (the datanode can be 
> seen almost immediately on the UI).
> It would also help a lot during testing (yes, I can decrease the HB 
> frequency, but in that case it's harder to follow the later HBs).
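
A minimal sketch of the proposed behaviour, assuming a hypothetical helper that 
keeps the fast interval until registration finishes; the real change lives in 
the datanode state machine and takes the configured interval from the 
ozone/hdds configuration.

{code:java}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class RegistrationHeartbeatSketch {
  // Hypothetical fast interval used only until registration completes.
  private static final long INITIAL_HB_MILLIS = TimeUnit.SECONDS.toMillis(2);

  private final long configuredHbMillis;
  private final AtomicLong currentHbMillis;

  public RegistrationHeartbeatSketch(long configuredHbMillis) {
    this.configuredHbMillis = configuredHbMillis;
    // Start with the fast interval so the register round trips finish quickly.
    this.currentHbMillis = new AtomicLong(INITIAL_HB_MILLIS);
  }

  /** Called once the register request/response exchanges have completed. */
  public void onRegistrationComplete() {
    currentHbMillis.set(configuredHbMillis);
  }

  /** Delay until the next heartbeat is sent. */
  public long nextHeartbeatDelayMillis() {
    return currentHbMillis.get();
  }
}
{code}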



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-363) Faster datanode registration during the first startup

2018-08-22 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589112#comment-16589112
 ] 

Elek, Marton commented on HDDS-363:
---

Acceptance tests passed:

{code}
==
Acceptance
==
Acceptance.Basic  
==
Acceptance.Basic.Basic :: Smoketest ozone cluster startup 
==
Test rest interface   | PASS |
--
Check webui static resources  | PASS |
--
Start freon testing   | PASS |
--
Acceptance.Basic.Basic :: Smoketest ozone cluster startup | PASS |
3 critical tests, 3 passed, 0 failed
3 tests total, 3 passed, 0 failed
==
Acceptance.Basic.Ozone-Shell :: Test ozone shell CLI usage
==
RestClient without http port  | PASS |
--
RestClient with http port | PASS |
--
RestClient without host name  | PASS |
--
RpcClient with port   | PASS |
--
RpcClient without host| PASS |
--
RpcClient without scheme  | PASS |
--
Acceptance.Basic.Ozone-Shell :: Test ozone shell CLI usage| PASS |
6 critical tests, 6 passed, 0 failed
6 tests total, 6 passed, 0 failed
==
Acceptance.Basic  | PASS |
9 critical tests, 9 passed, 0 failed
9 tests total, 9 passed, 0 failed
==
Acceptance.Ozonefs
==
Acceptance.Ozonefs.Ozonefs :: Ozonefs test
==
Create volume and bucket  | PASS |
--
Check volume from ozonefs | PASS |
--
Create directory from ozonefs | PASS |
--
Acceptance.Ozonefs.Ozonefs :: Ozonefs test| PASS |
3 critical tests, 3 passed, 0 failed
3 tests total, 3 passed, 0 failed
==
Acceptance.Ozonefs.Ozonesinglenode :: Ozonefs Single Node Test
==
Create volume and bucket  | PASS |
--
Check volume from ozonefs | PASS |
--
Create directory from ozonefs | PASS |
--
Test key handling | PASS |
--
Acceptance.Ozonefs.Ozonesinglenode :: Ozonefs Single Node Test| PASS |
4 critical tests, 4 

[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command

2018-08-22 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589109#comment-16589109
 ] 

Elek, Marton commented on HDDS-325:
---

Thanks [~ljain] for the update. Unfortunately the patch doesn't apply anymore:

{code}
Attachments:
Downloading HDDS-325.004.patch
exit status 1
error: patch failed: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManager.java:67
error: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManager.java:
 patch does not apply
error: patch failed: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestBlockDeletion.java:60
error: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestBlockDeletion.java:
 patch does not apply
{code}

Would you be so kind as to rebase it?

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch
>
>
> This Jira aims to add watcher for deleteBlocks command. It removes the 
> current rpc call required for datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13790) RBF: Move ClientProtocol APIs to its own module

2018-08-22 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589108#comment-16589108
 ] 

Chao Sun commented on HDFS-13790:
-

bq. are you planning update patches for branch-3.1 to branch-2.9..?

Oops I forgot. Will do.
 

> RBF: Move ClientProtocol APIs to its own module
> ---
>
> Key: HDFS-13790
> URL: https://issues.apache.org/jira/browse/HDFS-13790
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13790.000.patch, HDFS-13790.001.patch
>
>
> {{RouterRpcServer}} is getting pretty long. {{RouterNamenodeProtocol}} 
> isolates the {{NamenodeProtocol}} in its own module. {{ClientProtocol}} 
> should have its own {{RouterClientProtocol}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-369) Remove the containers of a dead node from the container state map

2018-08-22 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-369:
-

Assignee: Elek, Marton

> Remove the containers of a dead node from the container state map
> -
>
> Key: HDDS-369
> URL: https://issues.apache.org/jira/browse/HDDS-369
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-369.001.patch
>
>
> When a node is dead we need to update the container replica information in 
> the containerStateMap for all the containers on that specific node.
> By removing the replica information we can detect the under-replicated state 
> and start the replication.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-369) Remove the containers of a dead node from the container state map

2018-08-22 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-369:
--
Attachment: HDDS-369.001.patch

> Remove the containers of a dead node from the container state map
> -
>
> Key: HDDS-369
> URL: https://issues.apache.org/jira/browse/HDDS-369
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-369.001.patch
>
>
> When a node is dead we need to update the container replica information in 
> the containerStateMap for all the containers on that specific node.
> By removing the replica information we can detect the under-replicated state 
> and start the replication.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-369) Remove the containers of a dead node from the container state map

2018-08-22 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-369:
--
Status: Patch Available  (was: Open)

> Remove the containers of a dead node from the container state map
> -
>
> Key: HDDS-369
> URL: https://issues.apache.org/jira/browse/HDDS-369
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-369.001.patch
>
>
> When a node is dead we need to update the container replica information in 
> the containerStateMap for all the containers on that specific node.
> By removing the replica information we can detect the under-replicated state 
> and start the replication.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-363) Faster datanode registration during the first startup

2018-08-22 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589106#comment-16589106
 ] 

genericqa commented on HDDS-363:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-hdds_container-service generated 1 new + 4 
unchanged - 0 fixed = 5 total (was 4) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
54s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
26s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-363 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936668/HDDS-363.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c30ef0b02427 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8184739 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| javadoc | 

[jira] [Created] (HDDS-369) Remove the containers of a dead node from the container state map

2018-08-22 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-369:
-

 Summary: Remove the containers of a dead node from the container 
state map
 Key: HDDS-369
 URL: https://issues.apache.org/jira/browse/HDDS-369
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM
Reporter: Elek, Marton
 Fix For: 0.2.1


When a node is dead we need to update the container replica information in the 
containerStateMap for all the containers on that specific node.

By removing the replica information we can detect the under-replicated state 
and start the replication.
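
A minimal sketch of the cleanup, assuming a simplified, hypothetical 
container-to-replica map; the real containerStateMap and dead-node handling in 
SCM are richer than this.

{code:java}
import java.util.Map;
import java.util.Set;
import java.util.UUID;

public class DeadNodeReplicaCleanupSketch {
  // Hypothetical view of container id -> datanodes currently holding a replica.
  private final Map<Long, Set<UUID>> containerToReplicas;

  public DeadNodeReplicaCleanupSketch(Map<Long, Set<UUID>> containerToReplicas) {
    this.containerToReplicas = containerToReplicas;
  }

  /** Drop the dead datanode from the replica set of every container it held. */
  public void handleDeadNode(UUID deadDatanode, Set<Long> containersOnNode) {
    for (Long containerId : containersOnNode) {
      Set<UUID> replicas = containerToReplicas.get(containerId);
      if (replicas != null) {
        replicas.remove(deadDatanode);
        // A later pass can compare replicas.size() against the replication
        // factor to detect under-replication and trigger re-replication.
      }
    }
  }
}
{code}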



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13847) Clean up ErasureCodingPolicyManager

2018-08-22 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589101#comment-16589101
 ] 

Xiao Chen commented on HDFS-13847:
--

As a side note, I also found that the {{clear}} method had some comments from 
HDFS-11633; more details 
[here|https://issues.apache.org/jira/browse/HDFS-11633?focusedCommentId=15961250=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15961250].

Now that HDFS-7337 is done, we can finish the TODO in another jira.

> Clean up ErasureCodingPolicyManager
> ---
>
> Key: HDFS-13847
> URL: https://issues.apache.org/jira/browse/HDFS-13847
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Priority: Major
>
> The {{ErasureCodingPolicyManager}} class is declared as LimitedPrivate for 
> HDFS.
> This doesn't seem to make sense, as I have checked that all its usages are 
> strictly within the hadoop-hdfs project.
> According to our [compat 
> guide|http://hadoop.apache.org/docs/r3.1.0/hadoop-project-dist/hadoop-common/Compatibility.html]:
> {quote}
> Within a component Hadoop developers are free to use Private and Limited 
> Private APIs,
> {quote}
> We should tune this down to just Private.
> This was identified because internal testing marked HDFS-13772 as 
> incompatible, due to the method signature changes on the ECPM class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13847) Clean up ErasureCodingPolicyManager

2018-08-22 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-13847:


 Summary: Clean up ErasureCodingPolicyManager
 Key: HDFS-13847
 URL: https://issues.apache.org/jira/browse/HDFS-13847
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: erasure-coding
Affects Versions: 3.0.0
Reporter: Xiao Chen


The {{ErasureCodingPolicyManager}} class is declared as LimitedPrivate for HDFS.

This doesn't seem to make sense, as I have checked that all its usages are 
strictly within the hadoop-hdfs project.
According to our [compat 
guide|http://hadoop.apache.org/docs/r3.1.0/hadoop-project-dist/hadoop-common/Compatibility.html]:
{quote}
Within a component Hadoop developers are free to use Private and Limited 
Private APIs,
{quote}

We should tune this down to just Private.

This was identified because internal testing marked HDFS-13772 as 
incompatible, due to the method signature changes on the ECPM class.
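
A minimal sketch of the proposed change, assuming only the audience annotation 
moves from LimitedPrivate to Private; the class body below is a placeholder, 
not the real ErasureCodingPolicyManager.

{code:java}
import org.apache.hadoop.classification.InterfaceAudience;

// Before: @InterfaceAudience.LimitedPrivate({"HDFS"})
// After (proposed): visible only within hadoop-hdfs itself.
@InterfaceAudience.Private
public final class ErasureCodingPolicyManagerSketch {
  // Placeholder body; the existing methods are unchanged, only the
  // audience annotation is tuned down.
}
{code}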



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13846) Safe blocks counter is not decremented correctly if the block is striped

2018-08-22 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HDFS-13846:

Attachment: HDFS-13846.001.patch

> Safe blocks counter is not decremented correctly if the block is striped
> 
>
> Key: HDFS-13846
> URL: https://issues.apache.org/jira/browse/HDFS-13846
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HDFS-13846.001.patch
>
>
> In the BlockManagerSafeMode class, the "safe blocks" counter is incremented 
> if the number of nodes containing the block equals the number of data units 
> specified by the erasure coding policy, which looks like this in the code:
> {code:java}
> final int safe = storedBlock.isStriped() ?
> ((BlockInfoStriped)storedBlock).getRealDataBlockNum() : 
> safeReplication;
> if (storageNum == safe) {
>   this.blockSafe++;
> {code}
> But when it is decremented, the code does not check whether the block is 
> striped or not; it just compares the number of nodes containing the block 
> with safeReplication - 1 (i.e. 0) if the block is complete, which is not 
> correct.
> {code:java}
> if (storedBlock.isComplete() &&
> blockManager.countNodes(b).liveReplicas() == safeReplication - 1) {
>   this.blockSafe--;
>   assert blockSafe >= 0;
>   checkSafeMode();
> }
> {code}
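
A minimal sketch of a symmetric, striped-aware decrement, reusing the names 
from the snippets above; this illustrates the idea only and is not the 
committed fix.

{code:java}
// Compute the same threshold that the increment path uses.
final int safe = storedBlock.isStriped()
    ? ((BlockInfoStriped) storedBlock).getRealDataBlockNum()
    : safeReplication;
if (storedBlock.isComplete()
    && blockManager.countNodes(b).liveReplicas() == safe - 1) {
  // The block just dropped below the striped-aware "safe" threshold,
  // so it no longer counts towards the safe-block total.
  this.blockSafe--;
  assert blockSafe >= 0;
  checkSafeMode();
}
{code}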



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13846) Safe blocks counter is not decremented correctly if the block is striped

2018-08-22 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HDFS-13846:

Status: Patch Available  (was: Open)

> Safe blocks counter is not decremented correctly if the block is striped
> 
>
> Key: HDFS-13846
> URL: https://issues.apache.org/jira/browse/HDFS-13846
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HDFS-13846.001.patch
>
>
> In the BlockManagerSafeMode class, the "safe blocks" counter is incremented 
> if the number of nodes containing the block equals the number of data units 
> specified by the erasure coding policy, which looks like this in the code:
> {code:java}
> final int safe = storedBlock.isStriped() ?
> ((BlockInfoStriped)storedBlock).getRealDataBlockNum() : 
> safeReplication;
> if (storageNum == safe) {
>   this.blockSafe++;
> {code}
> But when it is decremented, the code does not check whether the block is 
> striped or not; it just compares the number of nodes containing the block 
> with safeReplication - 1 (i.e. 0) if the block is complete, which is not 
> correct.
> {code:java}
> if (storedBlock.isComplete() &&
> blockManager.countNodes(b).liveReplicas() == safeReplication - 1) {
>   this.blockSafe--;
>   assert blockSafe >= 0;
>   checkSafeMode();
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


