[jira] [Work logged] (HDDS-1527) HDDS Datanode start fails due to datanode.id file read errors

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1527?focusedWorklogId=243115&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243115
 ]

ASF GitHub Bot logged work on HDDS-1527:


Author: ASF GitHub Bot
Created on: 16/May/19 04:47
Start Date: 16/May/19 04:47
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #822: HDDS-1527. HDDS 
Datanode start fails due to datanode.id file read error
URL: https://github.com/apache/hadoop/pull/822#discussion_r284537726
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ContainerUtils.java
 ##
 @@ -221,8 +226,16 @@ public synchronized static DatanodeDetails 
readDatanodeDetailsFrom(File path)
 try {
   return DatanodeIdYaml.readDatanodeIdFile(path);
 } catch (IOException e) {
-  throw new IOException("Failed to parse DatanodeDetails from "
-  + path.getAbsolutePath(), e);
+  LOG.warn("Error loading DatanodeDetails yaml from " +
+  path.getAbsolutePath(), e);
+  // Try to load as protobuf before giving up
+  try (FileInputStream in = new FileInputStream(path)) {
 
 Review comment:
   Thanks for the review @xiaoyuyao, will gladly add the test.
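   For readers following along: the hunk above is truncated right at the protobuf
fallback. Below is a rough sketch of the complete read path it implies; the
protobuf-parsing calls (DatanodeDetails.getFromProtoBuf,
HddsProtos.DatanodeDetailsProto.parseFrom) are assumptions beyond what the diff
shows, and only the YAML-first/protobuf-second structure comes from the patch.

{code}
// Sketch only: try the YAML format first (HDDS-1473), then fall back to the
// legacy protobuf format before giving up. Names outside the diff are assumed.
public static synchronized DatanodeDetails readDatanodeDetailsFrom(File path)
    throws IOException {
  try {
    return DatanodeIdYaml.readDatanodeIdFile(path);
  } catch (IOException e) {
    LOG.warn("Error loading DatanodeDetails yaml from " +
        path.getAbsolutePath(), e);
    // Try to load as protobuf before giving up.
    try (FileInputStream in = new FileInputStream(path)) {
      return DatanodeDetails.getFromProtoBuf(
          HddsProtos.DatanodeDetailsProto.parseFrom(in));
    } catch (IOException protoError) {
      throw new IOException("Failed to parse DatanodeDetails from "
          + path.getAbsolutePath(), protoError);
    }
  }
}
{code}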
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243115)
Time Spent: 50m  (was: 40m)

> HDDS Datanode start fails due to datanode.id file read errors
> -
>
> Key: HDDS-1527
> URL: https://issues.apache.org/jira/browse/HDDS-1527
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> * Ozone datanode start fails when there is an existing Datanode.id file which 
> is non yaml format. (Yaml format was added through HDDS-1473.).
> * Further, when 'ozone.scm.datanode.id' is not configured, the datanode.id 
> file is created in a different directory than the fallback dir 
> (ozone.metadata.dirs). Restart fails since it looks for the datanode.id in 
> ozone.metadata.dirs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1499) OzoneManager Cache

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=243107&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243107
 ]

ASF GitHub Bot logged work on HDDS-1499:


Author: ASF GitHub Bot
Created on: 16/May/19 04:21
Start Date: 16/May/19 04:21
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#issuecomment-492910899
 
 
   > Seems like cache.put is never called except in test cases, so the cache 
will always be empty.
   
   This jira adds the Table Cache; it is not yet integrated into the OM code. 
It will be integrated in follow-up jiras, which will then call the put API.
   
   The jira description has the same information.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243107)
Time Spent: 9h  (was: 8h 50m)

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 9h
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> For OM HA, we are planning a double-buffer implementation that flushes 
> transactions in batches instead of issuing a rocksdb put() for every 
> operation. Once that is in place, OzoneManager HA needs a cache to 
> handle/serve requests for validation and for returning responses.
>  
> This Jira will implement the cache as an integral part of the table, so that 
> users of the table do not need to check the cache and the DB separately. For 
> this, we can update the table's get API to consult the cache.
>  
> This Jira will implement:
>  # A cache as a part of each Table.
>  # Use of this cache in get().
>  # APIs to clean up and add entries to the cache.
> Adding entries to the cache will be wired up in further jiras.
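As a rough illustration of the behaviour described above (the names are
illustrative stand-ins, not the actual HDDS-1499 classes): get() consults the
in-memory cache first and falls back to the DB-backed table only on a miss, so
callers never have to check cache and DB separately.

{code}
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a table with an integral cache.
class CachedTable<K, V> {
  private final Map<K, V> cache = new ConcurrentHashMap<>();
  private final Map<K, V> db;     // stand-in for the RocksDB-backed table

  CachedTable(Map<K, V> db) {
    this.db = db;
  }

  // Called by the double-buffer flush path in later jiras.
  void addCacheEntry(K key, V value) {
    cache.put(key, value);        // overrides any existing entry for this key
  }

  V get(K key) throws IOException {
    V cached = cache.get(key);
    if (cached != null) {
      return cached;              // served from the not-yet-flushed cache
    }
    return db.get(key);           // fall back to the DB
  }

  void cleanup(K key) {
    cache.remove(key);            // invoked once the entry is flushed to the DB
  }
}
{code}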



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14353) Erasure Coding: metrics xmitsInProgress become to negative.

2019-05-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16840969#comment-16840969
 ] 

Hadoop QA commented on HDFS-14353:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  7m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 1s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 7 unchanged - 0 fixed = 8 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}144m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14353 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968854/HDFS-14353.008.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 65127787ad26 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed Feb 
13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d4c8858 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26795/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26795/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test 

[jira] [Work logged] (HDDS-1527) HDDS Datanode start fails due to datanode.id file read errors

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1527?focusedWorklogId=243105&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243105
 ]

ASF GitHub Bot logged work on HDDS-1527:


Author: ASF GitHub Bot
Created on: 16/May/19 04:02
Start Date: 16/May/19 04:02
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #822: HDDS-1527. 
HDDS Datanode start fails due to datanode.id file read error
URL: https://github.com/apache/hadoop/pull/822#discussion_r284531715
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ContainerUtils.java
 ##
 @@ -221,8 +226,16 @@ public synchronized static DatanodeDetails 
readDatanodeDetailsFrom(File path)
 try {
   return DatanodeIdYaml.readDatanodeIdFile(path);
 } catch (IOException e) {
-  throw new IOException("Failed to parse DatanodeDetails from "
-  + path.getAbsolutePath(), e);
+  LOG.warn("Error loading DatanodeDetails yaml from " +
+  path.getAbsolutePath(), e);
+  // Try to load as protobuf before giving up
+  try (FileInputStream in = new FileInputStream(path)) {
 
 Review comment:
   @swagle, the change looks good to me. Can you add a unit test to validate 
the upgrade case where the old protobuf file is successfully read and 
rewritten into yaml format?
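   For reference, a rough sketch of the kind of upgrade test being requested.
The helper and method names not shown in the patch (testDir,
randomDatanodeDetails, getProtoBufMessage, writeDatanodeDetailsTo) are
assumptions for illustration only.

{code}
// Start from a pre-HDDS-1473 datanode.id written in protobuf format, read it
// back through the fallback path, then confirm it can be re-read as YAML.
@Test
public void testUpgradeFromProtobufDatanodeId() throws Exception {
  File idFile = new File(testDir, "datanode.id");
  DatanodeDetails original = randomDatanodeDetails();

  // Simulate the old on-disk format.
  try (FileOutputStream out = new FileOutputStream(idFile)) {
    original.getProtoBufMessage().writeTo(out);
  }

  // Should fall back to protobuf parsing instead of failing on YAML.
  DatanodeDetails read = ContainerUtils.readDatanodeDetailsFrom(idFile);
  assertEquals(original.getUuid(), read.getUuid());

  // After rewriting in the new format, the YAML reader should succeed.
  ContainerUtils.writeDatanodeDetailsTo(read, idFile);
  assertEquals(original.getUuid(),
      DatanodeIdYaml.readDatanodeIdFile(idFile).getUuid());
}
{code}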
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243105)
Time Spent: 40m  (was: 0.5h)

> HDDS Datanode start fails due to datanode.id file read errors
> -
>
> Key: HDDS-1527
> URL: https://issues.apache.org/jira/browse/HDDS-1527
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> * Ozone datanode start fails when there is an existing Datanode.id file which 
> is non yaml format. (Yaml format was added through HDDS-1473.).
> * Further, when 'ozone.scm.datanode.id' is not configured, the datanode.id 
> file is created in a different directory than the fallback dir 
> (ozone.metadata.dirs). Restart fails since it looks for the datanode.id in 
> ozone.metadata.dirs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1499) OzoneManager Cache

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=243100&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243100
 ]

ASF GitHub Bot logged work on HDDS-1499:


Author: ASF GitHub Bot
Created on: 16/May/19 03:51
Start Date: 16/May/19 03:51
Worklog Time Spent: 10m 
  Work Description: chinphing commented on issue #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#issuecomment-492905610
 
 
   Seems like cache.put is never called except in test cases, so the cache will 
always be empty.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243100)
Time Spent: 8h 50m  (was: 8h 40m)

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8h 50m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> For OM HA, we are planning a double-buffer implementation that flushes 
> transactions in batches instead of issuing a rocksdb put() for every 
> operation. Once that is in place, OzoneManager HA needs a cache to 
> handle/serve requests for validation and for returning responses.
>  
> This Jira will implement the cache as an integral part of the table, so that 
> users of the table do not need to check the cache and the DB separately. For 
> this, we can update the table's get API to consult the cache.
>  
> This Jira will implement:
>  # A cache as a part of each Table.
>  # Use of this cache in get().
>  # APIs to clean up and add entries to the cache.
> Adding entries to the cache will be wired up in further jiras.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1499) OzoneManager Cache

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=243095&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243095
 ]

ASF GitHub Bot logged work on HDDS-1499:


Author: ASF GitHub Bot
Created on: 16/May/19 03:47
Start Date: 16/May/19 03:47
Worklog Time Spent: 10m 
  Work Description: chinphing commented on issue #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#issuecomment-492905610
 
 
   Seems like addCacheEntry is never called, so the cache will always be empty.
   ```
   @Override
   public void addCacheEntry(CacheKey cacheKey,
       CacheValue cacheValue) {
     // This will override the entry if there is already entry for this key.
     cache.put(cacheKey, cacheValue);
   }
   ```
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243095)
Time Spent: 8.5h  (was: 8h 20m)

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8.5h
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> For OM HA, we are planning a double-buffer implementation that flushes 
> transactions in batches instead of issuing a rocksdb put() for every 
> operation. Once that is in place, OzoneManager HA needs a cache to 
> handle/serve requests for validation and for returning responses.
>  
> This Jira will implement the cache as an integral part of the table, so that 
> users of the table do not need to check the cache and the DB separately. For 
> this, we can update the table's get API to consult the cache.
>  
> This Jira will implement:
>  # A cache as a part of each Table.
>  # Use of this cache in get().
>  # APIs to clean up and add entries to the cache.
> Adding entries to the cache will be wired up in further jiras.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1499) OzoneManager Cache

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=243097&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243097
 ]

ASF GitHub Bot logged work on HDDS-1499:


Author: ASF GitHub Bot
Created on: 16/May/19 03:47
Start Date: 16/May/19 03:47
Worklog Time Spent: 10m 
  Work Description: chinphing commented on issue #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#issuecomment-492905610
 
 
   Seems like addCacheEntry is never called, so the cache will always be empty.
   ```
   @Override
   public void addCacheEntry(CacheKey cacheKey,
       CacheValue cacheValue) {
     // This will override the entry if there is already entry for this key.
     cache.put(cacheKey, cacheValue);
   }
   ```
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243097)
Time Spent: 8h 40m  (was: 8.5h)

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8h 40m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> For OM HA, we are planning a double-buffer implementation that flushes 
> transactions in batches instead of issuing a rocksdb put() for every 
> operation. Once that is in place, OzoneManager HA needs a cache to 
> handle/serve requests for validation and for returning responses.
>  
> This Jira will implement the cache as an integral part of the table, so that 
> users of the table do not need to check the cache and the DB separately. For 
> this, we can update the table's get API to consult the cache.
>  
> This Jira will implement:
>  # A cache as a part of each Table.
>  # Use of this cache in get().
>  # APIs to clean up and add entries to the cache.
> Adding entries to the cache will be wired up in further jiras.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1522) Provide intellij runConfiguration for Ozone components

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1522?focusedWorklogId=243094&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243094
 ]

ASF GitHub Bot logged work on HDDS-1522:


Author: ASF GitHub Bot
Created on: 16/May/19 03:46
Start Date: 16/May/19 03:46
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #815: HDDS-1522. 
Provide intellij runConfiguration for Ozone components
URL: https://github.com/apache/hadoop/pull/815#discussion_r284529730
 
 

 ##
 File path: hadoop-ozone/dev-support/intellij/install-runconfigs.sh
 ##
 @@ -0,0 +1,20 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+SRC_DIR="$SCRIPT_DIR/runConfigurations"
+DEST_DIR="$SCRIPT_DIR/../../../.idea/runConfigurations/"
+#shellcheck disable=SC2010
+ls -1 "$SRC_DIR" | grep -v ozone-site.xml | xargs -n1 -I FILE cp 
"$SRC_DIR/FILE" "$DEST_DIR"
 
 Review comment:
   Yes, mkdir -p solves the script problem. However, it is not clear how to 
import these xml files into IntelliJ. Can you provide some pointers on how to 
import them and run them inside IntelliJ?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243094)
Time Spent: 1h 10m  (was: 1h)

> Provide intellij runConfiguration for Ozone components
> --
>
> Key: HDDS-1522
> URL: https://issues.apache.org/jira/browse/HDDS-1522
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Sometimes I need to start an ozone cluster from IntelliJ to debug issues. It's 
> possible, but it requires creating many runConfiguration objects inside my IDE.
> I propose to share the IntelliJ-specific runConfigurations to make it easy 
> for anybody (who uses IntelliJ) to run a full ozone cluster from the IDE (with 
> 1 datanode only).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1525) Mapreduce failure when using Hadoop 2.7.5

2019-05-15 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16840966#comment-16840966
 ] 

Xiaoyu Yao commented on HDDS-1525:
--

One issue I found is that RpcClient should not implement KeyProviderTokenIssuer 
(Hadoop 3 only) because it is shared between BasicOzoneFileSystem and 
OzoneFileSystem. In the case of BasicOzoneFileSystem for Hadoop 2, this 
interface won't be available. 

{code}

public class RpcClient implements ClientProtocol, KeyProviderTokenIssuer {

{code}
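One possible direction, sketched with illustrative names only (this is not the
actual fix): keep the shared RpcClient free of Hadoop-3-only interfaces and move
the KeyProviderTokenIssuer implementation into a Hadoop-3-specific class that is
never loaded on a Hadoop 2 classpath. The delegation targets
(getKeyProvider/getKeyProviderUri on RpcClient) are assumed accessors.

{code}
// Shared client: no Hadoop-3-only interfaces.
public class RpcClient implements ClientProtocol {
  // used by both BasicOzoneFileSystem (Hadoop 2) and OzoneFileSystem (Hadoop 3)
}

// Hadoop-3-only adapter, referenced only from OzoneFileSystem.
public class OzoneKeyProviderTokenIssuer implements KeyProviderTokenIssuer {
  private final RpcClient client;

  public OzoneKeyProviderTokenIssuer(RpcClient client) {
    this.client = client;
  }

  @Override
  public KeyProvider getKeyProvider() throws IOException {
    return client.getKeyProvider();     // assumed accessor on RpcClient
  }

  @Override
  public URI getKeyProviderUri() throws IOException {
    return client.getKeyProviderUri();  // assumed accessor on RpcClient
  }
}
{code}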

> Mapreduce failure when using Hadoop 2.7.5
> -
>
> Key: HDDS-1525
> URL: https://issues.apache.org/jira/browse/HDDS-1525
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.4.0
>Reporter: Sammi Chen
>Priority: Major
> Attachments: teragen.log
>
>
> Integrating Ozone (0.4 branch) with Hadoop 2.7.5, "hdfs dfs -ls /" passes 
> while teragen fails. 
> Adding -verbose:class to the java options shows that the class KeyProvider is 
> loaded twice by different classloaders, while it is loaded only once when 
> executing "hdfs dfs -ls /". 
> All jars under share/ozone/lib are added to the hadoop classpath except the 
> ozone filesystem's current lib jar.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



libhdfs SIGSEGV during shutdown of Java application

2019-05-15 Thread Peizhao Hu
I am having the same issue as described in this post. 

http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-issues/201806.mbox/%3cjira.13160156.1526591063000.135884.1528502880...@atlassian.jira%3E
 


I tried the suggested workaround but it does not work. The program runs 
correctly but is not able to detach cleanly.

I have also tried generating a patched libhdfs.so, but nothing changes. Any 
suggestions?

[jira] [Commented] (HDFS-14447) RBF: Router should support RefreshUserMappingsProtocol

2019-05-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16840957#comment-16840957
 ] 

Hadoop QA commented on HDFS-14447:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
23s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 23m 
47s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14447 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968855/HDFS-14447-HDFS-13891.09.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux afef2b6026fe 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed Feb 
13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / c395f57 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26796/testReport/ |
| Max. process+thread count | 1005 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26796/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Router should support RefreshUserMappingsProtocol
> 

[jira] [Updated] (HDFS-14303) chek block directory logic not correct when there is only meta file, print no meaning warn log

2019-05-15 Thread qiang Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

qiang Liu updated HDFS-14303:
-
Fix Version/s: (was: 3.0.0)

> chek block directory logic not correct when there is only meta file, print no 
> meaning warn log
> --
>
> Key: HDFS-14303
> URL: https://issues.apache.org/jira/browse/HDFS-14303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 2.7.3, 2.9.2, 2.8.5
> Environment: env free
>Reporter: qiang Liu
>Priority: Minor
>  Labels: easy-fix
> Attachments: HDFS-14303-branch-2.005.patch, 
> HDFS-14303-branch-2.009.patch, HDFS-14303-branch-2.010.patch, 
> HDFS-14303-branch-2.7.001.patch, HDFS-14303-branch-2.7.004.patch, 
> HDFS-14303-branch-2.7.006.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> chek block directory logic not correct when there is only meta file,print no 
> meaning warn log, eg:
>  WARN DirectoryScanner:? - Block: 1101939874 has to be upgraded to block 
> ID-based layout. Actual block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68,
>  expected block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68/subdir68



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1530) Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and "--validateWrites" options.

2019-05-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16840944#comment-16840944
 ] 

Hadoop QA commented on HDDS-1530:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
54s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} yetus {color} | {color:red}  0m  7s{color} 
| {color:red} Unprocessed flag(s): --jenkins --skip-dir {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2694/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1530 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968858/HDDS-1530.002.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2694/console |
| versions | git=2.7.4 |
| Powered by | Apache Yetus 0.11.0-SNAPSHOT http://yetus.apache.org |


This message was automatically generated.



> Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and 
> "--validateWrites" options.
> --
>
> Key: HDDS-1530
> URL: https://issues.apache.org/jira/browse/HDDS-1530
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Major
> Attachments: HDDS-1530.001.patch, HDDS-1530.002.patch
>
>
> *Current problems:*
>  1. Freon does not support big files larger than 2GB because it uses an int 
> type "keySize" parameter and an int-sized "keyValue" buffer.
>  2. Freon allocates an entire buffer for each key at once, so if the key size 
> is large and the concurrency is high, freon reports OOM exceptions 
> frequently.
>  3. Freon lacks options such as "--validateWrites", so users cannot manually 
> specify that verification is required after writing.
> *Some solutions:*
>  1. Use a long type "keySize" parameter so that freon can support big files 
> larger than 2GB.
>  2. Reuse a small buffer instead of allocating the entire key-size buffer at 
> once; the default buffer size is 4K and can be configured with the 
> "--bufferSize" parameter.
>  3. Add a "--validateWrites" option to the Freon command line; users can 
> provide this option to indicate that validation is required after writing.
>  
>  
>  
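To make solution 2 concrete, here is an illustrative sketch (not the actual
Freon code) of writing a key whose size is tracked as a long while reusing one
small buffer; bucket.createKey and the surrounding names are assumed for the
example.

{code}
// Write keySize bytes (a long, so > 2 GB works) using one reusable 4 KB buffer
// instead of allocating a key-sized byte[] up front.
long keySize = 4L * 1024 * 1024 * 1024;          // e.g. 4 GB
int bufferSize = 4096;                           // default, "--bufferSize"
byte[] buffer = new byte[bufferSize];
new java.util.Random().nextBytes(buffer);

long remaining = keySize;
try (OutputStream out = bucket.createKey(keyName, keySize)) {  // assumed API
  while (remaining > 0) {
    int toWrite = (int) Math.min(bufferSize, remaining);
    out.write(buffer, 0, toWrite);               // reuse the same small buffer
    remaining -= toWrite;
  }
}
{code}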



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14447) RBF: Router should support RefreshUserMappingsProtocol

2019-05-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16840938#comment-16840938
 ] 

Hadoop QA commented on HDFS-14447:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
38s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 23m 
10s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14447 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968853/HDFS-14447-HDFS-13891.08.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9f9b485f520e 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / c395f57 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26794/testReport/ |
| Max. process+thread count | 1375 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26794/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Router should support RefreshUserMappingsProtocol
> --
>
> 

[jira] [Updated] (HDDS-1530) Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and "--validateWrites" options.

2019-05-15 Thread xudongcao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xudongcao updated HDDS-1530:

Attachment: HDDS-1530.002.patch

> Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and 
> "--validateWrites" options.
> --
>
> Key: HDDS-1530
> URL: https://issues.apache.org/jira/browse/HDDS-1530
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: xudongcao
>Assignee: xudongcao
>Priority: Major
> Attachments: HDDS-1530.001.patch, HDDS-1530.002.patch
>
>
> *Current problems:*
>  1. Freon does not support big files larger than 2GB because it uses an int 
> type "keySize" parameter and an int-sized "keyValue" buffer.
>  2. Freon allocates an entire buffer for each key at once, so if the key size 
> is large and the concurrency is high, freon reports OOM exceptions 
> frequently.
>  3. Freon lacks options such as "--validateWrites", so users cannot manually 
> specify that verification is required after writing.
> *Some solutions:*
>  1. Use a long type "keySize" parameter so that freon can support big files 
> larger than 2GB.
>  2. Reuse a small buffer instead of allocating the entire key-size buffer at 
> once; the default buffer size is 4K and can be configured with the 
> "--bufferSize" parameter.
>  3. Add a "--validateWrites" option to the Freon command line; users can 
> provide this option to indicate that validation is required after writing.
>  4. Remove the process of appending a uuid to each key, which is of little 
> significance and complicates the code, especially when writing with a small 
> buffer repeatedly.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1530) Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and "--validateWrites" options.

2019-05-15 Thread xudongcao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xudongcao updated HDDS-1530:

Description: 
*Current problems:*
 1. Freon does not support big files larger than 2GB because it use an int type 
"keySize" parameter and also "keyValue" buffer size.
 2. Freon allocates a entire buffer for each key at once, so if the key size is 
large and the concurrency is high, freon will report OOM exception frequently.
 3. Freon lacks option such as "--validateWrites", thus users cannot manually 
specify that verification is required after writing.

*Some solutions:*
 1. Use a long type "keySize" parameter, make sure freon can support big files 
larger than 2GB.
 2. Use a small buffer repeatedly than allocating the entire key-size buffer at 
once, the default buffer size is 4K and can be configured by "–bufferSize" 
parameter.
 3. Add a "--validateWrites" option to Freon command line, users can provide 
this option to indicate that a validation is required after write.
 

 

 

  was:
*Current problems:*
 1. Freon does not support big files larger than 2GB because it use an int type 
"keySize" parameter and also "keyValue" buffer size.
 2. Freon allocates a entire buffer for each key at once, so if the key size is 
large and the concurrency is high, freon will report OOM exception frequently.
 3. Freon lacks option such as "--validateWrites", thus users cannot manually 
specify that verification is required after writing.

*Some solutions:*
 1. Use a long type "keySize" parameter, make sure freon can support big files 
larger than 2GB.
 2. Use a small buffer repeatedly than allocating the entire key-size buffer at 
once, the default buffer size is 4K and can be configured by "–bufferSize" 
parameter.
 3. Add a "--validateWrites" option to Freon command line, users can provide 
this option to indicate that a validation is required after write.
 4. Remove the process of appending an uuid to each key, which is of little 
significance and complicates the code, especially when writting with a small 
buffer repeatedly.

 

 

 


> Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and 
> "--validateWrites" options.
> --
>
> Key: HDDS-1530
> URL: https://issues.apache.org/jira/browse/HDDS-1530
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: xudongcao
>Assignee: xudongcao
>Priority: Major
> Attachments: HDDS-1530.001.patch, HDDS-1530.002.patch
>
>
> *Current problems:*
>  1. Freon does not support big files larger than 2GB because it uses an int 
> type "keySize" parameter and an int-sized "keyValue" buffer.
>  2. Freon allocates an entire buffer for each key at once, so if the key size 
> is large and the concurrency is high, freon reports OOM exceptions 
> frequently.
>  3. Freon lacks options such as "--validateWrites", so users cannot manually 
> specify that verification is required after writing.
> *Some solutions:*
>  1. Use a long type "keySize" parameter so that freon can support big files 
> larger than 2GB.
>  2. Reuse a small buffer instead of allocating the entire key-size buffer at 
> once; the default buffer size is 4K and can be configured with the 
> "--bufferSize" parameter.
>  3. Add a "--validateWrites" option to the Freon command line; users can 
> provide this option to indicate that validation is required after writing.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1530) Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and "--validateWrites" options.

2019-05-15 Thread xudongcao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16840936#comment-16840936
 ] 

xudongcao commented on HDDS-1530:
-

[~anu] [~arpitagarwal] I reverted the UUID change in HDDS-1530.002.patch. Please 
review it if you have time.

> Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and 
> "--validateWrites" options.
> --
>
> Key: HDDS-1530
> URL: https://issues.apache.org/jira/browse/HDDS-1530
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: xudongcao
>Assignee: xudongcao
>Priority: Major
> Attachments: HDDS-1530.001.patch
>
>
> *Current problems:*
>  1. Freon does not support big files larger than 2GB because it uses an int 
> type "keySize" parameter and an int-sized "keyValue" buffer.
>  2. Freon allocates an entire buffer for each key at once, so if the key size 
> is large and the concurrency is high, freon reports OOM exceptions 
> frequently.
>  3. Freon lacks options such as "--validateWrites", so users cannot manually 
> specify that verification is required after writing.
> *Some solutions:*
>  1. Use a long type "keySize" parameter so that freon can support big files 
> larger than 2GB.
>  2. Reuse a small buffer instead of allocating the entire key-size buffer at 
> once; the default buffer size is 4K and can be configured with the 
> "--bufferSize" parameter.
>  3. Add a "--validateWrites" option to the Freon command line; users can 
> provide this option to indicate that validation is required after writing.
>  4. Remove the process of appending a uuid to each key, which is of little 
> significance and complicates the code, especially when writing with a small 
> buffer repeatedly.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-9913) DispCp doesn't use Trash with -delete option

2019-05-15 Thread Shen Yinjie (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16840282#comment-16840282
 ] 

Shen Yinjie edited comment on HDFS-9913 at 5/16/19 2:06 AM:


Agree with you, [~ste...@apache.org]; currently the docs and the code are not 
consistent, and the docs may confuse users. We should update the docs or change 
the code to support "useTrash" as [~szetszwo] suggested in MAPREDUCE-6597.
[~jzhuge] Are you still working on this issue? I'd like to take it over if you 
don't mind.


was (Author: shenyinjie):
[~ste...@apache.org] Agree with you, currently docs and codes are not 
consistent. We should update the docs or change codes as [~szetszwo] suggested 
in MAPREDUCE-6597.
[~jzhuge] Are you still working on this issue? I'd like to take over  if you 
dont mind.

> DispCp doesn't use Trash with -delete option
> 
>
> Key: HDFS-9913
> URL: https://issues.apache.org/jira/browse/HDFS-9913
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.6.0
>Reporter: Konstantin Shaposhnikov
>Assignee: John Zhuge
>Priority: Major
>
> Documentation for DistCp -delete option says 
> ([http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html]):
> | The deletion is done by FS Shell. So the trash will be used, if it is 
> enabled.
> However, this no longer seems to be the case. The latest source code 
> (https://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java)
> uses `FileSystem.delete`, and the trash option does not appear to be applied.
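For illustration, a minimal sketch of the two behaviours being contrasted, using
standard Hadoop FileSystem/Trash APIs (the path is made up for the example):

{code}
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
Path target = new Path("/tmp/distcp-target/stale-file");

// What CopyCommitter does today: a permanent delete, trash is never consulted.
fs.delete(target, true /* recursive */);

// What "the deletion is done by FS Shell" would imply: move to trash when
// fs.trash.interval is enabled (returns false if trash is disabled).
boolean movedToTrash = Trash.moveToAppropriateTrash(fs, target, conf);
{code}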



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14447) RBF: Router should support RefreshUserMappingsProtocol

2019-05-15 Thread Shen Yinjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie updated HDFS-14447:
---
Attachment: HDFS-14447-HDFS-13891.09.patch

> RBF: Router should support RefreshUserMappingsProtocol
> --
>
> Key: HDFS-14447
> URL: https://issues.apache.org/jira/browse/HDFS-14447
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Affects Versions: 3.1.0
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14447-HDFS-13891.01.patch, 
> HDFS-14447-HDFS-13891.02.patch, HDFS-14447-HDFS-13891.03.patch, 
> HDFS-14447-HDFS-13891.04.patch, HDFS-14447-HDFS-13891.05.patch, 
> HDFS-14447-HDFS-13891.06.patch, HDFS-14447-HDFS-13891.07.patch, 
> HDFS-14447-HDFS-13891.08.patch, HDFS-14447-HDFS-13891.09.patch, error.png
>
>
> HDFS with RBF
> We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin 
> -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration,
> and it throws "Unknown protocol: ...RefreshUserMappingProtocol".
> RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser 
> client would be refused when trying to impersonate, as shown in the screenshot.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14303) chek block directory logic not correct when there is only meta file, print no meaning warn log

2019-05-15 Thread qiang Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

qiang Liu updated HDFS-14303:
-
Target Version/s: 2.9.2  (was: 2.7.3)

> chek block directory logic not correct when there is only meta file, print no 
> meaning warn log
> --
>
> Key: HDFS-14303
> URL: https://issues.apache.org/jira/browse/HDFS-14303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 2.7.3, 2.9.2, 2.8.5
> Environment: env free
>Reporter: qiang Liu
>Priority: Minor
>  Labels: easy-fix
> Fix For: 3.0.0
>
> Attachments: HDFS-14303-branch-2.005.patch, 
> HDFS-14303-branch-2.009.patch, HDFS-14303-branch-2.010.patch, 
> HDFS-14303-branch-2.7.001.patch, HDFS-14303-branch-2.7.004.patch, 
> HDFS-14303-branch-2.7.006.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> chek block directory logic not correct when there is only meta file,print no 
> meaning warn log, eg:
>  WARN DirectoryScanner:? - Block: 1101939874 has to be upgraded to block 
> ID-based layout. Actual block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68,
>  expected block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68/subdir68



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14303) chek block directory logic not correct when there is only meta file, print no meaning warn log

2019-05-15 Thread qiang Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

qiang Liu updated HDFS-14303:
-
Affects Version/s: 2.8.5

> chek block directory logic not correct when there is only meta file, print no 
> meaning warn log
> --
>
> Key: HDFS-14303
> URL: https://issues.apache.org/jira/browse/HDFS-14303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 2.7.3, 2.9.2, 2.8.5
> Environment: env free
>Reporter: qiang Liu
>Priority: Minor
>  Labels: easy-fix
> Fix For: 3.0.0
>
> Attachments: HDFS-14303-branch-2.005.patch, 
> HDFS-14303-branch-2.009.patch, HDFS-14303-branch-2.010.patch, 
> HDFS-14303-branch-2.7.001.patch, HDFS-14303-branch-2.7.004.patch, 
> HDFS-14303-branch-2.7.006.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> chek block directory logic not correct when there is only meta file,print no 
> meaning warn log, eg:
>  WARN DirectoryScanner:? - Block: 1101939874 has to be upgraded to block 
> ID-based layout. Actual block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68,
>  expected block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68/subdir68



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14353) Erasure Coding: metrics xmitsInProgress become to negative.

2019-05-15 Thread maobaolong (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16840906#comment-16840906
 ] 

maobaolong commented on HDFS-14353:
---

[~elgoiri] Thank you for the code you provided; I used it to replace mine. PTAL.

> Erasure Coding: metrics xmitsInProgress become to negative.
> ---
>
> Key: HDFS-14353
> URL: https://issues.apache.org/jira/browse/HDFS-14353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, erasure-coding
>Affects Versions: 3.3.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14353.001.patch, HDFS-14353.002.patch, 
> HDFS-14353.003.patch, HDFS-14353.004.patch, HDFS-14353.005.patch, 
> HDFS-14353.006.patch, HDFS-14353.007.patch, HDFS-14353.008.patch, 
> screenshot-1.png
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14447) RBF: Router should support RefreshUserMappingsProtocol

2019-05-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16840901#comment-16840901
 ] 

Íñigo Goiri commented on HDFS-14447:


BTW, the comments from [~lukmajercak] are still not addressed.
For example:
{code}
if(router != null) {
{code}
Should be:
{code}
if (router != null) {
{code}
With an extra space.
I'm not sure why checkstyle is not checking those though.

> RBF: Router should support RefreshUserMappingsProtocol
> --
>
> Key: HDFS-14447
> URL: https://issues.apache.org/jira/browse/HDFS-14447
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Affects Versions: 3.1.0
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14447-HDFS-13891.01.patch, 
> HDFS-14447-HDFS-13891.02.patch, HDFS-14447-HDFS-13891.03.patch, 
> HDFS-14447-HDFS-13891.04.patch, HDFS-14447-HDFS-13891.05.patch, 
> HDFS-14447-HDFS-13891.06.patch, HDFS-14447-HDFS-13891.07.patch, 
> HDFS-14447-HDFS-13891.08.patch, error.png
>
>
> HDFS with RBF:
> We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin 
> -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration,
> and it throws "Unknown protocol: ...RefreshUserMappingProtocol".
> RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser 
> client is refused when trying to impersonate, as shown in the screenshot.
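
Registering this protocol on an RPC server usually follows the same pattern the NameNode uses; below is a rough, hedged sketch of what that wiring could look like on the router admin side. The class name RouterRefreshUserMappingsSketch and the register helper are illustrative assumptions, not the actual patch.

{code:java}
import com.google.protobuf.BlockingService;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSUtil;
import org.apache.hadoop.ipc.RPC;
import org.apache.hadoop.security.Groups;
import org.apache.hadoop.security.RefreshUserMappingsProtocol;
import org.apache.hadoop.security.authorize.ProxyUsers;
import org.apache.hadoop.security.proto.RefreshUserMappingsProtocolProtos.RefreshUserMappingsProtocolService;
import org.apache.hadoop.security.protocolPB.RefreshUserMappingsProtocolPB;
import org.apache.hadoop.security.protocolPB.RefreshUserMappingsProtocolServerSideTranslatorPB;

/** Sketch: how a router-side admin server could expose RefreshUserMappingsProtocol. */
class RouterRefreshUserMappingsSketch implements RefreshUserMappingsProtocol {

  /** Register the protobuf service on an already-built RPC server. */
  static void register(Configuration conf, RPC.Server adminServer,
      RefreshUserMappingsProtocol impl) throws IOException {
    BlockingService service = RefreshUserMappingsProtocolService
        .newReflectiveBlockingService(
            new RefreshUserMappingsProtocolServerSideTranslatorPB(impl));
    DFSUtil.addPBProtocol(conf, RefreshUserMappingsProtocolPB.class, service, adminServer);
  }

  private final Configuration conf;

  RouterRefreshUserMappingsSketch(Configuration conf) {
    this.conf = conf;
  }

  @Override
  public void refreshUserToGroupsMappings() throws IOException {
    Groups.getUserToGroupsMappingService(conf).refresh();
  }

  @Override
  public void refreshSuperUserGroupsConfiguration() throws IOException {
    // Reload hadoop.proxyuser.* so a proxy user is no longer refused after a config change.
    ProxyUsers.refreshSuperUserGroupsConfiguration(conf);
  }
}
{code}

With something like this in place, the dfsadmin -refreshSuperUserGroupsConfiguration call above would reach a handler instead of failing with an unknown-protocol error.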



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14303) chek block directory logic not correct when there is only meta file, print no meaning warn log

2019-05-15 Thread qiang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840902#comment-16840902
 ] 

qiang Liu commented on HDFS-14303:
--

[~hexiaoqiao] thanks for your quick response. This is a branch-2 issue; the check-block-directory 
logic was removed in branch-3 (at least removed from this file), so I target branch-2 
instead of trunk. Is branch-2 EoL too?

> chek block directory logic not correct when there is only meta file, print no 
> meaning warn log
> --
>
> Key: HDFS-14303
> URL: https://issues.apache.org/jira/browse/HDFS-14303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 2.7.3, 2.9.2
> Environment: env free
>Reporter: qiang Liu
>Priority: Minor
>  Labels: easy-fix
> Fix For: 3.0.0
>
> Attachments: HDFS-14303-branch-2.005.patch, 
> HDFS-14303-branch-2.009.patch, HDFS-14303-branch-2.010.patch, 
> HDFS-14303-branch-2.7.001.patch, HDFS-14303-branch-2.7.004.patch, 
> HDFS-14303-branch-2.7.006.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> The check-block-directory logic is not correct when there is only a meta file; it 
> prints a meaningless warn log, e.g.:
>  WARN DirectoryScanner:? - Block: 1101939874 has to be upgraded to block 
> ID-based layout. Actual block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68,
>  expected block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68/subdir68



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14353) Erasure Coding: metrics xmitsInProgress become to negative.

2019-05-15 Thread maobaolong (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maobaolong updated HDFS-14353:
--
Attachment: HDFS-14353.008.patch

> Erasure Coding: metrics xmitsInProgress become to negative.
> ---
>
> Key: HDFS-14353
> URL: https://issues.apache.org/jira/browse/HDFS-14353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, erasure-coding
>Affects Versions: 3.3.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14353.001.patch, HDFS-14353.002.patch, 
> HDFS-14353.003.patch, HDFS-14353.004.patch, HDFS-14353.005.patch, 
> HDFS-14353.006.patch, HDFS-14353.007.patch, HDFS-14353.008.patch, 
> screenshot-1.png
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14303) chek block directory logic not correct when there is only meta file, print no meaning warn log

2019-05-15 Thread qiang Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

qiang Liu updated HDFS-14303:
-
Fix Version/s: 3.0.0

> chek block directory logic not correct when there is only meta file, print no 
> meaning warn log
> --
>
> Key: HDFS-14303
> URL: https://issues.apache.org/jira/browse/HDFS-14303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 2.7.3, 2.9.2
> Environment: env free
>Reporter: qiang Liu
>Priority: Minor
>  Labels: easy-fix
> Fix For: 3.0.0
>
> Attachments: HDFS-14303-branch-2.005.patch, 
> HDFS-14303-branch-2.009.patch, HDFS-14303-branch-2.010.patch, 
> HDFS-14303-branch-2.7.001.patch, HDFS-14303-branch-2.7.004.patch, 
> HDFS-14303-branch-2.7.006.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> The check-block-directory logic is not correct when there is only a meta file; it 
> prints a meaningless warn log, e.g.:
>  WARN DirectoryScanner:? - Block: 1101939874 has to be upgraded to block 
> ID-based layout. Actual block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68,
>  expected block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68/subdir68



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14303) chek block directory logic not correct when there is only meta file, print no meaning warn log

2019-05-15 Thread qiang Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

qiang Liu updated HDFS-14303:
-
Affects Version/s: 2.9.2

> chek block directory logic not correct when there is only meta file, print no 
> meaning warn log
> --
>
> Key: HDFS-14303
> URL: https://issues.apache.org/jira/browse/HDFS-14303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 2.7.3, 2.9.2
> Environment: env free
>Reporter: qiang Liu
>Priority: Minor
>  Labels: easy-fix
> Attachments: HDFS-14303-branch-2.005.patch, 
> HDFS-14303-branch-2.009.patch, HDFS-14303-branch-2.010.patch, 
> HDFS-14303-branch-2.7.001.patch, HDFS-14303-branch-2.7.004.patch, 
> HDFS-14303-branch-2.7.006.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> The check-block-directory logic is not correct when there is only a meta file; it 
> prints a meaningless warn log, e.g.:
>  WARN DirectoryScanner:? - Block: 1101939874 has to be upgraded to block 
> ID-based layout. Actual block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68,
>  expected block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68/subdir68



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14447) RBF: Router should support RefreshUserMappingsProtocol

2019-05-15 Thread Shen Yinjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie updated HDFS-14447:
---
Attachment: HDFS-14447-HDFS-13891.08.patch

> RBF: Router should support RefreshUserMappingsProtocol
> --
>
> Key: HDFS-14447
> URL: https://issues.apache.org/jira/browse/HDFS-14447
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Affects Versions: 3.1.0
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14447-HDFS-13891.01.patch, 
> HDFS-14447-HDFS-13891.02.patch, HDFS-14447-HDFS-13891.03.patch, 
> HDFS-14447-HDFS-13891.04.patch, HDFS-14447-HDFS-13891.05.patch, 
> HDFS-14447-HDFS-13891.06.patch, HDFS-14447-HDFS-13891.07.patch, 
> HDFS-14447-HDFS-13891.08.patch, error.png
>
>
> HDFS with RBF:
> We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin 
> -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration,
> and it throws "Unknown protocol: ...RefreshUserMappingProtocol".
> RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser 
> client is refused when trying to impersonate, as shown in the screenshot.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1495) Create hadoop/ozone docker images with inline build process

2019-05-15 Thread Craig Condit (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840870#comment-16840870
 ] 

Craig Condit commented on HDDS-1495:


I realize I'm probably a little late to this party, but maybe that means I'll 
have a fresh perspective on this...

I think it bears looking at what the purposes of the various Docker images we're 
creating are. First, we have the Docker image created by *start-build-env.sh* in 
the root of the Hadoop project. This is extremely useful for ensuring a 
consistent *build* environment, but it is not suitable for public distribution. 
Additionally, I think there are valid uses for other images, including local 
testing, integration testing, pre-commit testing, etc. Finally, we have public 
images which are meant for end users to build upon.

How exactly each of these images is created is IMO not that important – this 
should probably be dictated by those who are acting as release managers. As a 
local developer, I'd prefer a solution that doesn't involve automatic creation 
of downstream images on every build (even on dist builds). I think using 
something like *-Pdocker* to activate building is fine, as it's opt-in rather than 
opt-out.

All of the above applies to Hadoop proper, so now for the Ozone-specific bits...

Since Ozone is meant to run as a plugin to an existing Hadoop installation, I 
think it's worth considering publishing multiple images with different 
underlying Hadoop base versions. Today this might include:
 * ozone:0.4-hadoop-3.0.3
 * ozone:0.4-hadoop-3.1.2
 * ozone:0.4-hadoop-3.2.0

These could serve as reference implementations, suitable for publishing on 
DockerHub, etc. (along with the associated Dockerfiles). Distributions and users 
who are more security conscious would probably rebuild based on security patches 
in the underlying OS, etc. This pattern tracks closely with how many other 
complex open source projects release images; even openjdk typically comes in 
several different flavors (alpine, ubuntu, slim, etc.).

> Create hadoop/ozone docker images with inline build process
> ---
>
> Key: HDDS-1495
> URL: https://issues.apache.org/jira/browse/HDDS-1495
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Eric Yang
>Priority: Major
> Attachments: HADOOP-16091.001.patch, HADOOP-16091.002.patch, 
> HDDS-1495.003.patch, HDDS-1495.004.patch, HDDS-1495.005.patch, 
> HDDS-1495.006.patch, HDDS-1495.007.patch, Hadoop Docker Image inline build 
> process.pdf
>
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> {quote}1, 3. There are 38 Apache projects hosting docker images on Docker hub 
> using Apache Organization. By browsing Apache github mirror. There are only 7 
> projects using a separate repository for docker image build. Popular projects 
> official images are not from Apache organization, such as zookeeper, tomcat, 
> httpd. We may not disrupt what other Apache projects are doing, but it looks 
> like inline build process is widely employed by majority of projects such as 
> Nifi, Brooklyn, thrift, karaf, syncope and others. The situation seems a bit 
> chaotic for Apache as a whole. However, Hadoop community can decide what is 
> best for Hadoop. My preference is to remove ozone from source tree naming, if 
> Ozone is intended to be subproject of Hadoop for long period of time. This 
> enables Hadoop community to host docker images for various subproject without 
> having to check out several source tree to trigger a grand build. However, 
> inline build process seems more popular than separated process. Hence, I 
> highly recommend making docker build inline if possible.
> {quote}
> The main challenges are also discussed in the thread:
> {code:java}
> 3. Technically it would be possible to add the Dockerfile to the source
> tree and publish the docker image together with the release by the
> release manager but it's also problematic:
> {code}
> a) there is no easy way to stage the images for the vote
>  c) it couldn't be flagged as automated on dockerhub
>  d) It couldn't support the critical updates.
>  * Updating existing images (for example in case of an ssl bug, rebuild
>  all the existing images with exactly the same payload but updated base
>  image/os environment)
>  * Creating image for older releases (We would like to provide images,
>  for hadoop 2.6/2.7/2.7/2.8/2.9. Especially for doing automatic testing
>  with different versions).
> {code:java}
>  {code}
> The a) can be solved (as [~eyang] suggested) with using a personal docker 
> image during the vote and publish it to the dockerhub after the 

[jira] [Work logged] (HDDS-1461) Optimize listStatus api in OzoneFileSystem

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1461?focusedWorklogId=242942=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242942
 ]

ASF GitHub Bot logged work on HDDS-1461:


Author: ASF GitHub Bot
Created on: 15/May/19 23:04
Start Date: 15/May/19 23:04
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #782: HDDS-1461. 
Optimize listStatus api in OzoneFileSystem
URL: https://github.com/apache/hadoop/pull/782#discussion_r284485092
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -1416,44 +1421,47 @@ public void createDirectory(OmKeyArgs args) throws 
IOException {
 try {
   metadataManager.getLock().acquireBucketLock(volumeName, bucketName);
 
 Review comment:
   this needs to be moved out of try block.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 242942)
Time Spent: 1h  (was: 50m)

> Optimize listStatus api in OzoneFileSystem
> --
>
> Key: HDDS-1461
> URL: https://issues.apache.org/jira/browse/HDDS-1461
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Filesystem, Ozone Manager
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Currently in listStatus we make multiple getFileStatus calls. This can be 
> optimized by converting to a single rpc call for listStatus.
> Also currently listStatus has to traverse a directory recursively in order to 
> list its immediate children. This happens because in OzoneManager all the 
> metadata is stored in rocksdb sorted on keynames. The Jira also aims to fix 
> this by using seek api provided by rocksdb.
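
As an illustration of the seek idea, here is a minimal, self-contained sketch assuming flat '/'-separated key names stored in lexicographic order; it is not the KeyManagerImpl change itself. Each time a child turns out to be a directory, the iterator seeks past that child's whole key range instead of walking it.

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksIterator;

public final class ListImmediateChildrenSketch {

  /** Lists the immediate children of dirPrefix, which is assumed to end with '/'. */
  public static List<String> listImmediateChildren(RocksDB db, String dirPrefix) {
    List<String> children = new ArrayList<>();
    try (RocksIterator it = db.newIterator()) {
      it.seek(bytes(dirPrefix));
      while (it.isValid()) {
        String key = new String(it.key(), StandardCharsets.UTF_8);
        if (!key.startsWith(dirPrefix)) {
          break;                                  // left the directory's key range
        }
        String rest = key.substring(dirPrefix.length());
        int slash = rest.indexOf('/');
        if (rest.isEmpty()) {
          it.next();                              // the directory's own marker key
        } else if (slash < 0) {
          children.add(rest);                     // plain file entry
          it.next();
        } else {
          String childDir = rest.substring(0, slash);
          children.add(childDir + "/");
          // Skip the whole subtree: jump to the first key after "<dirPrefix><childDir>/".
          it.seek(bytes(dirPrefix + childDir + (char) ('/' + 1)));
        }
      }
    }
    return children;
  }

  private static byte[] bytes(String s) {
    return s.getBytes(StandardCharsets.UTF_8);
  }
}
{code}

A single listStatus RPC could then return these entries in one round trip instead of issuing one getFileStatus call per child.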



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1461) Optimize listStatus api in OzoneFileSystem

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1461?focusedWorklogId=242940=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242940
 ]

ASF GitHub Bot logged work on HDDS-1461:


Author: ASF GitHub Bot
Created on: 15/May/19 23:03
Start Date: 15/May/19 23:03
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #782: HDDS-1461. 
Optimize listStatus api in OzoneFileSystem
URL: https://github.com/apache/hadoop/pull/782#discussion_r284484826
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -1355,15 +1360,15 @@ public OzoneFileStatus getFileStatus(OmKeyArgs args) 
throws IOException {
 String bucketName = args.getBucketName();
 String keyName = args.getKeyName();
 
-metadataManager.getLock().acquireBucketLock(volumeName, bucketName);
 try {
+  metadataManager.getLock().acquireBucketLock(volumeName, bucketName);
 
 Review comment:
   why do we move the acquireBucketLock inside the try block? The original 
pattern seems good to me.
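
For readers following along, the point of the original pattern is that the release in finally must only run when the acquire succeeded; a generic sketch using java.util.concurrent.locks (standing in for the bucket lock in the diff above) is below.

{code:java}
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

final class LockPatternSketch {
  private final Lock bucketLock = new ReentrantLock();

  void runUnderBucketLock(Runnable criticalSection) {
    // Acquire BEFORE the try: if the acquire itself fails we never reach the
    // finally block, so we cannot release a lock we do not actually hold.
    bucketLock.lock();
    try {
      criticalSection.run();
    } finally {
      bucketLock.unlock();
    }
  }
}
{code}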
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 242940)
Time Spent: 50m  (was: 40m)

> Optimize listStatus api in OzoneFileSystem
> --
>
> Key: HDDS-1461
> URL: https://issues.apache.org/jira/browse/HDDS-1461
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Filesystem, Ozone Manager
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently in listStatus we make multiple getFileStatus calls. This can be 
> optimized by converting to a single rpc call for listStatus.
> Also currently listStatus has to traverse a directory recursively in order to 
> list its immediate children. This happens because in OzoneManager all the 
> metadata is stored in rocksdb sorted on keynames. The Jira also aims to fix 
> this by using seek api provided by rocksdb.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1499) OzoneManager Cache

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=242923=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242923
 ]

ASF GitHub Bot logged work on HDDS-1499:


Author: ASF GitHub Bot
Created on: 15/May/19 22:53
Start Date: 15/May/19 22:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#issuecomment-492853658
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 62 | Maven dependency ordering for branch |
   | +1 | mvninstall | 387 | trunk passed |
   | +1 | compile | 204 | trunk passed |
   | +1 | checkstyle | 52 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 811 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 125 | trunk passed |
   | 0 | spotbugs | 239 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 416 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | +1 | mvninstall | 392 | the patch passed |
   | +1 | compile | 193 | the patch passed |
   | +1 | javac | 193 | the patch passed |
   | +1 | checkstyle | 55 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 664 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 126 | the patch passed |
   | +1 | findbugs | 431 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 138 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1068 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 5323 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.pipeline.TestPipelineClose |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/798 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 97f2fd6b2c5c 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 77170e7 |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/8/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/8/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/8/testReport/ |
   | Max. process+thread count | 4479 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm 
hadoop-ozone/ozone-manager hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 242923)
Time Spent: 8h 20m  (was: 8h 10m)

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8h 20m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> As with OM HA, we are planning to implement double buffer 
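
A minimal, self-contained sketch of what such a table cache could look like is below; the key/value wrappers, the epoch field, and the cleanup rule are illustrative assumptions rather than the committed API.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentSkipListMap;

/** Sketch: a partial table cache whose entries are dropped once flushed to the DB. */
final class TableCacheSketch<K, V> {

  private static final class Entry<V> {
    final V value;
    final long epoch;          // sequence number of the write that produced this entry
    Entry(V value, long epoch) { this.value = value; this.epoch = epoch; }
  }

  private final Map<K, Entry<V>> cache = new ConcurrentHashMap<>();
  private final ConcurrentSkipListMap<Long, K> epochIndex = new ConcurrentSkipListMap<>();

  /** Write path: cache the value before the batch is flushed to the underlying table. */
  void put(K key, V value, long epoch) {
    cache.put(key, new Entry<>(value, epoch));
    epochIndex.put(epoch, key);
  }

  /** Read path: return the in-flight value, or null to fall back to the DB. */
  V get(K key) {
    Entry<V> e = cache.get(key);
    return e == null ? null : e.value;
  }

  /** Once writes up to lastFlushedEpoch are durable, their cache entries can go. */
  void cleanup(long lastFlushedEpoch) {
    epochIndex.headMap(lastFlushedEpoch, true).forEach((epoch, key) -> {
      Entry<V> e = cache.get(key);
      if (e != null && e.epoch <= lastFlushedEpoch) {
        cache.remove(key, e);
      }
      epochIndex.remove(epoch);
    });
  }
}
{code}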

[jira] [Work logged] (HDDS-1501) Create a Recon task interface that is used to update the aggregate DB whenever updates from OM are received.

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1501?focusedWorklogId=242920=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242920
 ]

ASF GitHub Bot logged work on HDDS-1501:


Author: ASF GitHub Bot
Created on: 15/May/19 22:49
Start Date: 15/May/19 22:49
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #819:  HDDS-1501 : 
Create a Recon task interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#discussion_r284481714
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ReconTaskControllerImpl.java
 ##
 @@ -0,0 +1,199 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import static 
org.apache.hadoop.ozone.recon.ReconServerConfigKeys.OZONE_RECON_TASK_THREAD_COUNT_DEFAULT;
+import static 
org.apache.hadoop.ozone.recon.ReconServerConfigKeys.OZONE_RECON_TASK_THREAD_COUNT_KEY;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.Semaphore;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.recon.recovery.ReconOMMetadataManager;
+import org.hadoop.ozone.recon.schema.tables.daos.ReconTaskStatusDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.ReconTaskStatus;
+import org.jooq.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.inject.Inject;
+
+/**
+ * Implementation of ReconTaskController.
+ */
+public class ReconTaskControllerImpl implements ReconTaskController {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ReconTaskControllerImpl.class);
+
+  private Map<String, ReconDBUpdateTask> reconDBUpdateTasks;
+  private ExecutorService executorService;
+  private int threadCount = 1;
+  private final Semaphore taskSemaphore = new Semaphore(1);
+  private final ReconOMMetadataManager omMetadataManager;
+  private Map<String, AtomicInteger> taskFailureCounter = new HashMap<>();
+  private static final int TASK_FAILURE_THRESHOLD = 2;
+  private ReconTaskStatusDao reconTaskStatusDao;
+
+  @Inject
+  public ReconTaskControllerImpl(OzoneConfiguration configuration,
+ ReconOMMetadataManager omMetadataManager,
+ Configuration sqlConfiguration) {
+this.omMetadataManager = omMetadataManager;
+reconDBUpdateTasks = new HashMap<>();
+threadCount = configuration.getInt(OZONE_RECON_TASK_THREAD_COUNT_KEY,
+OZONE_RECON_TASK_THREAD_COUNT_DEFAULT);
+executorService = Executors.newFixedThreadPool(threadCount);
+reconTaskStatusDao = new ReconTaskStatusDao(sqlConfiguration);
+  }
+
+  @Override
+  public void registerTask(ReconDBUpdateTask task) {
+String taskName = task.getTaskName();
+LOG.info("Registered task " + taskName + " with controller.");
+
+// Store task in Task Map.
+reconDBUpdateTasks.put(taskName, task);
+// Store Task in Task failure tracker.
+taskFailureCounter.put(taskName, new AtomicInteger(0));
+// Create DB record for the task.
+ReconTaskStatus reconTaskStatusRecord = new ReconTaskStatus(taskName,
+0L, 0L);
+reconTaskStatusDao.insert(reconTaskStatusRecord);
+  }
+
+  /**
+   * For every registered task, we try process step twice and then reprocess
+   * once (if process failed twice) to absorb the events. If a task has failed
+   * reprocess call more than 2 times across events, it is unregistered
+   * (blacklisted).
+   * @param events set of events
+   * @throws InterruptedException
+   */
+  @Override
+  public void consumeOMEvents(OMUpdateEventBatch events)
+  throws InterruptedException {
+
+
+taskSemaphore.acquire();
+
+try {
+  
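
The javadoc above describes the retry policy in prose; here is a compressed, self-contained sketch of that policy (two process attempts, one reprocess fallback, then blacklisting after repeated reprocess failures). The Task interface and names are stand-ins, not the actual ReconDBUpdateTask API.

{code:java}
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

/** Sketch of the process/reprocess/blacklist policy described in the javadoc above. */
final class TaskRetryPolicySketch<E, S> {

  /** Hypothetical stand-in for ReconDBUpdateTask. */
  interface Task<E, S> {
    String name();
    boolean process(E events);       // incremental update from an OM event batch
    boolean reprocess(S omSnapshot); // full rebuild from an OM snapshot
  }

  private static final int TASK_FAILURE_THRESHOLD = 2;
  private final Map<String, Task<E, S>> tasks = new HashMap<>();
  private final Map<String, AtomicInteger> reprocessFailures = new HashMap<>();

  void register(Task<E, S> task) {
    tasks.put(task.name(), task);
    reprocessFailures.put(task.name(), new AtomicInteger(0));
  }

  void consume(E events, S omSnapshot) {
    Iterator<Task<E, S>> it = tasks.values().iterator();
    while (it.hasNext()) {
      Task<E, S> task = it.next();
      // Try the incremental step twice, then fall back to one full reprocess.
      boolean ok = task.process(events) || task.process(events) || task.reprocess(omSnapshot);
      if (!ok && reprocessFailures.get(task.name()).incrementAndGet() > TASK_FAILURE_THRESHOLD) {
        it.remove();  // blacklist a task whose reprocess keeps failing across event batches
      }
    }
  }
}
{code}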

[jira] [Work logged] (HDDS-1501) Create a Recon task interface that is used to update the aggregate DB whenever updates from OM are received.

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1501?focusedWorklogId=242914=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242914
 ]

ASF GitHub Bot logged work on HDDS-1501:


Author: ASF GitHub Bot
Created on: 15/May/19 22:45
Start Date: 15/May/19 22:45
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #819:  HDDS-1501 : 
Create a Recon task interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#discussion_r284480734
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ReconTaskControllerImpl.java
 ##
 @@ -0,0 +1,199 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import static 
org.apache.hadoop.ozone.recon.ReconServerConfigKeys.OZONE_RECON_TASK_THREAD_COUNT_DEFAULT;
+import static 
org.apache.hadoop.ozone.recon.ReconServerConfigKeys.OZONE_RECON_TASK_THREAD_COUNT_KEY;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.Semaphore;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.recon.recovery.ReconOMMetadataManager;
+import org.hadoop.ozone.recon.schema.tables.daos.ReconTaskStatusDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.ReconTaskStatus;
+import org.jooq.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.inject.Inject;
+
+/**
+ * Implementation of ReconTaskController.
+ */
+public class ReconTaskControllerImpl implements ReconTaskController {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ReconTaskControllerImpl.class);
+
+  private Map<String, ReconDBUpdateTask> reconDBUpdateTasks;
+  private ExecutorService executorService;
+  private int threadCount = 1;
+  private final Semaphore taskSemaphore = new Semaphore(1);
+  private final ReconOMMetadataManager omMetadataManager;
+  private Map<String, AtomicInteger> taskFailureCounter = new HashMap<>();
+  private static final int TASK_FAILURE_THRESHOLD = 2;
+  private ReconTaskStatusDao reconTaskStatusDao;
+
+  @Inject
+  public ReconTaskControllerImpl(OzoneConfiguration configuration,
+ ReconOMMetadataManager omMetadataManager,
+ Configuration sqlConfiguration) {
+this.omMetadataManager = omMetadataManager;
+reconDBUpdateTasks = new HashMap<>();
+threadCount = configuration.getInt(OZONE_RECON_TASK_THREAD_COUNT_KEY,
+OZONE_RECON_TASK_THREAD_COUNT_DEFAULT);
+executorService = Executors.newFixedThreadPool(threadCount);
+reconTaskStatusDao = new ReconTaskStatusDao(sqlConfiguration);
+  }
+
+  @Override
+  public void registerTask(ReconDBUpdateTask task) {
+String taskName = task.getTaskName();
+LOG.info("Registered task " + taskName + " with controller.");
+
+// Store task in Task Map.
+reconDBUpdateTasks.put(taskName, task);
+// Store Task in Task failure tracker.
+taskFailureCounter.put(taskName, new AtomicInteger(0));
+// Create DB record for the task.
+ReconTaskStatus reconTaskStatusRecord = new ReconTaskStatus(taskName,
+0L, 0L);
+reconTaskStatusDao.insert(reconTaskStatusRecord);
+  }
+
+  /**
+   * For every registered task, we try process step twice and then reprocess
+   * once (if process failed twice) to absorb the events. If a task has failed
+   * reprocess call more than 2 times across events, it is unregistered
+   * (blacklisted).
+   * @param events set of events
+   * @throws InterruptedException
+   */
+  @Override
+  public void consumeOMEvents(OMUpdateEventBatch events)
+  throws InterruptedException {
+
+
+taskSemaphore.acquire();
+
+try {
+  

[jira] [Work logged] (HDDS-1501) Create a Recon task interface that is used to update the aggregate DB whenever updates from OM are received.

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1501?focusedWorklogId=242913=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242913
 ]

ASF GitHub Bot logged work on HDDS-1501:


Author: ASF GitHub Bot
Created on: 15/May/19 22:44
Start Date: 15/May/19 22:44
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #819:  HDDS-1501 : 
Create a Recon task interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#discussion_r284480391
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/tasks/TestReconTaskControllerImpl.java
 ##
 @@ -0,0 +1,172 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_DB_DIRS;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+import java.io.File;
+import java.util.Collections;
+
+import org.apache.commons.lang3.tuple.ImmutablePair;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.recon.persistence.AbstractSqlDatabaseTest;
+import org.apache.hadoop.ozone.recon.recovery.ReconOMMetadataManager;
+import org.apache.hadoop.ozone.recon.recovery.ReconOmMetadataManagerImpl;
+import org.hadoop.ozone.recon.schema.ReconInternalSchemaDefinition;
+import org.hadoop.ozone.recon.schema.tables.daos.ReconTaskStatusDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.ReconTaskStatus;
+import org.jooq.Configuration;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * Class used to test ReconTaskControllerImpl.
+ */
+public class TestReconTaskControllerImpl extends AbstractSqlDatabaseTest {
+
+  private ReconTaskController reconTaskController;
+
+  private Configuration sqlConfiguration;
+  @Before
+  public void setUp() throws Exception {
+
+File omDbDir = temporaryFolder.newFolder();
+OzoneConfiguration ozoneConfiguration = new OzoneConfiguration();
+ozoneConfiguration.set(OZONE_OM_DB_DIRS, omDbDir.getAbsolutePath());
+ReconOMMetadataManager omMetadataManager = new ReconOmMetadataManagerImpl(
+ozoneConfiguration);
+
+sqlConfiguration = getInjector()
+.getInstance(Configuration.class);
+
+ReconInternalSchemaDefinition schemaDefinition = getInjector().
+getInstance(ReconInternalSchemaDefinition.class);
+schemaDefinition.initializeSchema();
+
+reconTaskController = new ReconTaskControllerImpl(ozoneConfiguration,
+omMetadataManager, sqlConfiguration);
+  }
+
+  @Test
+  public void testRegisterTask() throws Exception {
+String taskName = "Dummy_" + System.currentTimeMillis();
+DummyReconDBTask dummyReconDBTask =
+new DummyReconDBTask(taskName, DummyReconDBTask.TaskType.ALWAYS_PASS);
+reconTaskController.registerTask(dummyReconDBTask);
+assertTrue(reconTaskController.getRegisteredTasks().size() == 1);
+assertTrue(reconTaskController.getRegisteredTasks()
+.get(dummyReconDBTask.getTaskName()) == dummyReconDBTask);
+  }
+
+  @Test
+  public void testConsumeOMEvents() throws Exception {
+
+ReconDBUpdateTask reconDBUpdateTaskMock = mock(ReconDBUpdateTask.class);
+when(reconDBUpdateTaskMock.getTablesListeningOn()).thenReturn(Collections
+.EMPTY_LIST);
+when(reconDBUpdateTaskMock.getTaskName()).thenReturn("MockTask");
+when(reconDBUpdateTaskMock.process(any(OMUpdateEventBatch.class)))
+.thenReturn(new ImmutablePair<>("MockTask", true));
+reconTaskController.registerTask(reconDBUpdateTaskMock);
+reconTaskController.consumeOMEvents(
+new OMUpdateEventBatch(Collections.emptyList()));
+
+verify(reconDBUpdateTaskMock, times(1))
+.process(any());
+  }
+
+  @Test
+  public void 

[jira] [Work logged] (HDDS-1501) Create a Recon task interface that is used to update the aggregate DB whenever updates from OM are received.

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1501?focusedWorklogId=242909=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242909
 ]

ASF GitHub Bot logged work on HDDS-1501:


Author: ASF GitHub Bot
Created on: 15/May/19 22:42
Start Date: 15/May/19 22:42
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #819:  HDDS-1501 : 
Create a Recon task interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#discussion_r284479782
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/tasks/TestContainerKeyMapperTask.java
 ##
 @@ -176,6 +163,130 @@ public void testRun() throws Exception{
 keyPrefixesForContainer.get(containerKeyPrefix).intValue());
   }
 
+  @Test
+  public void testProcess() throws IOException {
 
 Review comment:
   Can you rename this to say what it is processing?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 242909)
Time Spent: 2h 10m  (was: 2h)

> Create a Recon task interface that is used to update the aggregate DB 
> whenever updates from OM are received.
> 
>
> Key: HDDS-1501
> URL: https://issues.apache.org/jira/browse/HDDS-1501
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14490) RBF: Remove unnecessary quota checks

2019-05-15 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840829#comment-16840829
 ] 

Giovanni Matteo Fumarola commented on HDFS-14490:
-

Thanks [~elgoiri] for the review. [^HDFS-14490-HDFS-13891-03.patch] looks ok.
Thanks [~ayushtkn] for working on this. 
Committed to the branch.

> RBF: Remove unnecessary quota checks
> 
>
> Key: HDFS-14490
> URL: https://issues.apache.org/jira/browse/HDFS-14490
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14490-HDFS-13891-01.patch, 
> HDFS-14490-HDFS-13891-02.patch, HDFS-14490-HDFS-13891-03.patch
>
>
> Remove unnecessary quota checks for unrelated operations such as setEcPolicy, 
> getEcPolicy and similar  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1501) Create a Recon task interface that is used to update the aggregate DB whenever updates from OM are received.

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1501?focusedWorklogId=242904=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242904
 ]

ASF GitHub Bot logged work on HDDS-1501:


Author: ASF GitHub Bot
Created on: 15/May/19 22:23
Start Date: 15/May/19 22:23
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #819:  HDDS-1501 : 
Create a Recon task interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#discussion_r284474899
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ReconTaskControllerImpl.java
 ##
 @@ -0,0 +1,199 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import static 
org.apache.hadoop.ozone.recon.ReconServerConfigKeys.OZONE_RECON_TASK_THREAD_COUNT_DEFAULT;
+import static 
org.apache.hadoop.ozone.recon.ReconServerConfigKeys.OZONE_RECON_TASK_THREAD_COUNT_KEY;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.Semaphore;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.recon.recovery.ReconOMMetadataManager;
+import org.hadoop.ozone.recon.schema.tables.daos.ReconTaskStatusDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.ReconTaskStatus;
+import org.jooq.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.inject.Inject;
+
+/**
+ * Implementation of ReconTaskController.
+ */
+public class ReconTaskControllerImpl implements ReconTaskController {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ReconTaskControllerImpl.class);
+
+  private Map<String, ReconDBUpdateTask> reconDBUpdateTasks;
+  private ExecutorService executorService;
+  private int threadCount = 1;
+  private final Semaphore taskSemaphore = new Semaphore(1);
+  private final ReconOMMetadataManager omMetadataManager;
+  private Map<String, AtomicInteger> taskFailureCounter = new HashMap<>();
+  private static final int TASK_FAILURE_THRESHOLD = 2;
+  private ReconTaskStatusDao reconTaskStatusDao;
+
+  @Inject
+  public ReconTaskControllerImpl(OzoneConfiguration configuration,
+ ReconOMMetadataManager omMetadataManager,
+ Configuration sqlConfiguration) {
+this.omMetadataManager = omMetadataManager;
+reconDBUpdateTasks = new HashMap<>();
+threadCount = configuration.getInt(OZONE_RECON_TASK_THREAD_COUNT_KEY,
+OZONE_RECON_TASK_THREAD_COUNT_DEFAULT);
+executorService = Executors.newFixedThreadPool(threadCount);
+reconTaskStatusDao = new ReconTaskStatusDao(sqlConfiguration);
+  }
+
+  @Override
+  public void registerTask(ReconDBUpdateTask task) {
+String taskName = task.getTaskName();
+LOG.info("Registered task " + taskName + " with controller.");
+
+// Store task in Task Map.
+reconDBUpdateTasks.put(taskName, task);
+// Store Task in Task failure tracker.
+taskFailureCounter.put(taskName, new AtomicInteger(0));
+// Create DB record for the task.
+ReconTaskStatus reconTaskStatusRecord = new ReconTaskStatus(taskName,
+0L, 0L);
+reconTaskStatusDao.insert(reconTaskStatusRecord);
+  }
+
+  /**
+   * For every registered task, we try process step twice and then reprocess
+   * once (if process failed twice) to absorb the events. If a task has failed
+   * reprocess call more than 2 times across events, it is unregistered
+   * (blacklisted).
+   * @param events set of events
+   * @throws InterruptedException
+   */
+  @Override
+  public void consumeOMEvents(OMUpdateEventBatch events)
+  throws InterruptedException {
+
+
+taskSemaphore.acquire();
+
+try {
+  

[jira] [Updated] (HDFS-14490) RBF: Remove unnecessary quota checks

2019-05-15 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HDFS-14490:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> RBF: Remove unnecessary quota checks
> 
>
> Key: HDFS-14490
> URL: https://issues.apache.org/jira/browse/HDFS-14490
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14490-HDFS-13891-01.patch, 
> HDFS-14490-HDFS-13891-02.patch, HDFS-14490-HDFS-13891-03.patch
>
>
> Remove unnecessary quota checks for unrelated operations such as setEcPolicy, 
> getEcPolicy and similar  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1501) Create a Recon task interface that is used to update the aggregate DB whenever updates from OM are received.

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1501?focusedWorklogId=242903=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242903
 ]

ASF GitHub Bot logged work on HDDS-1501:


Author: ASF GitHub Bot
Created on: 15/May/19 22:22
Start Date: 15/May/19 22:22
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #819:  HDDS-1501 : 
Create a Recon task interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#discussion_r284474695
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ReconTaskControllerImpl.java
 ##
 @@ -0,0 +1,199 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import static 
org.apache.hadoop.ozone.recon.ReconServerConfigKeys.OZONE_RECON_TASK_THREAD_COUNT_DEFAULT;
+import static 
org.apache.hadoop.ozone.recon.ReconServerConfigKeys.OZONE_RECON_TASK_THREAD_COUNT_KEY;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.Semaphore;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.recon.recovery.ReconOMMetadataManager;
+import org.hadoop.ozone.recon.schema.tables.daos.ReconTaskStatusDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.ReconTaskStatus;
+import org.jooq.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.inject.Inject;
+
+/**
+ * Implementation of ReconTaskController.
+ */
+public class ReconTaskControllerImpl implements ReconTaskController {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ReconTaskControllerImpl.class);
+
+  private Map<String, ReconDBUpdateTask> reconDBUpdateTasks;
+  private ExecutorService executorService;
+  private int threadCount = 1;
+  private final Semaphore taskSemaphore = new Semaphore(1);
+  private final ReconOMMetadataManager omMetadataManager;
+  private Map<String, AtomicInteger> taskFailureCounter = new HashMap<>();
+  private static final int TASK_FAILURE_THRESHOLD = 2;
+  private ReconTaskStatusDao reconTaskStatusDao;
+
+  @Inject
+  public ReconTaskControllerImpl(OzoneConfiguration configuration,
+ ReconOMMetadataManager omMetadataManager,
+ Configuration sqlConfiguration) {
+this.omMetadataManager = omMetadataManager;
+reconDBUpdateTasks = new HashMap<>();
+threadCount = configuration.getInt(OZONE_RECON_TASK_THREAD_COUNT_KEY,
+OZONE_RECON_TASK_THREAD_COUNT_DEFAULT);
+executorService = Executors.newFixedThreadPool(threadCount);
+reconTaskStatusDao = new ReconTaskStatusDao(sqlConfiguration);
+  }
+
+  @Override
+  public void registerTask(ReconDBUpdateTask task) {
+String taskName = task.getTaskName();
+LOG.info("Registered task " + taskName + " with controller.");
+
+// Store task in Task Map.
+reconDBUpdateTasks.put(taskName, task);
+// Store Task in Task failure tracker.
+taskFailureCounter.put(taskName, new AtomicInteger(0));
+// Create DB record for the task.
+ReconTaskStatus reconTaskStatusRecord = new ReconTaskStatus(taskName,
+0L, 0L);
+reconTaskStatusDao.insert(reconTaskStatusRecord);
+  }
+
+  /**
+   * For every registered task, we try process step twice and then reprocess
+   * once (if process failed twice) to absorb the events. If a task has failed
+   * reprocess call more than 2 times across events, it is unregistered
+   * (blacklisted).
+   * @param events set of events
+   * @throws InterruptedException
+   */
+  @Override
+  public void consumeOMEvents(OMUpdateEventBatch events)
+  throws InterruptedException {
+
+
+taskSemaphore.acquire();
+
+try {
+  

[jira] [Work logged] (HDDS-1501) Create a Recon task interface that is used to update the aggregate DB whenever updates from OM are received.

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1501?focusedWorklogId=242900=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242900
 ]

ASF GitHub Bot logged work on HDDS-1501:


Author: ASF GitHub Bot
Created on: 15/May/19 22:20
Start Date: 15/May/19 22:20
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #819:  HDDS-1501 : 
Create a Recon task interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#discussion_r284474114
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ReconTaskControllerImpl.java
 ##
 @@ -0,0 +1,199 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import static 
org.apache.hadoop.ozone.recon.ReconServerConfigKeys.OZONE_RECON_TASK_THREAD_COUNT_DEFAULT;
+import static 
org.apache.hadoop.ozone.recon.ReconServerConfigKeys.OZONE_RECON_TASK_THREAD_COUNT_KEY;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.Semaphore;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.recon.recovery.ReconOMMetadataManager;
+import org.hadoop.ozone.recon.schema.tables.daos.ReconTaskStatusDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.ReconTaskStatus;
+import org.jooq.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.inject.Inject;
+
+/**
+ * Implementation of ReconTaskController.
+ */
+public class ReconTaskControllerImpl implements ReconTaskController {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ReconTaskControllerImpl.class);
+
+  private Map<String, ReconDBUpdateTask> reconDBUpdateTasks;
+  private ExecutorService executorService;
+  private int threadCount = 1;
+  private final Semaphore taskSemaphore = new Semaphore(1);
+  private final ReconOMMetadataManager omMetadataManager;
+  private Map<String, AtomicInteger> taskFailureCounter = new HashMap<>();
+  private static final int TASK_FAILURE_THRESHOLD = 2;
+  private ReconTaskStatusDao reconTaskStatusDao;
+
+  @Inject
+  public ReconTaskControllerImpl(OzoneConfiguration configuration,
+ ReconOMMetadataManager omMetadataManager,
+ Configuration sqlConfiguration) {
+this.omMetadataManager = omMetadataManager;
+reconDBUpdateTasks = new HashMap<>();
+threadCount = configuration.getInt(OZONE_RECON_TASK_THREAD_COUNT_KEY,
+OZONE_RECON_TASK_THREAD_COUNT_DEFAULT);
+executorService = Executors.newFixedThreadPool(threadCount);
+reconTaskStatusDao = new ReconTaskStatusDao(sqlConfiguration);
+  }
+
+  @Override
+  public void registerTask(ReconDBUpdateTask task) {
+String taskName = task.getTaskName();
+LOG.info("Registered task " + taskName + " with controller.");
+
+// Store task in Task Map.
+reconDBUpdateTasks.put(taskName, task);
+// Store Task in Task failure tracker.
+taskFailureCounter.put(taskName, new AtomicInteger(0));
+// Create DB record for the task.
+ReconTaskStatus reconTaskStatusRecord = new ReconTaskStatus(taskName,
+0L, 0L);
+reconTaskStatusDao.insert(reconTaskStatusRecord);
+  }
+
+  /**
+   * For every registered task, we try the process step twice and then
+   * reprocess once (if both process attempts fail) to absorb the events.
+   * If a task's reprocess call fails more than 2 times across events, it is
+   * unregistered (blacklisted).
+   * @param events set of events
+   * @throws InterruptedException
+   */
+  @Override
+  public void consumeOMEvents(OMUpdateEventBatch events)
+  throws InterruptedException {
+
+
+taskSemaphore.acquire();
+
+try {
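(The quoted diff is truncated at this point. As a purely illustrative sketch
of the retry/reprocess/blacklist flow described in the javadoc above -- this
is not the actual ReconTaskControllerImpl code, and the Task interface,
method names, and return types below are stand-in assumptions -- the control
flow could look roughly like this:)

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

public class RetryAndBlacklistSketch {

  /** Stand-in for a Recon task; not the real ReconDBUpdateTask API. */
  interface Task {
    boolean process(Object events);   // incremental step over new events
    boolean reprocess();              // full rebuild fallback
  }

  private static final int TASK_FAILURE_THRESHOLD = 2;

  private final Map<String, Task> tasks = new HashMap<>();
  private final Map<String, AtomicInteger> failures = new HashMap<>();

  void register(String name, Task task) {
    tasks.put(name, task);
    failures.put(name, new AtomicInteger(0));
  }

  void consumeEvents(Object events) {
    Iterator<Map.Entry<String, Task>> it = tasks.entrySet().iterator();
    while (it.hasNext()) {
      Map.Entry<String, Task> entry = it.next();
      Task task = entry.getValue();

      // Try the incremental process step twice.
      boolean ok = task.process(events) || task.process(events);

      // If both attempts failed, fall back to a full reprocess once, and
      // blacklist tasks whose reprocess keeps failing across events.
      if (!ok && !task.reprocess()
          && failures.get(entry.getKey()).incrementAndGet()
              > TASK_FAILURE_THRESHOLD) {
        it.remove();
      }
    }
  }
}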
+  

[jira] [Commented] (HDFS-14490) RBF: Remove unnecessary quota checks

2019-05-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840822#comment-16840822
 ] 

Íñigo Goiri commented on HDFS-14490:


Thanks [~ayushtkn] for the update.
+1 on  [^HDFS-14490-HDFS-13891-03.patch].

> RBF: Remove unnecessary quota checks
> 
>
> Key: HDFS-14490
> URL: https://issues.apache.org/jira/browse/HDFS-14490
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14490-HDFS-13891-01.patch, 
> HDFS-14490-HDFS-13891-02.patch, HDFS-14490-HDFS-13891-03.patch
>
>
> Remove unnecessary quota checks for unrelated operations such as setEcPolicy, 
> getEcPolicy and similar  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1520) Ozone Classpath does not scale well using long path prefix to OZONE_HOME

2019-05-15 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840819#comment-16840819
 ] 

Eric Yang commented on HDDS-1520:
-

[~anu] Updated.

> Ozone Classpath does not scale well using long path prefix to OZONE_HOME
> 
>
> Key: HDDS-1520
> URL: https://issues.apache.org/jira/browse/HDDS-1520
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Elek, Marton
>Priority: Major
>
> [~eyang] reported that the ozone can't be started if the ozone directory is 
> symlinked due to the classpath assembly. It should work even if the directory 
> is symlinked.
> The Ozone script generates the CLASSPATH environment variable by parsing the 
> ozone-0.5.0-SNAPSHOT/share/ozone/classpath/*.classpath file and then substituting 
> HDDS_LIB_JARS_DIR with OZONE_HOME/share/ozone/lib.  When the OZONE_HOME directory 
> path has a long prefix, this can cause the CLASSPATH environment variable to 
> exceed the string length limit for a single environment variable.  This causes 
> Ozone commands to fail with seemingly random class-not-found errors.
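
To illustrate the effect being described (the template contents, the prefix,
and the 128 KB figure below are assumptions for the sake of the example, not
values taken from the Ozone scripts), a small sketch of how the substitution
repeats a long installation prefix in every classpath entry:

public class ClasspathLengthSketch {
  public static void main(String[] args) {
    // Stand-in for the entries of a *.classpath file, each relative to a
    // placeholder that the launcher script later substitutes.
    StringBuilder template = new StringBuilder();
    for (int i = 0; i < 200; i++) {
      if (i > 0) {
        template.append(':');
      }
      template.append("$HDDS_LIB_JARS_DIR/dependency-").append(i).append(".jar");
    }

    String shortHome = "/opt/ozone";
    String longHome = "/data/disk01/teams/storage/deployments/2019-05-15"
        + "/releases/ozone-0.5.0-SNAPSHOT";   // assumed long prefix

    String shortCp = template.toString()
        .replace("$HDDS_LIB_JARS_DIR", shortHome + "/share/ozone/lib");
    String longCp = template.toString()
        .replace("$HDDS_LIB_JARS_DIR", longHome + "/share/ozone/lib");

    // Every entry repeats the prefix, so CLASSPATH grows with
    // (prefix length) x (number of jars) and can hit per-variable limits
    // (commonly on the order of 128 KB on Linux).
    System.out.println("short prefix -> " + shortCp.length() + " chars");
    System.out.println("long  prefix -> " + longCp.length() + " chars");
  }
}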



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1520) Ozone Classpath does not scale well using long path prefix to OZONE_HOME

2019-05-15 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HDDS-1520:

Description: 
[~eyang] reported that the ozone can't be started if the ozone directory is 
symlinked due to the classpath assembly. It should work even if the directory 
is symlinked.

The Ozone script generates the CLASSPATH environment variable by parsing the 
ozone-0.5.0-SNAPSHOT/share/ozone/classpath/*.classpath file and then substituting 
HDDS_LIB_JARS_DIR with OZONE_HOME/share/ozone/lib.  When the OZONE_HOME directory 
path has a long prefix, this can cause the CLASSPATH environment variable to exceed 
the string length limit for a single environment variable.  This causes Ozone 
commands to fail with seemingly random class-not-found errors.

  was:[~eyang] reported that the ozone can't be started if the ozone directory 
is symlinked due to the classpath assembly. It should work even if the 
directory is symlinked.


> Ozone Classpath does not scale well using long path prefix to OZONE_HOME
> 
>
> Key: HDDS-1520
> URL: https://issues.apache.org/jira/browse/HDDS-1520
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Elek, Marton
>Priority: Major
>
> [~eyang] reported that the ozone can't be started if the ozone directory is 
> symlinked due to the classpath assembly. It should work even if the directory 
> is symlinked.
> The Ozone script generates the CLASSPATH environment variable by parsing the 
> ozone-0.5.0-SNAPSHOT/share/ozone/classpath/*.classpath file and then substituting 
> HDDS_LIB_JARS_DIR with OZONE_HOME/share/ozone/lib.  When the OZONE_HOME directory 
> path has a long prefix, this can cause the CLASSPATH environment variable to 
> exceed the string length limit for a single environment variable.  This causes 
> Ozone commands to fail with seemingly random class-not-found errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1501) Create a Recon task interface that is used to update the aggregate DB whenever updates from OM are received.

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1501?focusedWorklogId=242896=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242896
 ]

ASF GitHub Bot logged work on HDDS-1501:


Author: ASF GitHub Bot
Created on: 15/May/19 22:13
Start Date: 15/May/19 22:13
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #819:  HDDS-1501 : 
Create a Recon task interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#discussion_r284472229
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/OMDBUpdatesHandler.java
 ##
 @@ -0,0 +1,216 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import static org.apache.hadoop.ozone.om.OmMetadataManagerImpl.BUCKET_TABLE;
+import static org.apache.hadoop.ozone.om.OmMetadataManagerImpl.KEY_TABLE;
+import static org.apache.hadoop.ozone.om.OmMetadataManagerImpl.VOLUME_TABLE;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.utils.db.CodecRegistry;
+import org.rocksdb.RocksDBException;
+import org.rocksdb.WriteBatch;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Class used to listen on OM RocksDB updates.
+ */
+public class OMDBUpdatesHandler extends WriteBatch.Handler{
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMDBUpdatesHandler.class);
+
+  private OMMetadataManager omMetadataManager;
+  private Map<Integer, String> tablesNames;
+  private CodecRegistry codecRegistry;
+  private List<OMDBUpdateEvent> omdbUpdateEvents = new ArrayList<>();
+
+  public OMDBUpdatesHandler(OMMetadataManager omMetadataManager) {
+this.omMetadataManager = omMetadataManager;
+tablesNames = omMetadataManager.getStore().getTableNames();
+codecRegistry = omMetadataManager.getStore().getCodecRegistry();
+  }
+
+  @Override
+  public void put(int cfIndex, byte[] keyBytes, byte[] valueBytes) throws
+  RocksDBException {
+try {
+  String tableName = tablesNames.get(cfIndex);
+  Class keyType = getKeyType(tableName);
+  Class valueType = getValueType(tableName);
+  if (valueType == null) {
+return;
+  }
+  Object key = codecRegistry.asObject(keyBytes, keyType);
+  Object value = codecRegistry.asObject(valueBytes, valueType);
+  OMDBUpdateEvent.OMUpdateEventBuilder builder =
+  new OMDBUpdateEvent.OMUpdateEventBuilder<>();
+  builder.setTable(tableName);
+  builder.setKey(key);
+  builder.setValue(value);
+  builder.setAction(OMDBUpdateEvent.OMDBUpdateAction.PUT);
+  OMDBUpdateEvent putEvent = builder.build();
+  // Temporarily adding to an event buffer for testing. In subsequent 
JIRAs,
+  // a Recon side class will be implemented that requests delta updates
+  // from OM and calls on this handler. In that case, we will fill up
+  // this buffer and pass it on to the ReconTaskController which has
+  // tasks waiting on OM events.
+  omdbUpdateEvents.add(putEvent);
+  LOG.info("Generated OM update Event for table : " + putEvent.getTable()
+  + ", Key = " + putEvent.getKey());
+} catch (IOException ioEx) {
+  LOG.error("Exception when reading key : " + ioEx);
+}
+  }
+
+  @Override
+  public void delete(int cfIndex, byte[] keyBytes) throws RocksDBException {
+try {
+  String tableName = tablesNames.get(cfIndex);
+  Class keyType = getKeyType(tableName);
+  Object key = codecRegistry.asObject(keyBytes, keyType);
+  OMDBUpdateEvent.OMUpdateEventBuilder builder =
+  new OMDBUpdateEvent.OMUpdateEventBuilder<>();
+  builder.setTable(tableName);
+  builder.setKey(key);
+  builder.setAction(OMDBUpdateEvent.OMDBUpdateAction.DELETE);
+  OMDBUpdateEvent deleteEvent = 

[jira] [Work logged] (HDDS-1501) Create a Recon task interface that is used to update the aggregate DB whenever updates from OM are received.

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1501?focusedWorklogId=242895=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242895
 ]

ASF GitHub Bot logged work on HDDS-1501:


Author: ASF GitHub Bot
Created on: 15/May/19 22:12
Start Date: 15/May/19 22:12
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #819:  HDDS-1501 : 
Create a Recon task interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#discussion_r284472028
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/OMDBUpdatesHandler.java
 ##
 @@ -0,0 +1,216 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import static org.apache.hadoop.ozone.om.OmMetadataManagerImpl.BUCKET_TABLE;
+import static org.apache.hadoop.ozone.om.OmMetadataManagerImpl.KEY_TABLE;
+import static org.apache.hadoop.ozone.om.OmMetadataManagerImpl.VOLUME_TABLE;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.utils.db.CodecRegistry;
+import org.rocksdb.RocksDBException;
+import org.rocksdb.WriteBatch;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Class used to listen on OM RocksDB updates.
+ */
+public class OMDBUpdatesHandler extends WriteBatch.Handler{
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMDBUpdatesHandler.class);
+
+  private OMMetadataManager omMetadataManager;
+  private Map<Integer, String> tablesNames;
+  private CodecRegistry codecRegistry;
+  private List<OMDBUpdateEvent> omdbUpdateEvents = new ArrayList<>();
+
+  public OMDBUpdatesHandler(OMMetadataManager omMetadataManager) {
+this.omMetadataManager = omMetadataManager;
+tablesNames = omMetadataManager.getStore().getTableNames();
+codecRegistry = omMetadataManager.getStore().getCodecRegistry();
+  }
+
+  @Override
+  public void put(int cfIndex, byte[] keyBytes, byte[] valueBytes) throws
+  RocksDBException {
+try {
+  String tableName = tablesNames.get(cfIndex);
+  Class keyType = getKeyType(tableName);
+  Class valueType = getValueType(tableName);
+  if (valueType == null) {
+return;
+  }
+  Object key = codecRegistry.asObject(keyBytes, keyType);
+  Object value = codecRegistry.asObject(valueBytes, valueType);
+  OMDBUpdateEvent.OMUpdateEventBuilder builder =
+  new OMDBUpdateEvent.OMUpdateEventBuilder<>();
+  builder.setTable(tableName);
+  builder.setKey(key);
+  builder.setValue(value);
+  builder.setAction(OMDBUpdateEvent.OMDBUpdateAction.PUT);
+  OMDBUpdateEvent putEvent = builder.build();
+  // Temporarily adding to an event buffer for testing. In subsequent 
JIRAs,
+  // a Recon side class will be implemented that requests delta updates
+  // from OM and calls on this handler. In that case, we will fill up
+  // this buffer and pass it on to the ReconTaskController which has
+  // tasks waiting on OM events.
+  omdbUpdateEvents.add(putEvent);
+  LOG.info("Generated OM update Event for table : " + putEvent.getTable()
+  + ", Key = " + putEvent.getKey());
+} catch (IOException ioEx) {
+  LOG.error("Exception when reading key : " + ioEx);
+}
+  }
+
+  @Override
+  public void delete(int cfIndex, byte[] keyBytes) throws RocksDBException {
+try {
+  String tableName = tablesNames.get(cfIndex);
 
 Review comment:
   Looks like repetitive code; a Builder pattern for EventInfo would be nice.
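
As a sketch of the de-duplication suggested here (an illustration, not the
committed change), put() and delete() could delegate to one helper. The
fragment below is meant to slot into the class quoted above and uses only
the fields and helper methods shown there (tablesNames, codecRegistry,
getKeyType/getValueType, omdbUpdateEvents, LOG):

  private void enqueueEvent(int cfIndex, byte[] keyBytes, byte[] valueBytes,
      OMDBUpdateEvent.OMDBUpdateAction action) {
    try {
      String tableName = tablesNames.get(cfIndex);
      Class keyType = getKeyType(tableName);
      Object key = codecRegistry.asObject(keyBytes, keyType);

      OMDBUpdateEvent.OMUpdateEventBuilder builder =
          new OMDBUpdateEvent.OMUpdateEventBuilder<>();
      builder.setTable(tableName);
      builder.setKey(key);
      builder.setAction(action);

      if (valueBytes != null) {
        Class valueType = getValueType(tableName);
        if (valueType == null) {
          return;                       // table we do not track, skip event
        }
        builder.setValue(codecRegistry.asObject(valueBytes, valueType));
      }
      omdbUpdateEvents.add(builder.build());
    } catch (IOException ioEx) {
      LOG.error("Exception when decoding OM DB update", ioEx);
    }
  }

  // put() and delete() would then reduce to:
  //   enqueueEvent(cfIndex, keyBytes, valueBytes,
  //       OMDBUpdateEvent.OMDBUpdateAction.PUT);
  //   enqueueEvent(cfIndex, keyBytes, null,
  //       OMDBUpdateEvent.OMDBUpdateAction.DELETE);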
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:

[jira] [Work logged] (HDDS-1501) Create a Recon task interface that is used to update the aggregate DB whenever updates from OM are received.

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1501?focusedWorklogId=242892=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242892
 ]

ASF GitHub Bot logged work on HDDS-1501:


Author: ASF GitHub Bot
Created on: 15/May/19 22:10
Start Date: 15/May/19 22:10
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #819:  HDDS-1501 : 
Create a Recon task interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#discussion_r284471626
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/OMDBUpdatesHandler.java
 ##
 @@ -0,0 +1,216 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import static org.apache.hadoop.ozone.om.OmMetadataManagerImpl.BUCKET_TABLE;
+import static org.apache.hadoop.ozone.om.OmMetadataManagerImpl.KEY_TABLE;
+import static org.apache.hadoop.ozone.om.OmMetadataManagerImpl.VOLUME_TABLE;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.utils.db.CodecRegistry;
+import org.rocksdb.RocksDBException;
+import org.rocksdb.WriteBatch;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Class used to listen on OM RocksDB updates.
+ */
+public class OMDBUpdatesHandler extends WriteBatch.Handler{
 
 Review comment:
   missing space
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 242892)
Time Spent: 1h 10m  (was: 1h)

> Create a Recon task interface that is used to update the aggregate DB 
> whenever updates from OM are received.
> 
>
> Key: HDDS-1501
> URL: https://issues.apache.org/jira/browse/HDDS-1501
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1501) Create a Recon task interface that is used to update the aggregate DB whenever updates from OM are received.

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1501?focusedWorklogId=242891=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242891
 ]

ASF GitHub Bot logged work on HDDS-1501:


Author: ASF GitHub Bot
Created on: 15/May/19 22:08
Start Date: 15/May/19 22:08
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #819:  HDDS-1501 : 
Create a Recon task interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#discussion_r284470960
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/ReconServerConfigKeys.java
 ##
 @@ -112,6 +112,10 @@
   public static final String OZONE_RECON_SQL_MAX_IDLE_CONNECTION_TEST_STMT =
   "ozone.recon.sql.db.conn.idle.test";
 
+  public static final String OZONE_RECON_TASK_THREAD_COUNT_KEY =
+  "ozone.recon.task.thread.count";
+  public static final int OZONE_RECON_TASK_THREAD_COUNT_DEFAULT = 1;
 
 Review comment:
   Why create a fix sized threadpool vs create a thread per task?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 242891)
Time Spent: 1h  (was: 50m)

> Create a Recon task interface that is used to update the aggregate DB 
> whenever updates from OM are received.
> 
>
> Key: HDDS-1501
> URL: https://issues.apache.org/jira/browse/HDDS-1501
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1520) Ozone Classpath does not scale well using long path prefix to OZONE_HOME

2019-05-15 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HDDS-1520:

Summary: Ozone Classpath does not scale well using long path prefix to 
OZONE_HOME  (was: Classpath assembly doesn't work if ozone is symlinked)

> Ozone Classpath does not scale well using long path prefix to OZONE_HOME
> 
>
> Key: HDDS-1520
> URL: https://issues.apache.org/jira/browse/HDDS-1520
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Elek, Marton
>Priority: Major
>
> [~eyang] reported that the ozone can't be started if the ozone directory is 
> symlinked due to the classpath assembly. It should work even if the directory 
> is symlinked.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14490) RBF: Remove unnecessary quota checks

2019-05-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840805#comment-16840805
 ] 

Hadoop QA commented on HDFS-14490:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
10s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 28m  
1s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14490 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968838/HDFS-14490-HDFS-13891-03.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9bd84c1e7a90 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 3d4e957 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26793/testReport/ |
| Max. process+thread count | 1083 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26793/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Remove unnecessary quota checks
> 
>
> Key: HDFS-14490
>  

[jira] [Work logged] (HDDS-1499) OzoneManager Cache

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=242880=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242880
 ]

ASF GitHub Bot logged work on HDDS-1499:


Author: ASF GitHub Bot
Created on: 15/May/19 21:38
Start Date: 15/May/19 21:38
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #798: 
HDDS-1499. OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r284462072
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/TypedTable.java
 ##
 @@ -71,6 +96,27 @@ public boolean isEmpty() throws IOException {
 
   @Override
   public VALUE get(KEY key) throws IOException {
+// Here the metadata lock guarantees that the cache is not updated for the
+// same key while it is being read.
+if (cache != null) {
+  CacheValue cacheValue = cache.get(new CacheKey<>(key));
+  if (cacheValue == null) {
+return getFromTable(key);
+  } else {
+// If the cache value's last operation is DELETED, the key will eventually
+// be removed from the DB, so we should return null.
+if (cacheValue.getLastOperation() != CacheValue.OperationType.DELETED) {
+  return cacheValue.getValue();
+} else {
+  return null;
+}
+  }
+} else {
+  return getFromTable(key);
 
 Review comment:
   Understood the comment, updated the code to remove getTable in multiple 
places.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 242880)
Time Spent: 8h 10m  (was: 8h)

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8h 10m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> As with OM HA, we are planning to implement double buffer implementation to 
> flush transaction in a batch, instead of using rocksdb put() for every 
> operation. When this comes into place, we need a cache in OzoneManager HA to 
> handle/serve the requests for validation/returning responses.
>  
> This Jira will implement Cache as an integral part of the table. In this way 
> users using this table does not need to handle like check cache/db. For this, 
> we can update get API in the table to handle the cache.
>  
> This Jira will implement:
>  # Cache as a part of each Table.
>  # Uses this cache in get().
>  # Exposes api for cleanup, add entries to cache.
> Usage to add the entries in to cache will be done in further jira's.
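
As a generic illustration of the double-buffer batching idea mentioned in
the description above (this is not the OzoneManager implementation; the
names, types, and structure are assumptions), writers append to a current
buffer while a background flusher swaps buffers and commits the previous one
as a single batch, after which cache entries up to that epoch can be cleaned
up:

import java.util.ArrayList;
import java.util.List;

public class DoubleBufferSketch {
  private List<String> currentBuffer = new ArrayList<>();
  private long epoch = 0;

  /** Called per request: cheap append instead of a per-operation DB put. */
  public synchronized long add(String transaction) {
    currentBuffer.add(transaction);
    return epoch;
  }

  /** Called by a background flusher: swap buffers, then flush one batch. */
  public void flush() {
    List<String> toFlush;
    long flushedEpoch;
    synchronized (this) {
      toFlush = currentBuffer;
      currentBuffer = new ArrayList<>();
      flushedEpoch = epoch++;
    }
    commitBatch(toFlush);          // one batched write instead of many puts
    cleanupCache(flushedEpoch);    // e.g. a TableCache.cleanup(flushedEpoch)
  }

  private void commitBatch(List<String> batch) {
    // Placeholder for a batched DB write.
  }

  private void cleanupCache(long flushedEpoch) {
    // Placeholder for evicting cache entries with epoch <= flushedEpoch.
  }
}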



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1499) OzoneManager Cache

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=242878=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242878
 ]

ASF GitHub Bot logged work on HDDS-1499:


Author: ASF GitHub Bot
Created on: 15/May/19 21:37
Start Date: 15/May/19 21:37
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #798: 
HDDS-1499. OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r284461812
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/cache/PartialTableCache.java
 ##
 @@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.utils.db.cache;
+
+import java.util.Iterator;
+import java.util.TreeSet;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Evolving;
+
+
+
+/**
+ * This is used for the tables where we don't want to cache the entire table
+ * in memory.
+ */
+@Private
+@Evolving
+public class PartialTableCache<CACHEKEY extends CacheKey, CACHEVALUE extends CacheValue>
+implements TableCache<CACHEKEY, CACHEVALUE> {
+
+  private final ConcurrentHashMap<CACHEKEY, CACHEVALUE> cache;
+  private final TreeSet<EpochEntry<CACHEKEY>> epochEntries;
+  private ExecutorService executorService;
+
+
+
+  public PartialTableCache() {
+cache = new ConcurrentHashMap<>();
+epochEntries = new TreeSet<EpochEntry<CACHEKEY>>();
+// Created a singleThreadExecutor, so one cleanup will be running at a
+// time.
+executorService = Executors.newSingleThreadExecutor();
+  }
+
+  @Override
+  public CACHEVALUE get(CACHEKEY cachekey) {
+return cache.get(cachekey);
+  }
+
+  @Override
+  public void put(CACHEKEY cacheKey, CACHEVALUE value) {
+cache.put(cacheKey, value);
+CacheValue cacheValue = (CacheValue) cache.get(cacheKey);
+epochEntries.add(new EpochEntry<>(cacheValue.getEpoch(), cacheKey));
+  }
+
+  @Override
+  public void cleanup(long epoch) {
+executorService.submit(() -> evictCache(epoch));
+  }
+
+  @Override
+  public int size() {
+return cache.size();
+  }
+
+  private void evictCache(long epoch) {
 
 Review comment:
   Yes, the key will be evicted once the double buffer flushes to disk.
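
Since the evictCache body is not included in the quoted diff, a minimal
sketch of what epoch-based eviction could look like, reusing the
epochEntries/cache fields and the Iterator import shown above (the
EpochEntry accessor names getEpoch()/getCachekey() are assumptions, not
taken from the patch):

  private void evictCache(long epoch) {
    // epochEntries is ordered by epoch, so walk from the oldest entry and
    // stop at the first entry newer than the flushed epoch.
    Iterator<EpochEntry<CACHEKEY>> iterator = epochEntries.iterator();
    while (iterator.hasNext()) {
      EpochEntry<CACHEKEY> entry = iterator.next();
      if (entry.getEpoch() > epoch) {
        break;
      }
      cache.remove(entry.getCachekey());
      iterator.remove();
    }
  }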
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 242878)
Time Spent: 8h  (was: 7h 50m)

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8h
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> As with OM HA, we are planning to implement double buffer implementation to 
> flush transaction in a batch, instead of using rocksdb put() for every 
> operation. When this comes into place, we need a cache in OzoneManager HA to 
> handle/serve the requests for validation/returning responses.
>  
> This Jira will implement Cache as an integral part of the table. In this way 
> users using this table does not need to handle like check cache/db. For this, 
> we can update get API in the table to handle the cache.
>  
> This Jira will implement:
>  # Cache as a part of each Table.
>  # Uses this cache in get().
>  # Exposes api for cleanup, add entries to cache.
> Usage to add the entries in to cache will be done in further jira's.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HDDS-1527) HDDS Datanode start fails due to datanode.id file read errors

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1527?focusedWorklogId=242876=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242876
 ]

ASF GitHub Bot logged work on HDDS-1527:


Author: ASF GitHub Bot
Created on: 15/May/19 21:36
Start Date: 15/May/19 21:36
Worklog Time Spent: 10m 
  Work Description: swagle commented on issue #822: HDDS-1527. HDDS 
Datanode start fails due to datanode.id file read error
URL: https://github.com/apache/hadoop/pull/822#issuecomment-492832955
 
 
   Unit test failures are unrelated. @arp7 Can you review this change, please?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 242876)
Time Spent: 0.5h  (was: 20m)

> HDDS Datanode start fails due to datanode.id file read errors
> -
>
> Key: HDDS-1527
> URL: https://issues.apache.org/jira/browse/HDDS-1527
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> * Ozone datanode start fails when there is an existing Datanode.id file which 
> is in a non-yaml format. (The yaml format was added through HDDS-1473.)
> * Further, when 'ozone.scm.datanode.id' is not configured, the datanode.id 
> file is created in a different directory than the fallback dir 
> (ozone.metadata.dirs). Restart fails since it looks for the datanode.id in 
> ozone.metadata.dirs. 
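
Regarding the second bullet: a minimal sketch (using the config keys named
in the description; this is not the actual patch) of resolving the
datanode.id location the same way on write and on read, falling back to the
first ozone.metadata.dirs entry when ozone.scm.datanode.id is unset:

import java.io.File;
import org.apache.hadoop.conf.Configuration;

public final class DatanodeIdPathSketch {

  private DatanodeIdPathSketch() {
  }

  /** Resolve the id file once, so create and restart agree on its location. */
  public static File resolveIdFile(Configuration conf) {
    String explicit = conf.get("ozone.scm.datanode.id");
    if (explicit != null && !explicit.isEmpty()) {
      return new File(explicit);
    }
    String[] metaDirs = conf.getTrimmedStrings("ozone.metadata.dirs");
    if (metaDirs.length == 0) {
      throw new IllegalArgumentException(
          "Neither ozone.scm.datanode.id nor ozone.metadata.dirs is set");
    }
    return new File(metaDirs[0], "datanode.id");
  }
}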



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1499) OzoneManager Cache

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=242874=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242874
 ]

ASF GitHub Bot logged work on HDDS-1499:


Author: ASF GitHub Bot
Created on: 15/May/19 21:31
Start Date: 15/May/19 21:31
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#issuecomment-492831553
 
 
   Thank You @arp7 for offline discussion.
   I have addressed the review comments.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 242874)
Time Spent: 7h 50m  (was: 7h 40m)

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h 50m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> As with OM HA, we are planning to implement double buffer implementation to 
> flush transaction in a batch, instead of using rocksdb put() for every 
> operation. When this comes into place, we need a cache in OzoneManager HA to 
> handle/serve the requests for validation/returning responses.
>  
> This Jira will implement Cache as an integral part of the table. In this way 
> users using this table does not need to handle like check cache/db. For this, 
> we can update get API in the table to handle the cache.
>  
> This Jira will implement:
>  # Cache as a part of each Table.
>  # Uses this cache in get().
>  # Exposes api for cleanup, add entries to cache.
> Usage to add the entries in to cache will be done in further jira's.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1499) OzoneManager Cache

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=242871=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242871
 ]

ASF GitHub Bot logged work on HDDS-1499:


Author: ASF GitHub Bot
Created on: 15/May/19 21:27
Start Date: 15/May/19 21:27
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #798: 
HDDS-1499. OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r284458211
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -245,42 +247,50 @@ protected DBStoreBuilder 
addOMTablesAndCodecs(DBStoreBuilder builder) {
*/
   protected void initializeOmTables() throws IOException {
 userTable =
-this.store.getTable(USER_TABLE, String.class, VolumeList.class);
+this.store.getTable(USER_TABLE, String.class, VolumeList.class,
 
 Review comment:
   Done. Now callers don't need to specify the CacheType.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 242871)
Time Spent: 7h 40m  (was: 7.5h)

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h 40m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> As with OM HA, we are planning to implement double buffer implementation to 
> flush transaction in a batch, instead of using rocksdb put() for every 
> operation. When this comes into place, we need a cache in OzoneManager HA to 
> handle/serve the requests for validation/returning responses.
>  
> This Jira will implement Cache as an integral part of the table. In this way 
> users using this table does not need to handle like check cache/db. For this, 
> we can update get API in the table to handle the cache.
>  
> This Jira will implement:
>  # Cache as a part of each Table.
>  # Uses this cache in get().
>  # Exposes api for cleanup, add entries to cache.
> Usage to add the entries in to cache will be done in further jira's.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1499) OzoneManager Cache

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=242870=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242870
 ]

ASF GitHub Bot logged work on HDDS-1499:


Author: ASF GitHub Bot
Created on: 15/May/19 21:26
Start Date: 15/May/19 21:26
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #798: 
HDDS-1499. OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r284458025
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/cache/TableCache.java
 ##
 @@ -0,0 +1,73 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.utils.db.cache;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Evolving;
+
+/**
+ * Cache used for RocksDB tables.
+ * @param <CACHEKEY>
+ * @param <CACHEVALUE>
+ */
+
+@Private
+@Evolving
+public interface TableCache<CACHEKEY extends CacheKey, CACHEVALUE extends CacheValue> {
+
+  /**
+   * Return the value for the key if it is present, otherwise return null.
+   * @param cacheKey
+   * @return CACHEVALUE
+   */
+  CACHEVALUE get(CACHEKEY cacheKey);
+
+  /**
+   * Add an entry to the cache, if the key already exists it overrides.
+   * @param cacheKey
+   * @param value
+   */
+  void put(CACHEKEY cacheKey, CACHEVALUE value);
+
+  /**
+   * Removes all entries from the cache whose epoch value is less than or
+   * equal to the specified epoch value.
+   * @param epoch
+   */
+  void cleanup(long epoch);
+
+  /**
+   * Return the size of the cache.
+   * @return size
+   */
+  int size();
+
+  /**
+   * Defines type of cache need to be used by OM RocksDB tables.
+   */
+  enum CACHETYPE {
 
 Review comment:
   Removed FullCache; the only implementation of TableCache is now 
PartialCache.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 242870)
Time Spent: 7.5h  (was: 7h 20m)

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> As with OM HA, we are planning to implement double buffer implementation to 
> flush transaction in a batch, instead of using rocksdb put() for every 
> operation. When this comes into place, we need a cache in OzoneManager HA to 
> handle/serve the requests for validation/returning responses.
>  
> This Jira will implement Cache as an integral part of the table. In this way 
> users using this table does not need to handle like check cache/db. For this, 
> we can update get API in the table to handle the cache.
>  
> This Jira will implement:
>  # Cache as a part of each Table.
>  # Uses this cache in get().
>  # Exposes api for cleanup, add entries to cache.
> Usage to add the entries in to cache will be done in further jira's.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1499) OzoneManager Cache

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=242869=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242869
 ]

ASF GitHub Bot logged work on HDDS-1499:


Author: ASF GitHub Bot
Created on: 15/May/19 21:25
Start Date: 15/May/19 21:25
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #798: 
HDDS-1499. OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r284457687
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/cache/FullTableCache.java
 ##
 @@ -0,0 +1,67 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.utils.db.cache;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Evolving;
+
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * This is the full table cache: it uses a ConcurrentHashMap internally and
+ * does not do any eviction or cleanup. This full table cache is meant for
+ * tables where we want to cache the entire table without any cleanup of
+ * the cache.
+ * @param <CACHEKEY>
+ * @param <CACHEVALUE>
+ */
+
+@Private
+@Evolving
+public class FullTableCache

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h 20m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> As with OM HA, we are planning to implement double buffer implementation to 
> flush transaction in a batch, instead of using rocksdb put() for every 
> operation. When this comes into place, we need a cache in OzoneManager HA to 
> handle/serve the requests for validation/returning responses.
>  
> This Jira will implement Cache as an integral part of the table. In this way 
> users using this table does not need to handle like check cache/db. For this, 
> we can update get API in the table to handle the cache.
>  
> This Jira will implement:
>  # Cache as a part of each Table.
>  # Uses this cache in get().
>  # Exposes api for cleanup, add entries to cache.
> Usage to add the entries in to cache will be done in further jira's.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1499) OzoneManager Cache

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=242868=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242868
 ]

ASF GitHub Bot logged work on HDDS-1499:


Author: ASF GitHub Bot
Created on: 15/May/19 21:25
Start Date: 15/May/19 21:25
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #798: 
HDDS-1499. OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r284457474
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/TypedTable.java
 ##
 @@ -31,22 +38,40 @@
  */
 public class TypedTable<KEY, VALUE> implements Table<KEY, VALUE> {
 
-  private Table<byte[], byte[]> rawTable;
+  private final Table<byte[], byte[]> rawTable;
+
+  private final CodecRegistry codecRegistry;
 
-  private CodecRegistry codecRegistry;
+  private final Class<KEY> keyType;
 
-  private Class<KEY> keyType;
+  private final Class<VALUE> valueType;
 
-  private Class<VALUE> valueType;
+  private final TableCache<CacheKey<KEY>, CacheValue<VALUE>> cache;
 
   public TypedTable(
   Table<byte[], byte[]> rawTable,
   CodecRegistry codecRegistry, Class<KEY> keyType,
   Class<VALUE> valueType) {
+this(rawTable, codecRegistry, keyType, valueType,
+null);
+  }
+
+
+  public TypedTable(
+  Table<byte[], byte[]> rawTable,
+  CodecRegistry codecRegistry, Class<KEY> keyType,
+  Class<VALUE> valueType, TableCache.CACHETYPE cachetype) {
 this.rawTable = rawTable;
 this.codecRegistry = codecRegistry;
 this.keyType = keyType;
 this.valueType = valueType;
+if (cachetype == TableCache.CACHETYPE.FULLCACHE) {
 
 Review comment:
   Thanks, Anu for the comment, removed the cache type.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 242868)
Time Spent: 7h 10m  (was: 7h)

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> As with OM HA, we are planning to implement double buffer implementation to 
> flush transaction in a batch, instead of using rocksdb put() for every 
> operation. When this comes into place, we need a cache in OzoneManager HA to 
> handle/serve the requests for validation/returning responses.
>  
> This Jira will implement Cache as an integral part of the table. In this way 
> users using this table does not need to handle like check cache/db. For this, 
> we can update get API in the table to handle the cache.
>  
> This Jira will implement:
>  # Cache as a part of each Table.
>  # Uses this cache in get().
>  # Exposes api for cleanup, add entries to cache.
> Usage to add the entries in to cache will be done in further jira's.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1499) OzoneManager Cache

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=242867=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242867
 ]

ASF GitHub Bot logged work on HDDS-1499:


Author: ASF GitHub Bot
Created on: 15/May/19 21:24
Start Date: 15/May/19 21:24
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #798: 
HDDS-1499. OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r284457204
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStore.java
 ##
 @@ -44,17 +45,20 @@
*/
   Table getTable(String name) throws IOException;
 
+
   /**
* Gets an existing TableStore with implicit key/value conversion.
*
* @param name - Name of the TableStore to get
* @param keyType
* @param valueType
+   * @param cachetype - Type of cache need to be used for this table.
* @return - TableStore.
* @throws IOException on Failure
*/
Table getTable(String name,
-  Class keyType, Class valueType) throws IOException;
+  Class keyType, Class valueType,
+  TableCache.CACHETYPE cachetype) throws IOException;
 
 Review comment:
   Removed the cache type. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 242867)
Time Spent: 7h  (was: 6h 50m)

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> As with OM HA, we are planning to implement double buffer implementation to 
> flush transaction in a batch, instead of using rocksdb put() for every 
> operation. When this comes into place, we need a cache in OzoneManager HA to 
> handle/serve the requests for validation/returning responses.
>  
> This Jira will implement Cache as an integral part of the table. In this way 
> users using this table does not need to handle like check cache/db. For this, 
> we can update get API in the table to handle the cache.
>  
> This Jira will implement:
>  # Cache as a part of each Table.
>  # Uses this cache in get().
>  # Exposes api for cleanup, add entries to cache.
> Usage to add the entries in to cache will be done in further jira's.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1527) HDDS Datanode start fails due to datanode.id file read errors

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1527?focusedWorklogId=242859=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242859
 ]

ASF GitHub Bot logged work on HDDS-1527:


Author: ASF GitHub Bot
Created on: 15/May/19 21:16
Start Date: 15/May/19 21:16
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #822: HDDS-1527. HDDS 
Datanode start fails due to datanode.id file read error
URL: https://github.com/apache/hadoop/pull/822#issuecomment-492826896
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 421 | trunk passed |
   | +1 | compile | 203 | trunk passed |
   | +1 | checkstyle | 56 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 826 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 129 | trunk passed |
   | 0 | spotbugs | 235 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 418 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 396 | the patch passed |
   | +1 | compile | 210 | the patch passed |
   | +1 | javac | 210 | the patch passed |
   | +1 | checkstyle | 61 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 657 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 124 | the patch passed |
   | +1 | findbugs | 428 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 140 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1105 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 5375 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerCommandHandler
 |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-822/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/822 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 30d6347eeaad 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 77170e7 |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-822/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-822/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-822/1/testReport/ |
   | Max. process+thread count | 5289 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-822/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 242859)
Time Spent: 20m  (was: 10m)

> HDDS Datanode start fails due to datanode.id file read errors
> -
>
> Key: HDDS-1527
> URL: https://issues.apache.org/jira/browse/HDDS-1527
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Siddharth Wagle
>Priority: 

[jira] [Commented] (HDDS-750) Write Security audit entry to track activities related to Private Keys and certificates

2019-05-15 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840763#comment-16840763
 ] 

Dinesh Chitlangia commented on HDDS-750:


[~ajayydv] Thanks for filing this. Following our previous discussion, have the 
prerequisite tasks been completed, or are they still in progress?

> Write Security audit entry to track activities related to Private Keys and  
> certificates
> 
>
> Key: HDDS-750
> URL: https://issues.apache.org/jira/browse/HDDS-750
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> Write Security Audit entry to track security tasks performed on SCM, OM and 
> DN.
> Tasks:
> * Private Keys: bootstrap/rotation
> * Certificates: CSR submission, rotation



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-9059) Expose lssnapshottabledir via WebHDFS

2019-05-15 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia resolved HDFS-9059.
-
   Resolution: Fixed
Fix Version/s: 3.1.0

> Expose lssnapshottabledir via WebHDFS
> -
>
> Key: HDFS-9059
> URL: https://issues.apache.org/jira/browse/HDFS-9059
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Fix For: 3.1.0
>
>
> lssnapshottabledir should be exposed via WebHDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12979) StandbyNode should upload FsImage to ObserverNode after checkpointing.

2019-05-15 Thread Plamen Jeliazkov (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840751#comment-16840751
 ] 

Plamen Jeliazkov commented on HDFS-12979:
-

Thanks! I agree, [~xkrogen]. I think it's unnecessary at this point.

My own experience / justification: NameNodes that run alongside other 
applications tend to be on smaller clusters, where "overload" would likely be a 
non-issue. If this ever became something to address, it would be a pretty 
straightforward change anyway.

> StandbyNode should upload FsImage to ObserverNode after checkpointing.
> --
>
> Key: HDFS-12979
> URL: https://issues.apache.org/jira/browse/HDFS-12979
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Konstantin Shvachko
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-12979.001.patch, HDFS-12979.002.patch, 
> HDFS-12979.003.patch, HDFS-12979.004.patch, HDFS-12979.005.patch, 
> HDFS-12979.006.patch
>
>
> ObserverNode does not create checkpoints, so its fsimage file can get very 
> old, making bootstrap of the ObserverNode too long. A StandbyNode should copy 
> the latest fsimage to the ObserverNode(s) along with the ANN.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14490) RBF: Remove unnecessary quota checks

2019-05-15 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840726#comment-16840726
 ] 

Ayush Saxena commented on HDFS-14490:
-

Uploaded patch v3 handling comments.

> RBF: Remove unnecessary quota checks
> 
>
> Key: HDFS-14490
> URL: https://issues.apache.org/jira/browse/HDFS-14490
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14490-HDFS-13891-01.patch, 
> HDFS-14490-HDFS-13891-02.patch, HDFS-14490-HDFS-13891-03.patch
>
>
> Remove unnecessary quota checks for unrelated operations such as setEcPolicy, 
> getEcPolicy and similar  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14490) RBF: Remove unnecessary quota checks

2019-05-15 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14490:

Attachment: HDFS-14490-HDFS-13891-03.patch

> RBF: Remove unnecessary quota checks
> 
>
> Key: HDFS-14490
> URL: https://issues.apache.org/jira/browse/HDFS-14490
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14490-HDFS-13891-01.patch, 
> HDFS-14490-HDFS-13891-02.patch, HDFS-14490-HDFS-13891-03.patch
>
>
> Remove unnecessary quota checks for unrelated operations such as setEcPolicy, 
> getEcPolicy and similar  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14490) RBF: Remove unnecessary quota checks

2019-05-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840712#comment-16840712
 ] 

Hadoop QA commented on HDFS-14490:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
46s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 28m  7s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterQuota |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14490 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968827/HDFS-14490-HDFS-13891-02.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bcd0defe199a 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 5439c4c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26792/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26792/testReport/ |
| Max. process+thread count | 997 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 

[jira] [Work logged] (HDDS-1499) OzoneManager Cache

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=242819=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242819
 ]

ASF GitHub Bot logged work on HDDS-1499:


Author: ASF GitHub Bot
Created on: 15/May/19 19:56
Start Date: 15/May/19 19:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#issuecomment-492800469
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 34 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for branch |
   | +1 | mvninstall | 392 | trunk passed |
   | +1 | compile | 201 | trunk passed |
   | +1 | checkstyle | 52 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 810 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 132 | trunk passed |
   | 0 | spotbugs | 234 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 415 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | +1 | mvninstall | 388 | the patch passed |
   | +1 | compile | 206 | the patch passed |
   | +1 | javac | 206 | the patch passed |
   | +1 | checkstyle | 61 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 664 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 128 | the patch passed |
   | +1 | findbugs | 435 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 147 | hadoop-hdds in the patch failed. |
   | -1 | unit | 846 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 5192 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/798 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux cba0149be2ba 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9569015 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/6/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/6/testReport/ |
   | Max. process+thread count | 5138 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm 
hadoop-ozone/ozone-manager hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 242819)
Time Spent: 6h 50m  (was: 6h 40m)

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> As with OM HA, we are planning to implement double buffer implementation to 
> flush transaction in a batch, instead of using rocksdb put() for every 
> operation. When this comes in to place 

[jira] [Updated] (HDDS-1527) HDDS Datanode start fails due to datanode.id file read errors

2019-05-15 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-1527:
--
Status: Patch Available  (was: Open)

> HDDS Datanode start fails due to datanode.id file read errors
> -
>
> Key: HDDS-1527
> URL: https://issues.apache.org/jira/browse/HDDS-1527
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> * Ozone datanode start fails when there is an existing Datanode.id file which 
> is non yaml format. (Yaml format was added through HDDS-1473.).
> * Further, when 'ozone.scm.datanode.id' is not configured, the datanode.id 
> file is created in a different directory than the fallback dir 
> (ozone.metadata.dirs). Restart fails since it looks for the datanode.id in 
> ozone.metadata.dirs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12979) StandbyNode should upload FsImage to ObserverNode after checkpointing.

2019-05-15 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840701#comment-16840701
 ] 

Erik Krogen commented on HDFS-12979:


Very interesting observation [~zero45]! If I understand correctly, the transfer 
throttling is primarily for the benefit of the active, to avoid overloading it. 
I don't think we have much concern around overloading the standby node, since 
they don't serve traffic. The current approach will still protect the 
active/observer nodes, so I don't think it's necessary to make the throttling 
global. What do you think?

> StandbyNode should upload FsImage to ObserverNode after checkpointing.
> --
>
> Key: HDFS-12979
> URL: https://issues.apache.org/jira/browse/HDFS-12979
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Konstantin Shvachko
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-12979.001.patch, HDFS-12979.002.patch, 
> HDFS-12979.003.patch, HDFS-12979.004.patch, HDFS-12979.005.patch, 
> HDFS-12979.006.patch
>
>
> ObserverNode does not create checkpoints, so its fsimage file can get very 
> old, making bootstrap of the ObserverNode too long. A StandbyNode should copy 
> the latest fsimage to the ObserverNode(s) along with the ANN.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1527) HDDS Datanode start fails due to datanode.id file read errors

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1527?focusedWorklogId=242811=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242811
 ]

ASF GitHub Bot logged work on HDDS-1527:


Author: ASF GitHub Bot
Created on: 15/May/19 19:45
Start Date: 15/May/19 19:45
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #822: HDDS-1527. HDDS 
Datanode start fails due to datanode.id file read error
URL: https://github.com/apache/hadoop/pull/822
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 242811)
Time Spent: 10m
Remaining Estimate: 0h

> HDDS Datanode start fails due to datanode.id file read errors
> -
>
> Key: HDDS-1527
> URL: https://issues.apache.org/jira/browse/HDDS-1527
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> * Ozone datanode start fails when there is an existing Datanode.id file which 
> is non yaml format. (Yaml format was added through HDDS-1473.).
> * Further, when 'ozone.scm.datanode.id' is not configured, the datanode.id 
> file is created in a different directory than the fallback dir 
> (ozone.metadata.dirs). Restart fails since it looks for the datanode.id in 
> ozone.metadata.dirs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1527) HDDS Datanode start fails due to datanode.id file read errors

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1527:
-
Labels: pull-request-available  (was: )

> HDDS Datanode start fails due to datanode.id file read errors
> -
>
> Key: HDDS-1527
> URL: https://issues.apache.org/jira/browse/HDDS-1527
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>
> * Ozone datanode start fails when there is an existing Datanode.id file which 
> is non yaml format. (Yaml format was added through HDDS-1473.).
> * Further, when 'ozone.scm.datanode.id' is not configured, the datanode.id 
> file is created in a different directory than the fallback dir 
> (ozone.metadata.dirs). Restart fails since it looks for the datanode.id in 
> ozone.metadata.dirs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1527) HDDS Datanode start fails due to datanode.id file read errors

2019-05-15 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-1527:
--
Summary: HDDS Datanode start fails due to datanode.id file read errors  
(was: HDDS Datanode start fails due to datanode.id file read errors. )

> HDDS Datanode start fails due to datanode.id file read errors
> -
>
> Key: HDDS-1527
> URL: https://issues.apache.org/jira/browse/HDDS-1527
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
>
> * Ozone datanode start fails when there is an existing Datanode.id file which 
> is non yaml format. (Yaml format was added through HDDS-1473.).
> * Further, when 'ozone.scm.datanode.id' is not configured, the datanode.id 
> file is created in a different directory than the fallback dir 
> (ozone.metadata.dirs). Restart fails since it looks for the datanode.id in 
> ozone.metadata.dirs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14390) Provide kerberos support for AliasMap service used by Provided storage

2019-05-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840698#comment-16840698
 ] 

Hudson commented on HDFS-14390:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16555 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16555/])
HDFS-14390. Provide kerberos support for AliasMap service used by (virajith: 
rev 77170e70d16e309121ca7730974617c05e66d063)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryLevelDBAliasMapServer.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/aliasmap/TestSecureAliasMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/AliasMapProtocolPB.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/InMemoryLevelDBAliasMapClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java


> Provide kerberos support for AliasMap service used by Provided storage
> --
>
> Key: HDFS-14390
> URL: https://issues.apache.org/jira/browse/HDFS-14390
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ashvin
>Assignee: Ashvin
>Priority: Major
> Attachments: HDFS-14390.001.patch, HDFS-14390.002.patch, 
> HDFS-14390.003.patch, HDFS-14390.004.patch, HDFS-14390.005.patch, 
> HDFS-14390.006.patch
>
>
> With {{PROVIDED}} storage (-HDFS-9806)-, HDFS can address data stored in 
> external storage systems. This feature is not supported in a secure HDFS 
> cluster. The {{AliasMap}} service does not support kerberos, and as a result 
> the cluster nodes will fail to communicate with it. This JIRA is to enable 
> kerberos support for the {{AliasMap}} service.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14210) RBF: ACL commands should work over all the destinations

2019-05-15 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14210:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-13891
   Status: Resolved  (was: Patch Available)

Committed.
Thanx [~elgoiri] for the review and [~shubham.dewan] for the report.

> RBF: ACL commands should work over all the destinations
> ---
>
> Key: HDFS-14210
> URL: https://issues.apache.org/jira/browse/HDFS-14210
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Shubham Dewan
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14210-HDFS-13891-04.patch, 
> HDFS-14210-HDFS-13891-05.patch, HDFS-14210-HDFS-13891.002.patch, 
> HDFS-14210-HDFS-13891.003.patch, HDFS-14210.001.patch
>
>
> 1) A mount point with multiple destinations.
> 2) ./bin/hdfs dfs -setfacl -m user:abc:rwx /testacl
> 3) where /testacl => /test1, /test2
> 4) command works for only one destination.
> ACL should be set on both of the destinations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1527) HDDS Datanode start fails due to datanode.id file read errors.

2019-05-15 Thread Siddharth Wagle (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840694#comment-16840694
 ] 

Siddharth Wagle edited comment on HDDS-1527 at 5/15/19 7:29 PM:


The metadata dir fallback issue was not the result of changes that went in with 
this patch, and since HDDS-1474 is reopened, I will not make any modifications 
in this regard. cc:[~vivekratnavel]


was (Author: swagle):
The metadata fallback issue was not the result of changes that went in with 
this patch, and since HDDS-1474 is reopened, I will not make any modifications 
in this regard. cc:[~vivekratnavel]

> HDDS Datanode start fails due to datanode.id file read errors. 
> ---
>
> Key: HDDS-1527
> URL: https://issues.apache.org/jira/browse/HDDS-1527
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
>
> * Ozone datanode start fails when there is an existing Datanode.id file which 
> is non yaml format. (Yaml format was added through HDDS-1473.).
> * Further, when 'ozone.scm.datanode.id' is not configured, the datanode.id 
> file is created in a different directory than the fallback dir 
> (ozone.metadata.dirs). Restart fails since it looks for the datanode.id in 
> ozone.metadata.dirs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14210) RBF: ACL commands should work over all the destinations

2019-05-15 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena reassigned HDFS-14210:
---

Assignee: Ayush Saxena  (was: Shubham Dewan)

> RBF: ACL commands should work over all the destinations
> ---
>
> Key: HDFS-14210
> URL: https://issues.apache.org/jira/browse/HDFS-14210
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Shubham Dewan
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14210-HDFS-13891-04.patch, 
> HDFS-14210-HDFS-13891-05.patch, HDFS-14210-HDFS-13891.002.patch, 
> HDFS-14210-HDFS-13891.003.patch, HDFS-14210.001.patch
>
>
> 1) A mount point with multiple destinations.
> 2) ./bin/hdfs dfs -setfacl -m user:abc:rwx /testacl
> 3) where /testacl => /test1, /test2
> 4) command works for only one destination.
> ACL should be set on both of the destinations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1527) HDDS Datanode start fails due to datanode.id file read errors.

2019-05-15 Thread Siddharth Wagle (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840694#comment-16840694
 ] 

Siddharth Wagle commented on HDDS-1527:
---

The metadata fallback issue was not the result of changes that went in with 
this patch, and since HDDS-1474 is reopened, I will not make any modifications 
in this regard. cc:[~vivekratnavel]

> HDDS Datanode start fails due to datanode.id file read errors. 
> ---
>
> Key: HDDS-1527
> URL: https://issues.apache.org/jira/browse/HDDS-1527
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
>
> * Ozone datanode start fails when there is an existing Datanode.id file which 
> is non yaml format. (Yaml format was added through HDDS-1473.).
> * Further, when 'ozone.scm.datanode.id' is not configured, the datanode.id 
> file is created in a different directory than the fallback dir 
> (ozone.metadata.dirs). Restart fails since it looks for the datanode.id in 
> ozone.metadata.dirs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-15 Thread Siddharth Wagle (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840690#comment-16840690
 ] 

Siddharth Wagle commented on HDDS-1474:
---

*Note*: when 'ozone.scm.datanode.id' is not configured, the datanode.id file is 
created in a different directory than the fallback dir (ozone.metadata.dirs). 
Restart fails since it looks for the datanode.id in ozone.metadata.dirs.

> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: https://issues.apache.org/jira/browse/HDDS-1474
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> Currently, the ozone config "ozone.scm.datanode.id" takes file path as its 
> value. It should instead take dir path as its value and assume a standard 
> filename "datanode.id"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14491) More Clarity on Namenode UI Around Blocks and Replicas

2019-05-15 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng reassigned HDFS-14491:
-

Assignee: Siyao Meng

> More Clarity on Namenode UI Around Blocks and Replicas
> --
>
> Key: HDFS-14491
> URL: https://issues.apache.org/jira/browse/HDFS-14491
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Alan Jackoway
>Assignee: Siyao Meng
>Priority: Minor
>
> I recently deleted more than 1/3 of the files in my HDFS installation. During 
> the process of the delete, I noticed that the NameNode UI near the top has a 
> line like this:
> {quote}44,031,342 files and directories, 38,988,775 blocks = 83,020,117 total 
> filesystem object(s).
> {quote}
> Then lower down had a line like this:
> {quote}Number of Blocks Pending Deletion 4000
> {quote}
> That made it appear that I was deleting more blocks than exist in the 
> cluster. When that number was below the total number of blocks, I briefly 
> believed I had deleted the entire cluster. In reality, the second number 
> includes replicas, while the first does not.
> The UI should be clarified to indicate where "Blocks" includes replicas and 
> where it doesn't. This may also have an impact on the under-replicated count.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12979) StandbyNode should upload FsImage to ObserverNode after checkpointing.

2019-05-15 Thread Plamen Jeliazkov (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840687#comment-16840687
 ] 

Plamen Jeliazkov commented on HDFS-12979:
-

Good work in v006, [~vagarychen]!

I have a concern; perhaps I am wrong: bandwidth throttling. Prior to this 
change there was only one real 'checkpointReceiver', the Active NameNode. Now 
there will be multiple at the same time: the Active and the Observer(s). It 
seems we construct a DataTransferThrottler per TransferImage call, and there is 
no shared throttling state, so simultaneous uploads will use multiples of the 
bandwidth limit set by "dfs.image.transfer.bandwidthPerSec". Does my concern 
make sense?
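To make the concern concrete, here is a small hedged sketch (not the actual TransferFsImage code): with a throttler constructed per upload, N simultaneous uploads can together consume roughly N times the configured limit, whereas a single shared throttler would cap the aggregate rate. The method names below are hypothetical; DataTransferThrottler and its throttle() call are the existing Hadoop utility.

{code}
import org.apache.hadoop.hdfs.util.DataTransferThrottler;

public class ImageUploadThrottlingSketch {
  // stand-in for the value of dfs.image.transfer.bandwidthPerSec
  private static final long LIMIT_BYTES_PER_SEC = 50L * 1024 * 1024;

  // Current pattern (simplified): each upload builds its own throttler, so
  // two concurrent uploads may use up to 2 * LIMIT_BYTES_PER_SEC in total.
  static void uploadWithPerCallThrottler(byte[] chunk) {
    DataTransferThrottler throttler =
        new DataTransferThrottler(LIMIT_BYTES_PER_SEC);
    throttler.throttle(chunk.length);
    // ... write chunk to the receiving NameNode ...
  }

  // Hypothetical alternative: one shared throttler caps the aggregate rate
  // across all destinations (active and observers together).
  private static final DataTransferThrottler SHARED =
      new DataTransferThrottler(LIMIT_BYTES_PER_SEC);

  static void uploadWithSharedThrottler(byte[] chunk) {
    SHARED.throttle(chunk.length);
    // ... write chunk to the receiving NameNode ...
  }
}
{code}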

> StandbyNode should upload FsImage to ObserverNode after checkpointing.
> --
>
> Key: HDFS-12979
> URL: https://issues.apache.org/jira/browse/HDFS-12979
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Konstantin Shvachko
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-12979.001.patch, HDFS-12979.002.patch, 
> HDFS-12979.003.patch, HDFS-12979.004.patch, HDFS-12979.005.patch, 
> HDFS-12979.006.patch
>
>
> ObserverNode does not create checkpoints, so its fsimage file can get very 
> old, making bootstrap of the ObserverNode too long. A StandbyNode should copy 
> the latest fsimage to the ObserverNode(s) along with the ANN.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14390) Provide kerberos support for AliasMap service used by Provided storage

2019-05-15 Thread Virajith Jalaparti (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840685#comment-16840685
 ] 

Virajith Jalaparti commented on HDFS-14390:
---

Committed  [^HDFS-14390.006.patch] to trunk. Thanks [~ashvin] for the patch and 
[~elgoiri] [~daryn] for the review.

> Provide kerberos support for AliasMap service used by Provided storage
> --
>
> Key: HDFS-14390
> URL: https://issues.apache.org/jira/browse/HDFS-14390
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ashvin
>Assignee: Ashvin
>Priority: Major
> Attachments: HDFS-14390.001.patch, HDFS-14390.002.patch, 
> HDFS-14390.003.patch, HDFS-14390.004.patch, HDFS-14390.005.patch, 
> HDFS-14390.006.patch
>
>
> With {{PROVIDED}} storage (-HDFS-9806)-, HDFS can address data stored in 
> external storage systems. This feature is not supported in a secure HDFS 
> cluster. The {{AliasMap}} service does not support kerberos, and as a result 
> the cluster nodes will fail to communicate with it. This JIRA is to enable 
> kerberos support for the {{AliasMap}} service.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14390) Provide kerberos support for AliasMap service used by Provided storage

2019-05-15 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-14390:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Provide kerberos support for AliasMap service used by Provided storage
> --
>
> Key: HDFS-14390
> URL: https://issues.apache.org/jira/browse/HDFS-14390
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ashvin
>Assignee: Ashvin
>Priority: Major
> Attachments: HDFS-14390.001.patch, HDFS-14390.002.patch, 
> HDFS-14390.003.patch, HDFS-14390.004.patch, HDFS-14390.005.patch, 
> HDFS-14390.006.patch
>
>
> With {{PROVIDED}} storage (-HDFS-9806)-, HDFS can address data stored in 
> external storage systems. This feature is not supported in a secure HDFS 
> cluster. The {{AliasMap}} service does not support kerberos, and as a result 
> the cluster nodes will fail to communicate with it. This JIRA is to enable 
> kerberos support for the {{AliasMap}} service.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14494) Move Server logging of StatedId inside receiveRequestState()

2019-05-15 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta reassigned HDFS-14494:
-

Assignee: Shweta

> Move Server logging of StatedId inside receiveRequestState()
> 
>
> Key: HDFS-14494
> URL: https://issues.apache.org/jira/browse/HDFS-14494
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Konstantin Shvachko
>Assignee: Shweta
>Priority: Major
>  Labels: newbie++
>
> HDFS-14270 introduced logging of the client and server StateIds at trace 
> level. Unfortunately one of the arguments, 
> {{alignmentContext.getLastSeenStateId()}}, holds a lock on FSEdits and is 
> evaluated even if the trace logging level is disabled. I propose to move the 
> logging message inside {{GlobalStateIdContext.receiveRequestState()}}, where 
> {{clientStateId}} and {{serverStateId}} are already calculated and can be 
> easily printed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14490) RBF: Remove unnecessary quota checks

2019-05-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840670#comment-16840670
 ] 

Íñigo Goiri commented on HDFS-14490:


A couple minor comments:
* Avoid the empty change in TestRouterQuota#210
* Why does it say wait 2 seconds? It just triggers the invoke.
* Can we add a file to trigger the exception first and intercept it? After that 
we can do all the current tests. We have other tests for this but this makes it 
easier to follow as we first fail one operation but let the others succeed.
* Add a space before the comment in TestRouterQuota#811.

> RBF: Remove unnecessary quota checks
> 
>
> Key: HDFS-14490
> URL: https://issues.apache.org/jira/browse/HDFS-14490
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14490-HDFS-13891-01.patch, 
> HDFS-14490-HDFS-13891-02.patch
>
>
> Remove unnecessary quota checks for unrelated operations such as setEcPolicy, 
> getEcPolicy and similar  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1534) freon should return non-zero exit code on failure

2019-05-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840655#comment-16840655
 ] 

Hadoop QA commented on HDDS-1534:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} yetus {color} | {color:red}  0m  7s{color} 
| {color:red} Unprocessed flag(s): --jenkins --skip-dir {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2693/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1534 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968829/HDDS-1534.001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2693/console |
| versions | git=2.7.4 |
| Powered by | Apache Yetus 0.11.0-SNAPSHOT http://yetus.apache.org |


This message was automatically generated.



> freon should return non-zero exit code on failure
> -
>
> Key: HDDS-1534
> URL: https://issues.apache.org/jira/browse/HDDS-1534
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Attachments: HDDS-1534.001.patch
>
>
> Currently freon does not return any non-zero exit code even on failure.
> The status shows as "Failed" but the exit code is always zero.
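A minimal, hedged sketch of the desired behaviour (the class below is hypothetical, not the actual Freon code): whatever runs the workload should translate a failure count into a non-zero process exit code so scripts and CI can detect the failure.

{code}
// Hypothetical illustration of the fix; not the actual Freon implementation.
public final class ExitCodeSketch {

  public static void main(String[] args) {
    long failures = runLoadGenerator(args);   // stand-in for the freon run
    if (failures > 0) {
      System.err.println("Status: Failed (" + failures + " failures)");
      System.exit(1);    // non-zero exit code so callers can detect failure
    }
    System.out.println("Status: Success");
  }

  private static long runLoadGenerator(String[] args) {
    // placeholder for the actual workload; returns the failure count
    return 0;
  }
}
{code}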



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14494) Move Server logging of StatedId inside receiveRequestState()

2019-05-15 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-14494:
--

 Summary: Move Server logging of StatedId inside 
receiveRequestState()
 Key: HDFS-14494
 URL: https://issues.apache.org/jira/browse/HDFS-14494
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Konstantin Shvachko


HDFS-14270 introduced logging of the client and server StateIds at trace level. 
Unfortunately one of the arguments, {{alignmentContext.getLastSeenStateId()}}, 
holds a lock on FSEdits and is evaluated even if the trace logging level is 
disabled. I propose to move the logging message inside 
{{GlobalStateIdContext.receiveRequestState()}}, where {{clientStateId}} and 
{{serverStateId}} are already calculated and can be easily printed.
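A small hedged sketch of the issue and the proposed move (class and helper names are hypothetical; the real code lives in GlobalStateIdContext): with SLF4J parameterized logging, the argument expression is still evaluated even when TRACE is off, so the expensive getLastSeenStateId() call runs regardless; logging inside the method where both ids are already computed avoids that.

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class StateIdLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(StateIdLoggingSketch.class);

  // Problematic pattern: getLastSeenStateId() (which takes a lock on FSEdits
  // in the real code) is evaluated to build the argument list even when the
  // TRACE level is disabled.
  void logAtCallSite(long clientStateId) {
    LOG.trace("client={} server={}", clientStateId, getLastSeenStateId());
  }

  // Proposed pattern: log inside the method where both ids are already
  // computed for other reasons, so no extra work happens just for logging.
  long receiveRequestState(long clientStateId) {
    long serverStateId = getLastSeenStateId();  // needed here anyway
    if (LOG.isTraceEnabled()) {
      LOG.trace("clientStateId={} serverStateId={}",
          clientStateId, serverStateId);
    }
    return serverStateId;
  }

  private long getLastSeenStateId() {
    return 0L;  // placeholder for the real lookup
  }
}
{code}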



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1548) Jenkins precommit build is broken for Ozone

2019-05-15 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840653#comment-16840653
 ] 

Eric Yang commented on HDDS-1548:
-

Yetus master has recently committed YETUS-873, which makes parameter handling 
stricter. By default, if unknown parameters are passed, Yetus will exit. With 
the recent trimming down of parameters, some parameters are no longer valid for 
the latest version of Yetus. This is the root cause of the Ozone pre-commit 
build regressions.

> Jenkins precommit build is broken for Ozone
> ---
>
> Key: HDDS-1548
> URL: https://issues.apache.org/jira/browse/HDDS-1548
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Eric Yang
>Priority: Blocker
>
> The HDDS Jenkins precommit build has been broken since Build 2685 (May 13, 
> 2019, 11:00:40 PM).  It looks like the precommit build depends on Yetus trunk.  
> This is extremely risky: when Yetus trunk breaks, it also breaks the precommit 
> build for Ozone.  The precommit build must use a released version of Yetus to 
> prevent cascaded regressions.
> A second problem is that the precommit build also depends on Marton's own 
> personal website to download ozone.sh.  It would be best to version-control 
> ozone.sh in the hadoop-ozone/dev-support directory to prevent unpredictable 
> changes to ozone.sh at different times, which can make precommit build reports 
> nondeterministic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1534) freon should return non-zero exit code on failure

2019-05-15 Thread Nilotpal Nandi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nilotpal Nandi updated HDDS-1534:
-
Attachment: HDDS-1534.001.patch

> freon should return non-zero exit code on failure
> -
>
> Key: HDDS-1534
> URL: https://issues.apache.org/jira/browse/HDDS-1534
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Attachments: HDDS-1534.001.patch
>
>
> Currently freon does not return any non-zero exit code even on failure.
> The status shows as "Failed" but the exit code is always zero.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1534) freon should return non-zero exit code on failure

2019-05-15 Thread Nilotpal Nandi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nilotpal Nandi updated HDDS-1534:
-
Status: Patch Available  (was: Open)

> freon should return non-zero exit code on failure
> -
>
> Key: HDDS-1534
> URL: https://issues.apache.org/jira/browse/HDDS-1534
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Attachments: HDDS-1534.001.patch
>
>
> Currently freon does not return any non-zero exit code even on failure.
> The status shows as "Failed" but the exit code is always zero.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14210) RBF: ACL commands should work over all the destinations

2019-05-15 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14210:

Summary: RBF: ACL commands should work over all the destinations  (was: 
RBF: ModifyACL should work over all the destinations)

> RBF: ACL commands should work over all the destinations
> ---
>
> Key: HDFS-14210
> URL: https://issues.apache.org/jira/browse/HDFS-14210
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Shubham Dewan
>Assignee: Shubham Dewan
>Priority: Major
> Attachments: HDFS-14210-HDFS-13891-04.patch, 
> HDFS-14210-HDFS-13891-05.patch, HDFS-14210-HDFS-13891.002.patch, 
> HDFS-14210-HDFS-13891.003.patch, HDFS-14210.001.patch
>
>
> 1) A mount point with multiple destinations.
> 2) ./bin/hdfs dfs -setfacl -m user:abc:rwx /testacl
> 3) where /testacl => /test1, /test2
> 4) command works for only one destination.
> ACL should be set on both of the destinations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14447) RBF: Router should support RefreshUserMappingsProtocol

2019-05-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840643#comment-16840643
 ] 

Íñigo Goiri commented on HDFS-14447:


Can you fix the whitespace?
There are a couple of things broken:
{code}
assertNotEquals("Should be different group: " + g1.get(i) + " and " + 
g3.get(i), g1.get(i).equals(g3.get(i)));
{code}
It's doing a comparison with the string; this should be:
{code}
assertNotEquals("Should be different group: " + g1.get(i) + " and " + 
g3.get(i), g1.get(i), g3.get(i));
{code}

Then, the log line like:
{code}
LOG.info(g4.toString());
{code}
Should be:
{code}
LOG.info("Group 4: {}", g4);
{code}

In the writer in 362, it would be better to do 
{{writer.println(newResource.toString())}} to make sure the StringBuilder's 
{{toString()}} is used.

The javadoc in {{testRefreshSuperUserGroupsConfigurationInternal()}} should not 
finish with a double asterisk.

> RBF: Router should support RefreshUserMappingsProtocol
> --
>
> Key: HDFS-14447
> URL: https://issues.apache.org/jira/browse/HDFS-14447
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Affects Versions: 3.1.0
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14447-HDFS-13891.01.patch, 
> HDFS-14447-HDFS-13891.02.patch, HDFS-14447-HDFS-13891.03.patch, 
> HDFS-14447-HDFS-13891.04.patch, HDFS-14447-HDFS-13891.05.patch, 
> HDFS-14447-HDFS-13891.06.patch, HDFS-14447-HDFS-13891.07.patch, error.png
>
>
> HDFS with RBF
> We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin 
> -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, and 
> it throws "Unknown protocol: ...RefreshUserMappingProtocol".
> RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser 
> client would be refused when trying to impersonate. As shown in the screenshot.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1400) Convert all OM Key related operations to HA model

2019-05-15 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840642#comment-16840642
 ] 

Bharat Viswanadham commented on HDDS-1400:
--

Moving this out, as the design for handling write requests has changed.

> Convert all OM Key related operations to HA model
> -
>
> Key: HDDS-1400
> URL: https://issues.apache.org/jira/browse/HDDS-1400
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In this jira, we shall convert all OM-related operations to the OM HA model, 
> which is a 2-step process:
>  # StartTransaction, where we validate the request, check for any errors, and 
> return the response.
>  # ApplyTransaction, where the original OM request has a response that needs 
> to be applied to the OM DB. This step just applies that response to the OM DB.
> In this way, requests that fail validation (for example, volume not found, or 
> key not found during rename) are caught during StartTransaction, and if 
> validation fails the request is never written to the Raft log.
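A minimal hedged sketch of the two-step model described above (all names are hypothetical stand-ins, not the actual OzoneManager classes): validation happens in the first step, before anything reaches the Raft log, and the second step only applies an already-validated response to the DB.

{code}
import java.io.IOException;

// Hypothetical sketch of the 2-step request handling; not actual OM code.
interface OMRequestHandler {
  /** Step 1: validate the request and build the response (no DB writes). */
  OMResponse startTransaction(OMRequest request) throws IOException;

  /** Step 2: apply the already-validated response to the OM DB. */
  void applyTransaction(OMResponse response) throws IOException;
}

class RenameKeyHandler implements OMRequestHandler {
  @Override
  public OMResponse startTransaction(OMRequest request) throws IOException {
    if (!keyExists(request.getKey())) {
      // Fails here, so the request is never written to the Raft log.
      throw new IOException("KEY_NOT_FOUND: " + request.getKey());
    }
    return new OMResponse(request.getKey(), request.getNewName());
  }

  @Override
  public void applyTransaction(OMResponse response) {
    // Write the prepared response to the OM DB (e.g. via a RocksDB batch).
  }

  private boolean keyExists(String key) {
    return true;  // placeholder lookup
  }
}

class OMRequest {
  private final String key;
  private final String newName;
  OMRequest(String key, String newName) { this.key = key; this.newName = newName; }
  String getKey() { return key; }
  String getNewName() { return newName; }
}

class OMResponse {
  private final String key;
  private final String newName;
  OMResponse(String key, String newName) { this.key = key; this.newName = newName; }
}
{code}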



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1400) Convert all OM Key related operations to HA model

2019-05-15 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1400:
-
Status: Open  (was: Patch Available)

> Convert all OM Key related operations to HA model
> -
>
> Key: HDDS-1400
> URL: https://issues.apache.org/jira/browse/HDDS-1400
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In this jira, we shall convert all OM key related operations to the OM HA model, 
> which is a 2-step process:
>  # StartTransaction, where we validate the request, check for any errors, and 
> return the response.
>  # ApplyTransaction, where the original OM request produces a response that 
> needs to be applied to the OM DB. This step only applies that response to the OM DB.
> In this way, requests that fail validation, for example because a volume is not 
> found or a key is not found during rename, are rejected during StartTransaction, 
> and failed requests are never written to the Raft log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-05-15 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1406:
-
Status: Patch Available  (was: Open)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline, which internally uses the 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.
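A minimal sketch of the proposed change, assuming the pipeline-creation work can be 
submitted to a dedicated pool; the datanode list and class name here are illustrative, 
not the actual RatisPipelineUtils code:

{code}
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ForkJoinPool;

public class PipelineCreationPoolExample {
  public static void main(String[] args) throws Exception {
    List<String> datanodes = Arrays.asList("dn1", "dn2", "dn3");

    // A dedicated pool sized to the number of processors, instead of the shared
    // ForkJoinPool.commonPool() that parallelStream() uses by default.
    ForkJoinPool pool =
        new ForkJoinPool(Runtime.getRuntime().availableProcessors());
    try {
      pool.submit(() ->
          datanodes.parallelStream().forEach(dn ->
              System.out.println("creating pipeline on " + dn + " in "
                  + Thread.currentThread().getName()))
      ).get();
    } finally {
      pool.shutdown();
    }
  }
}
{code}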



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-15 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1512:
-
Target Version/s: 0.5.0

> Implement DoubleBuffer in OzoneManager
> --
>
> Key: HDDS-1512
> URL: https://issues.apache.org/jira/browse/HDDS-1512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This Jira is created to implement DoubleBuffer in OzoneManager to flush 
> transactions to OM DB.
>  
> h2. Flushing Transactions to RocksDB:
> We propose using an implementation similar to the HDFS EditsDoubleBuffer. We 
> shall flush RocksDB transactions in batches, instead of the current approach of 
> calling rocksdb.put() after every operation. At any given time only one batch 
> will be outstanding for flush, while newer transactions accumulate in memory to 
> be flushed later.
>  
> The DoubleBuffer holds 2 buffers: one is the currentBuffer and the other is the 
> readyBuffer. We add an entry to the current buffer and check whether another 
> flush call is outstanding. If not, we flush to disk; otherwise we add entries to 
> the other buffer while the sync is in progress.
>  
> While a sync is in progress, new requests go to the other buffer, and when we 
> can sync we use a *RocksDB batch commit to sync to disk, instead of 
> rocksdb put.*
>  
> Note: If the flush to disk fails on any OM, we shall terminate the 
> OzoneManager so that the OM DBs do not diverge. A flush failure should be 
> considered a catastrophic failure.
>  
> The scope of this Jira is to add the DoubleBuffer implementation; integrating it 
> into the current OM will be done in further jiras.
>  
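A minimal sketch of the double-buffer idea described above, using a hypothetical class 
with a batch-commit callback standing in for the RocksDB write batch:

{code}
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Consumer;

// Hypothetical double buffer: add() fills currentBuffer while a single flusher
// thread swaps the buffers and commits one batch at a time.
public class SimpleDoubleBuffer<T> {
  private Queue<T> currentBuffer = new ArrayDeque<>();
  private Queue<T> readyBuffer = new ArrayDeque<>();

  public synchronized void add(T entry) {
    currentBuffer.add(entry);
    notifyAll(); // wake the flusher thread if it is waiting for work
  }

  /** Called in a loop by a dedicated flusher thread. */
  public void flushOnce(Consumer<Queue<T>> batchCommit) throws InterruptedException {
    synchronized (this) {
      while (currentBuffer.isEmpty()) {
        wait();
      }
      // Swap the buffers so new entries keep accumulating while we flush.
      Queue<T> tmp = currentBuffer;
      currentBuffer = readyBuffer;
      readyBuffer = tmp;
    }
    batchCommit.accept(readyBuffer); // e.g. a RocksDB write-batch commit
    readyBuffer.clear();
  }
}
{code}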



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1499) OzoneManager Cache

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=242729=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242729
 ]

ASF GitHub Bot logged work on HDDS-1499:


Author: ASF GitHub Bot
Created on: 15/May/19 18:29
Start Date: 15/May/19 18:29
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#issuecomment-492770431
 
 
   Thank You @arp7 for the review.
   I have addressed a few of the review comments; for the others I have replied 
inline.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 242729)
Time Spent: 6h 40m  (was: 6.5h)

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> For OM HA, we are planning a double buffer implementation to flush 
> transactions in batches instead of using rocksdb put() for every 
> operation. When this is in place we need a cache in OzoneManager HA to 
> handle/serve requests for validation and for returning responses.
>  
> This Jira will implement the cache as an integral part of the table, so that 
> users of the table do not need to check the cache and the DB separately. For this, 
> we can update the get API in the table to consult the cache.
>  
> This Jira will implement:
>  # A cache as a part of each Table.
>  # Use of this cache in get().
>  # APIs for cleanup and for adding entries to the cache.
> Adding entries to the cache will be done in further jiras.
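A minimal sketch of a table get() that consults the cache first, with hypothetical 
class and method names (the actual TypedTable change is in the pull request):

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical table cache: get() consults the cache before the DB table, and a
// DELETED marker means the key is pending removal from the DB, so null is returned.
public class CachedTable<K, V> {

  enum Op { CREATED, DELETED }

  static final class CacheValue<V> {
    final V value;
    final Op op;
    CacheValue(V value, Op op) { this.value = value; this.op = op; }
  }

  private final Map<K, CacheValue<V>> cache = new ConcurrentHashMap<>();
  private final Map<K, V> dbTable = new ConcurrentHashMap<>(); // stands in for RocksDB

  public V get(K key) {
    CacheValue<V> cached = cache.get(key);
    if (cached == null) {
      return dbTable.get(key); // cache miss: fall back to the DB table
    }
    return cached.op == Op.DELETED ? null : cached.value;
  }

  /** Exposed so callers can populate the cache after a successful transaction. */
  public void addCacheEntry(K key, V value, Op op) {
    cache.put(key, new CacheValue<>(value, op));
  }
}
{code}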



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1499) OzoneManager Cache

2019-05-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=242727=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242727
 ]

ASF GitHub Bot logged work on HDDS-1499:


Author: ASF GitHub Bot
Created on: 15/May/19 18:28
Start Date: 15/May/19 18:28
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #798: 
HDDS-1499. OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r284392124
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/TypedTable.java
 ##
 @@ -69,8 +94,40 @@ public boolean isEmpty() throws IOException {
 return rawTable.isEmpty();
   }
 
+  /**
+   * Returns the value mapped to the given key in byte array or returns null
+   * if the key is not found.
+   *
+   * First it will check from cache, if it has entry return the value
+   * otherwise, get from the RocksDB table.
+   *
+   * @param key metadata key
+   * @return VALUE
+   * @throws IOException
+   */
   @Override
   public VALUE get(KEY key) throws IOException {
+// Here the metadata lock will guarantee that cache is not updated for same
+// key during get key.
+if (cache != null) {
+  CacheValue cacheValue = cache.get(new CacheKey<>(key));
+  if (cacheValue == null) {
+return getFromTable(key);
+  } else {
+// Doing this because, if the Cache Value Last operation is deleted
+// means it will eventually removed from DB. So, we should return null.
+if (cacheValue.getLastOperation() != CacheValue.OperationType.DELETED) {
+  return cacheValue.getValue();
+} else {
+  return null;
 
 Review comment:
   Done.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 242727)
Time Spent: 6.5h  (was: 6h 20m)

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> For OM HA, we are planning a double buffer implementation to flush 
> transactions in batches instead of using rocksdb put() for every 
> operation. When this is in place we need a cache in OzoneManager HA to 
> handle/serve requests for validation and for returning responses.
>  
> This Jira will implement the cache as an integral part of the table, so that 
> users of the table do not need to check the cache and the DB separately. For this, 
> we can update the get API in the table to consult the cache.
>  
> This Jira will implement:
>  # A cache as a part of each Table.
>  # Use of this cache in get().
>  # APIs for cleanup and for adding entries to the cache.
> Adding entries to the cache will be done in further jiras.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14490) RBF: Remove unnecessary quota checks

2019-05-15 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840640#comment-16840640
 ] 

Ayush Saxena commented on HDFS-14490:
-

Thanx [~elgoiri] for the review.
Added a couple of test cases to verify.

> RBF: Remove unnecessary quota checks
> 
>
> Key: HDFS-14490
> URL: https://issues.apache.org/jira/browse/HDFS-14490
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14490-HDFS-13891-01.patch, 
> HDFS-14490-HDFS-13891-02.patch
>
>
> Remove unnecessary quota checks for unrelated operations such as setEcPolicy, 
> getEcPolicy, and similar operations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14490) RBF: Remove unnecessary quota checks

2019-05-15 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14490:

Attachment: HDFS-14490-HDFS-13891-02.patch

> RBF: Remove unnecessary quota checks
> 
>
> Key: HDFS-14490
> URL: https://issues.apache.org/jira/browse/HDFS-14490
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14490-HDFS-13891-01.patch, 
> HDFS-14490-HDFS-13891-02.patch
>
>
> Remove unnecessary quota checks for unrelated operations such as setEcPolicy, 
> getEcPolicy, and similar operations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1548) Jenkins precommit build is broken for Ozone

2019-05-15 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840639#comment-16840639
 ] 

Eric Yang commented on HDDS-1548:
-

[~elek] Can we make the following changes:

{code}
- curl -L https://api.github.com/repos/apache/yetus/tarball/master -o 
yetus.tar.gz
- wget 
https://raw.githubusercontent.com/elek/yetus/ozone-personality/precommit/src/main/shell/personality/hadoop.sh
 -O "$WORKSPACE/ozone.sh"
+ curl -L https://api.github.com/repos/apache/yetus/tarball/rel/0.8.0 -o 
yetus.tar.gz
...
- YETUS_ARGS+=("--personality=$WORKSPACE/ozone.sh")
{code}

It is better to make sure that shell scripts are written using consistent upper and 
lower case and to remove indirect references, to prevent shellshock-style attacks.
Therefore, the disabled shellcheck rule SC2034 should be checked.

> Jenkins precommit build is broken for Ozone
> ---
>
> Key: HDDS-1548
> URL: https://issues.apache.org/jira/browse/HDDS-1548
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Eric Yang
>Priority: Blocker
>
> The HDDS Jenkins precommit build has been broken since Build 2685 (May 13, 
> 2019, 11:00:40 PM).  It looks like the precommit build depends on Yetus trunk.  
> This is extremely risky: when Yetus trunk breaks, it also breaks the precommit 
> build for Ozone.  The precommit build must use a released version of Yetus to 
> prevent cascading regressions.
> A second problem is that the precommit build also depends on Marton's own personal 
> website to download ozone.sh.  It would be best to version-control ozone.sh 
> in the hadoop-ozone/dev-support directory to prevent unpredictable changes to 
> ozone.sh over time, which can make precommit build reports 
> non-deterministic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


